JavaFX 2 GameTutorial Part 3

Introduction

This is part 3 of a six-part series on building a game with JavaFX 2. If you've missed Part 1 and Part 2, I encourage you to go through them before beginning this tutorial. To recap Part 2: I discussed the inner workings of a game loop, where we used an animation (JavaFX Timeline) to update sprites, check collisions, and clean up game world elements. I then felt compelled to create a simple game engine to ease the development of 2D games. This tutorial is about using that game engine and demonstrating input using your mouse and keyboard. I will give you some background history, event handling fundamentals, a demo game, and finally the implementation. The demo showcases a spaceship capable of shooting at floating spheres, similar to the video game Asteroids. If you want to run the demo, scroll down and click on the WebStart button below. Please read the requirements before launching the game.

History

Back in the day (during the 80s), as a kid growing up, there were arcade centers, bowling alleys, pizza parlors, and 7-Eleven stores where I spent huge amounts of time putting quarters on the glass display areas to be next in line behind the guy who was currently playing an intense video game. As everyone crowded around to watch him beat the all-time high score, we all cheered as we witnessed greatness. One of those incredibly awesome arcade games was 'Asteroids', created by Atari Inc. (to play, visit play.vg). Speaking of high scores, not too many folks know that Scott Safran (February 3, 1967 – March 27, 1989) held the highest Asteroids score of all time. He achieved this at his local 7-Eleven convenience store by playing for approximately twenty hours nonstop. Later in life (while still young), he passed away in a tragic accident on March 27, 1989. In honor of Scott, I created this tutorial. I hope people will remember him as one of the greatest video gamers of all time (and, I'm sure, a good brother and son). 
Regarding the game Asteroids: it used vector-based hardware to render shapes, as opposed to raster graphics (bitmaps). As a side note, Space Invaders (created by Taito and distributed in the US by Midway) was made using raster graphics. It's exciting to point out that there are discussions about JavaFX 2.x gaining a bitmap-based JavaFX Canvas node, which would provide raster graphics and let developers take advantage of pixel-level manipulation. I am still amazed at the construction of those arcade-style cabinets, which housed the CRT, motherboard, and controllers (input devices) such as buttons, joysticks, track balls, and turning knobs.

Classic Arcade Games

Below are some classic arcade games, grouped by their input devices:

Buttons only: Asteroids, Space Invaders, Rip Off, Phoenix
Joystick only: Q*bert, Pac-Man
Turn knob only: Pong
Track ball only: Marble Madness
Steering column and buttons: Star Wars, Pole Position, Spy Hunter
Bike handlebars: Stunt Cycle, Paper Boy
Buttons and throttle bar: Lunar Lander
Periscope and buttons: Sea Wolf
Buttons and yoke: Tron, Battle Zone
Buttons, turn knob, and yoke: Star Trek, Tempest
Buttons and track ball: Missile Command, Centipede
Buttons and joystick: Defender, Gauntlet, Frogger, Joust, Berzerk, Mario Bros., Donkey Kong, Xevious, Galaga, Kung Fu, Contra, Street Fighter, Double Dragon, Ninja Magic (or Spirit), Dig Dug, Dragon's Lair

Input (Mouse, Keyboard)

Leaving the past behind, we now encounter new kinds of input devices such as touch screens, accelerometers, infrared receivers, cameras, etc. The most common input devices on the desktop today are the mouse and keyboard. Of course, touch screens are all the rage on mobile devices and tablets; however, in this tutorial we will focus only on the mouse and keyboard as inputs to control your game. Based on the JavaFX roadmap, multi-touch input is in the works (by the time you read this it's already implemented). 
When intercepting keyboard and mouse events, JavaFX 2.x provides many event types, giving the developer an opportunity to implement event handlers that intercept the triggered events. The JavaFX 2.x API for a Node or Scene contains many methods with the prefix 'on', such as onMousePressedProperty() or onKeyPressedProperty(). Whenever you use these, you simply implement the handle() method, using Java generics to specify the event object to be passed in for interrogation. So, when you instantiate an EventHandler<MouseEvent>, you implement a handle() method that takes a MouseEvent as its parameter.

The code snippets shown below add two event handlers to the JavaFX Scene. The first handler responds to mouse events. In our simple game, when a mouse press occurs, this handler responds by firing the weapon or navigating the ship. The second handler responds to key events. When a key is pressed, this handler processes KeyEvent objects. In our game, the keystroke '2' changes your secondary weapon into a bigger (but slower) blaster. Any other keystroke defaults back to the smaller (faster) blaster.

Move ship and fire weapon:

EventHandler<MouseEvent> fireOrMove = new EventHandler<MouseEvent>() {
    @Override
    public void handle(MouseEvent event) {
        if (event.getButton() == MouseButton.PRIMARY) {
            // Fire weapon systems. On Windows, the left mouse button.
        } else if (event.getButton() == MouseButton.SECONDARY) {
            // Navigate ship thrust. On Windows, the right mouse button.
        }
    }
};
primaryStage.getScene().setOnMousePressed(fireOrMove);

Changing the weapon:

EventHandler<KeyEvent> changeWeapons = new EventHandler<KeyEvent>() {
    @Override
    public void handle(KeyEvent event) {
        myShip.changeWeapon(event.getCode());
    }
};
primaryStage.getScene().setOnKeyPressed(changeWeapons);

JavaFX 2 Input Demo – 'The Expanse'

The simple demo game will be a mix between StarCraft and Asteroids. 
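Before moving on to the demo, the handler pattern above can be sketched without the framework. JavaFX's EventHandler<T> is just a single-method callback interface: the Scene keeps whatever handler you register and invokes handle() with the event object when input arrives. The Handler, Dispatcher, and Button names below are made-up stand-ins for illustration, not JavaFX API:

```java
import java.util.ArrayList;
import java.util.List;

public class HandlerSketch {
    // Stand-in for javafx.event.EventHandler<T>: one callback method.
    interface Handler<T> {
        void handle(T event);
    }

    // Stand-in for a mouse-press event carrying which button was pressed.
    enum Button { PRIMARY, SECONDARY }

    // Minimal "scene": remembers registered handlers and fires them on input.
    static class Dispatcher {
        private final List<Handler<Button>> pressHandlers = new ArrayList<>();

        void setOnMousePressed(Handler<Button> h) {
            pressHandlers.add(h);
        }

        void firePress(Button b) {
            for (Handler<Button> h : pressHandlers) {
                h.handle(b);
            }
        }
    }

    static String lastAction = "";

    public static void main(String[] args) {
        Dispatcher scene = new Dispatcher();
        // Same shape as the tutorial's fireOrMove handler: branch on the button.
        scene.setOnMousePressed(new Handler<Button>() {
            @Override
            public void handle(Button event) {
                if (event == Button.PRIMARY) {
                    lastAction = "fire";   // fire weapon systems
                } else if (event == Button.SECONDARY) {
                    lastAction = "thrust"; // navigate ship thrust
                }
            }
        });

        scene.firePress(Button.PRIMARY);
        System.out.println(lastAction); // fire
        scene.firePress(Button.SECONDARY);
        System.out.println(lastAction); // thrust
    }
}
```

The point of the sketch is only the registration/dispatch shape; the real Scene does the firing for you whenever the OS delivers an input event.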
When using the mouse to navigate, the ship will move somewhat like StarCraft's Battlecruiser. If you remember from Part 2 of this series, I created spheres bouncing around. I reused the code from Part 2's 'Atom Smasher' so the spheres act as asteroids like in the famous arcade game, except in this game you cannot be harmed at all. The objective is to fire your weapon at the spheres, which implode upon impact, before they hit other spheres. Because this is a simple tutorial (a game in its early stages of development), it doesn't keep score. I encourage you to download the code from GitHub and enhance the game. Later, you will see a high-level UML class diagram describing the classes that make up the game. For the sake of brevity I will not go through each class in great detail, but I trust you will visit GitHub for all the demos and source code: https://github.com/carldea/JFXGen

Requirements:
- Java 7 or later
- JavaFX 2.1 or later
- Windows XP or later (should be available soon for Linux/Mac OS)

A simple Asteroids-type game called 'The Expanse'. Instructions:
- Right mouse click (on Windows) to fly the ship.
- Left mouse click (on Windows) to fire the weapon.
- Press the '2' key to change to large missiles (blue circular projectiles).
- Any other key press defaults to smaller missiles (red circular projectiles).

Part 3 'The Expanse'

Shown below is figure 2, a high-level class diagram depicting all the classes created for this demo. The GameWorld and Sprite classes are part of the game engine from the previous post. The rest of the classes are new and make up this demo.

InputPart3

The InputPart3 class is the driver, or main JavaFX application, that runs the game. It creates a GameWorld object, initializes it, and starts the game loop. Shown below is the source code of the main JavaFX application InputPart3. 
package carlfx.demos.navigateship;

import carlfx.gameengine.GameWorld;
import javafx.application.Application;
import javafx.stage.Stage;

/**
 * The main driver of the game.
 * @author cdea
 */
public class InputPart3 extends Application {

    GameWorld gameWorld = new TheExpanse(59, "JavaFX 2 GameTutorial Part 3 - Input");

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        launch(args);
    }

    @Override
    public void start(Stage primaryStage) {
        // setup title, scene, stats, controls, and actors.
        gameWorld.initialize(primaryStage);

        // kick off the game loop
        gameWorld.beginGameLoop();

        // display window
        primaryStage.show();
    }
}

TheExpanse

The TheExpanse class inherits from the GameWorld class. It is practically identical to Part 2's 'AtomSmasher', where the driver application invokes the GameWorld instance's initialize() method to set up all the game elements, such as the input, the spaceship, and those pesky floating spheres. The job of this class is to make sure the asteroids (spheres) bounce off the walls and to remove any missiles that reach the edge of the screen. Its main responsibility is to manage the assets and create new levels: when no objects are moving and the player moves the ship on the screen, new spheres are generated for the next level.

The key takeaway from this class is the setupInput() method, which I created to establish the event handlers that listen for key events and mouse events. 
package carlfx.demos.navigateship;

import carlfx.gameengine.GameWorld;
import carlfx.gameengine.Sprite;
import javafx.event.ActionEvent;
import javafx.event.EventHandler;
import javafx.scene.CacheHint;
import javafx.scene.Group;
import javafx.scene.Node;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.control.Label;
import javafx.scene.control.TextField;
import javafx.scene.input.KeyEvent;
import javafx.scene.input.MouseButton;
import javafx.scene.input.MouseEvent;
import javafx.scene.layout.HBox;
import javafx.scene.layout.VBox;
import javafx.scene.paint.Color;
import javafx.scene.shape.Circle;
import javafx.stage.Stage;

import java.util.Random;

/**
 * This is a simple game world simulating a bunch of spheres looking
 * like atomic particles colliding with each other. When the game loop begins
 * the user will notice random spheres (atomic particles) floating and
 * colliding. The user navigates his/her ship by right-clicking the mouse to
 * thrust forward and left-clicking to fire the weapon at atoms.
 *
 * @author cdea
 */
public class TheExpanse extends GameWorld {

    // mouse pt label
    Label mousePtLabel = new Label();

    // mouse press pt label
    Label mousePressPtLabel = new Label();

    TextField xCoordinate = new TextField("234");
    TextField yCoordinate = new TextField("200");
    Button moveShipButton = new Button("Rotate ship");

    Ship myShip = new Ship();

    public TheExpanse(int fps, String title) {
        super(fps, title);
    }

    /**
     * Initialize the game world by adding sprite objects.
     *
     * @param primaryStage The game window or primary stage. 
     */
    @Override
    public void initialize(final Stage primaryStage) {
        // Sets the window title
        primaryStage.setTitle(getWindowTitle());
        //primaryStage.setFullScreen(true);

        // Create the scene
        setSceneNodes(new Group());
        setGameSurface(new Scene(getSceneNodes(), 800, 600));
        getGameSurface().setFill(Color.BLACK);
        primaryStage.setScene(getGameSurface());

        // Setup Game input
        setupInput(primaryStage);

        // Create many spheres
        generateManySpheres(2);

        // Display the number of spheres visible.
        // Create a button to add more spheres.
        // Create a button to freeze the game loop.
        //final Timeline gameLoop = getGameLoop();

        getSpriteManager().addSprites(myShip);
        getSceneNodes().getChildren().add(myShip.node);

        // mouse point
        VBox stats = new VBox();

        HBox row1 = new HBox();
        mousePtLabel.setTextFill(Color.WHITE);
        row1.getChildren().add(mousePtLabel);
        HBox row2 = new HBox();
        mousePressPtLabel.setTextFill(Color.WHITE);
        row2.getChildren().add(mousePressPtLabel);

        stats.getChildren().add(row1);
        stats.getChildren().add(row2);

        // mouse point
        HBox enterCoord1 = new HBox();
        enterCoord1.getChildren().add(xCoordinate);
        enterCoord1.getChildren().add(yCoordinate);
        enterCoord1.getChildren().add(moveShipButton);
        stats.getChildren().add(enterCoord1);
        moveShipButton.setOnAction(new EventHandler<ActionEvent>() {
            @Override
            public void handle(ActionEvent actionEvent) {
                double x = Double.parseDouble(xCoordinate.getText());
                double y = Double.parseDouble(yCoordinate.getText());
                myShip.plotCourse(x, y, false);
            }
        });

        // ===================================================
        // Debugging purposes
        // uncomment to test mouse press and rotation angles.
        //getSceneNodes().getChildren().add(stats);
    }

    /**
     * Sets up the mouse input.
     *
     * @param primaryStage The primary stage (app window). 
     */
    private void setupInput(Stage primaryStage) {
        System.out.println("Ship's center is (" + myShip.getCenterX() + ", " + myShip.getCenterY() + ")");

        EventHandler<MouseEvent> fireOrMove = new EventHandler<MouseEvent>() {
            @Override
            public void handle(MouseEvent event) {
                mousePressPtLabel.setText("Mouse Press PT = (" + event.getX() + ", " + event.getY() + ")");
                if (event.getButton() == MouseButton.PRIMARY) {
                    // Aim
                    myShip.plotCourse(event.getX(), event.getY(), false);
                    // fire
                    Missile m1 = myShip.fire();
                    getSpriteManager().addSprites(m1);
                    getSceneNodes().getChildren().add(0, m1.node);
                } else if (event.getButton() == MouseButton.SECONDARY) {
                    // determine when all atoms are gone from the game surface.
                    // The ship should be the only sprite left.
                    if (getSpriteManager().getAllSprites().size() <= 1) {
                        generateManySpheres(30);
                    }

                    // stop ship from moving forward
                    myShip.applyTheBrakes(event.getX(), event.getY());
                    // move forward and rotate ship
                    myShip.plotCourse(event.getX(), event.getY(), true);
                }
            }
        };

        // Initialize input
        primaryStage.getScene().setOnMousePressed(fireOrMove);
        //addEventHandler(MouseEvent.MOUSE_PRESSED, me);

        // set up stats
        EventHandler<KeyEvent> changeWeapons = new EventHandler<KeyEvent>() {
            @Override
            public void handle(KeyEvent event) {
                myShip.changeWeapon(event.getCode());
            }
        };
        primaryStage.getScene().setOnKeyPressed(changeWeapons);

        // set up stats
        EventHandler<MouseEvent> showMouseMove = new EventHandler<MouseEvent>() {
            @Override
            public void handle(MouseEvent event) {
                mousePtLabel.setText("Mouse PT = (" + event.getX() + ", " + event.getY() + ")");
            }
        };

        primaryStage.getScene().setOnMouseMoved(showMouseMove);
    }

    /**
     * Make some more space spheres (atomic particles).
     *
     * @param numSpheres The number of atoms to generate, each with a random size, color, and velocity. 
     */
    private void generateManySpheres(int numSpheres) {
        Random rnd = new Random();
        Scene gameSurface = getGameSurface();
        for (int i = 0; i < numSpheres; i++) {
            Color c = Color.rgb(rnd.nextInt(255), rnd.nextInt(255), rnd.nextInt(255));
            Atom b = new Atom(rnd.nextInt(15) + 5, c, true);
            Circle circle = b.getAsCircle();
            // random 0 to 2 + (.0 to 1) * random (1 or -1)
            b.vX = (rnd.nextInt(2) + rnd.nextDouble()) * (rnd.nextBoolean() ? 1 : -1);
            b.vY = (rnd.nextInt(2) + rnd.nextDouble()) * (rnd.nextBoolean() ? 1 : -1);

            // random x between 0 and the width of the scene
            double newX = rnd.nextInt((int) gameSurface.getWidth());

            // check the right edge: newX must not exceed the width
            // minus radius times 2 (the width of the sprite)
            if (newX > (gameSurface.getWidth() - (circle.getRadius() * 2))) {
                newX = gameSurface.getWidth() - (circle.getRadius() * 2);
            }

            // check the bottom edge: newY must not exceed the height
            // minus radius times 2 (the height of the sprite)
            double newY = rnd.nextInt((int) gameSurface.getHeight());
            if (newY > (gameSurface.getHeight() - (circle.getRadius() * 2))) {
                newY = gameSurface.getHeight() - (circle.getRadius() * 2);
            }

            circle.setTranslateX(newX);
            circle.setTranslateY(newY);
            circle.setVisible(true);
            circle.setId(b.toString());
            circle.setCache(true);
            circle.setCacheHint(CacheHint.SPEED);
            circle.setManaged(false);

            // add to actors in play (sprite objects)
            getSpriteManager().addSprites(b);

            // add the sprite's node to the scene graph
            getSceneNodes().getChildren().add(0, b.node);
        }
    }

    /**
     * Each sprite will update its velocity and bounce off wall borders.
     *
     * @param sprite - An atomic particle (a sphere).
     */
    @Override
    protected void handleUpdate(Sprite sprite) {
        // advance object
        sprite.update();
        if (sprite instanceof Missile) {
            removeMissiles((Missile) sprite);
        } else {
            bounceOffWalls(sprite);
        }
    }

    /**
     * Change the direction of the moving object when it encounters the walls.
     *
     * @param sprite The sprite to update based on the wall boundaries.
     * TODO The ship has got issues. 
     */
    private void bounceOffWalls(Sprite sprite) {
        // bounce off the walls when outside of boundaries
        Node displayNode;
        if (sprite instanceof Ship) {
            displayNode = sprite.node; //((Ship) sprite).getCurrentShipImage();
        } else {
            displayNode = sprite.node;
        }
        // Get the group node's X and Y but use the ImageView to obtain the width.
        if (sprite.node.getTranslateX() > (getGameSurface().getWidth() - displayNode.getBoundsInParent().getWidth())
                || displayNode.getTranslateX() < 0) {
            // bounce the opposite direction
            sprite.vX = sprite.vX * -1;
        }
        // Get the group node's X and Y but use the ImageView to obtain the height.
        if (sprite.node.getTranslateY() > getGameSurface().getHeight() - displayNode.getBoundsInParent().getHeight()
                || sprite.node.getTranslateY() < 0) {
            sprite.vY = sprite.vY * -1;
        }
    }

    /**
     * Remove missiles when they reach the wall boundaries.
     *
     * @param missile The missile to remove based on the wall boundaries.
     */
    private void removeMissiles(Missile missile) {
        // remove the missile when it is outside of the boundaries
        if (missile.node.getTranslateX() > (getGameSurface().getWidth() - missile.node.getBoundsInParent().getWidth())
                || missile.node.getTranslateX() < 0) {
            getSpriteManager().addSpritesToBeRemoved(missile);
            getSceneNodes().getChildren().remove(missile.node);
        }
        if (missile.node.getTranslateY() > getGameSurface().getHeight() - missile.node.getBoundsInParent().getHeight()
                || missile.node.getTranslateY() < 0) {
            getSpriteManager().addSpritesToBeRemoved(missile);
            getSceneNodes().getChildren().remove(missile.node);
        }
    }

    /**
     * How to handle the collision of two sprite objects.
     *
     * @param spriteA Sprite from the first list.
     * @param spriteB Sprite from the second list.
     * @return boolean true if the two sprites have collided, otherwise false. 
     */
    @Override
    protected boolean handleCollision(Sprite spriteA, Sprite spriteB) {
        if (spriteA != spriteB) {
            if (spriteA.collide(spriteB)) {
                if (spriteA instanceof Atom && spriteB instanceof Atom) {
                    ((Atom) spriteA).implode(this); // will remove from the Scene onFinish()
                    ((Atom) spriteB).implode(this);
                    getSpriteManager().addSpritesToBeRemoved(spriteA, spriteB);
                    return true;
                }
            }
        }
        return false;
    }
}

Ship

The Ship class represents our cool-looking spaceship. It inherits from the Sprite class, which holds its velocity information (a vector). The class also contains a doubly linked list of 32 ImageView (RotatedShipImage) instances, one per direction, to simulate the ship rotating about its center (centroid). At some point I want to change this to a single SVGPath object that gets rotated (I know there are trade-offs). For this tutorial I implemented the ship with ImageView objects rotated into 32 directions evenly spaced from 0 to 360 degrees. Shown below in Figure 3 are all 32 directions, using 32 ImageView instances and a single Image of a spaceship, simulating rotation about the ship's center (pivot point).

When animating the ship rotating, I simply make all but the current image invisible, calling setVisible(true) only on the current ImageView node.

Disclaimer: In gaming you will inevitably encounter math (trigonometry). If you are interested and want to dig deeper, please look at the source code of the TheExpanse class' initialize() method. At the end of the method, uncomment the statement getSceneNodes().getChildren().add(stats);. This displays controls you can use to debug and inspect mouse press coordinates. You can also see output in your console (stdout) relating to angles, vectors, etc. 
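To make the trigonometry from the disclaimer concrete, here is a small, self-contained sketch of the two calculations the ship's course plotting relies on: the angle of a Cartesian vector via Math.atan2, and the mapping of that angle onto one of the 32 frame indices (360 / 32 = 11.25 degrees per frame, with negative angles wrapping around). The class and method names here are mine, for illustration only:

```java
public class AngleSketch {
    static final int NUM_DIRECTIONS = 32;
    static final float UNIT_ANGLE_PER_FRAME = 360f / NUM_DIRECTIONS; // 11.25 degrees

    // Angle of a Cartesian vector in degrees, in the range (-180, 180].
    static double angleDegrees(double x, double y) {
        return Math.toDegrees(Math.atan2(y, x));
    }

    // Map an angle to the nearest of the 32 frame indices; negative angles wrap.
    static int frameIndex(double degrees) {
        int index = Math.round((float) (degrees / UNIT_ANGLE_PER_FRAME));
        if (index < 0) {
            index = NUM_DIRECTIONS + index;
        }
        return index;
    }

    public static void main(String[] args) {
        System.out.println(frameIndex(angleDegrees(1, 0)));  // east  -> 0
        System.out.println(frameIndex(angleDegrees(0, 1)));  // north -> 8
        System.out.println(frameIndex(angleDegrees(-1, 0))); // west  -> 16
        System.out.println(frameIndex(angleDegrees(0, -1))); // south -> 24
    }
}
```

Note that index 0 is the ship facing east and indices grow counterclockwise, matching the convention described for the Ship class below.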
The Ship's member variables:

- turnDirection – enum DIRECTION with CLOCKWISE, COUNTER_CLOCKWISE, and NEITHER.
- u – a Vec object containing a vector, relative to the center of the ship, denoting the direction the ship faces when it begins to rotate. Zero degrees (uIndex = 0) is the spaceship facing east. When rotating a JavaFX node, counterclockwise is the positive direction in degrees.
- directionalShips – list of RotatedShipImage objects, each holding previous and next references to the adjacent RotatedShipImage objects.
- uIndex – index of the current RotatedShipImage in the directionalShips list to be displayed.
- vIndex – index of the RotatedShipImage in the directionalShips list to be displayed at the end of the rotation animation.
- stopArea – a JavaFX Circle whose radius tells the ship when to stop moving.
- flipBook – a JavaFX Group containing all 32 RotatedShipImage objects. The group is rendered on the Scene. Like a flip book in animation, the RotatedShipImage to display is determined by uIndex and vIndex.
- keyCode – a JavaFX KeyCode that helps determine whether a key press should change your weapon (the character '2').

The Ship's member functions:

- update() – updates the ship's velocity and direction, and determines when to stop moving.
- getCurrentShipImage() – based on uIndex, returns the ImageView of the ship direction image currently displayed.
- getCenterX() – returns the screen X coordinate of the center of the ship.
- getCenterY() – returns the screen Y coordinate of the center of the ship.
- plotCourse(double screenX, double screenY, boolean thrust) – after the user clicks the mouse on the screen, this method calculates the angle to rotate the ship and changes the velocity to thrust toward the destination point. Using the Vec object, the screen coordinates are converted to Cartesian coordinates to determine the angle between the two vectors (U and V). 
- turnShip() – the plotCourse() method calls turnShip() to perform the actual rotation animation of the ship.
- applyTheBrakes(double screenX, double screenY) – after the user has chosen (by right mouse click) where the ship will navigate to, applyTheBrakes() simply centers the stopArea (Circle) there to let the ship know when to stop.
- fire() – returns a Missile (Sprite) object for the game engine to put into the scene. Each missile launches in the same direction as the ship with a scaled-up velocity (increased speed); it should be faster than the ship can fly.
- changeWeapon(KeyCode keyCode) – after the player hits the '2' key, the weapon changes to create a larger but slightly slower missile projectile. Any other key press selects the default weapon, which creates small, faster-moving missile projectiles.

Shown below is figure 4, a class diagram displaying the members of the Ship class.

Ship Class Diagram

Shown below is the source code of the Ship class.

package carlfx.demos.navigateship;

import carlfx.gameengine.Sprite;
import javafx.animation.KeyFrame;
import javafx.animation.Timeline;
import javafx.animation.TimelineBuilder;
import javafx.event.ActionEvent;
import javafx.event.EventHandler;
import javafx.scene.CacheHint;
import javafx.scene.Group;
import javafx.scene.Node;
import javafx.scene.image.Image;
import javafx.scene.input.KeyCode;
import javafx.scene.paint.Color;
import javafx.scene.shape.Circle;
import javafx.util.Duration;

import java.util.ArrayList;
import java.util.List;

/**
 * A space ship with 32 directions.
 * When two atoms collide each will fade and become removed from the scene. The
 * method called implode() implements a fade transition effect. 
 *
 * @author cdea
 */
public class Ship extends Sprite {

    /**
     * 360 degrees, a full turn.
     */
    private final static int TWO_PI_DEGREES = 360;

    /**
     * Number of ship frames and directions the ship's nose can point in.
     */
    private final static int NUM_DIRECTIONS = 32;

    /**
     * The angle between adjacent directions (11.25 degrees).
     */
    private final static float UNIT_ANGLE_PER_FRAME = ((float) TWO_PI_DEGREES / NUM_DIRECTIONS);

    /**
     * Amount of time it takes the ship to move 180 degrees, in milliseconds.
     */
    private final static int MILLIS_TURN_SHIP_180_DEGREES = 300;

    /**
     * The amount of time for one frame, or one turn step, of the ship (18.75 milliseconds).
     */
    private final static float MILLIS_PER_FRAME = (float) MILLIS_TURN_SHIP_180_DEGREES / (NUM_DIRECTIONS / 2);

    /**
     * All possible turn directions: Clockwise, Counter Clockwise, or Neither, when the user clicks the mouse around the ship.
     */
    private enum DIRECTION {
        CLOCKWISE, COUNTER_CLOCKWISE, NEITHER
    }

    /**
     * Velocity amount used to scale the ship's vector when it moves forward. See flipBook translateX and Y.
     */
    private final static float THRUST_AMOUNT = 3.3f;

    /**
     * Velocity amount used to scale a missile's vector when fired.
     */
    private final static float MISSILE_THRUST_AMOUNT = 6.3F;

    /**
     * Current turning direction. The default is NEITHER.
     */
    private DIRECTION turnDirection = DIRECTION.NEITHER;

    /**
     * The current starting vector, or the coordinate the nose of the ship is pointing toward.
     */
    private Vec u; // current or start vector

    /**
     * ImageViews for each of the possible directions the ship can point in, i.e. 32 directions.
     */
    private final List<RotatedShipImage> directionalShips = new ArrayList<>();

    /**
     * The Timeline instance to animate the ship rotating using images. This is an optical illusion similar to page
     * flipping: as each frame is displayed, the previous one's visible attribute is set to false. No rotation is happening. 
     */
    private Timeline rotateShipTimeline;

    /**
     * The current index into the list of ImageViews representing each direction of the ship. Zero is the ship
     * pointing to the right, or zero degrees.
     */
    private int uIndex = 0;

    /**
     * The end index into the list of ImageViews representing each direction of the ship. Zero is the ship
     * pointing to the right, or zero degrees.
     */
    private int vIndex = 0;

    /**
     * The spot where the user has right-clicked, letting the engine check whether the ship's center is in this area.
     */
    private final Circle stopArea = new Circle();

    /**
     * A group containing all of the ship image view nodes.
     */
    private final Group flipBook = new Group();

    /**
     * A key code used for weapon selection.
     */
    private KeyCode keyCode;

    public Ship() {
        // Load one image.
        Image shipImage = new Image(getClass().getClassLoader().getResource("ship.png").toExternalForm(), true);
        stopArea.setRadius(40);
        RotatedShipImage prev = null;

        // create all the directions based on a unit angle: 360 divided by NUM_DIRECTIONS
        for (int i = 0; i < NUM_DIRECTIONS; i++) {
            RotatedShipImage imageView = new RotatedShipImage();
            imageView.setImage(shipImage);
            imageView.setRotate(-1 * i * UNIT_ANGLE_PER_FRAME);
            imageView.setCache(true);
            imageView.setCacheHint(CacheHint.SPEED);
            imageView.setManaged(false);
            imageView.prev = prev;
            imageView.setVisible(false);
            directionalShips.add(imageView);
            if (prev != null) {
                prev.next = imageView;
            }
            prev = imageView;
            flipBook.getChildren().add(imageView);
        }
        RotatedShipImage firstShip = directionalShips.get(0);
        firstShip.prev = prev;
        prev.next = firstShip;
        // set the JavaFX node to the group of images
        firstShip.setVisible(true);
        node = flipBook;
        flipBook.setTranslateX(200);
        flipBook.setTranslateY(300);
    }

    /**
     * Change the position and velocity of the ship. 
     */
    @Override
    public void update() {
        flipBook.setTranslateX(flipBook.getTranslateX() + vX);
        flipBook.setTranslateY(flipBook.getTranslateY() + vY);

        if (stopArea.contains(getCenterX(), getCenterY())) {
            vX = 0;
            vY = 0;
        }
    }

    private RotatedShipImage getCurrentShipImage() {
        return directionalShips.get(uIndex);
    }

    /**
     * The center X coordinate of the current visible image. See <code>getCurrentShipImage()</code> method.
     *
     * @return The scene or screen X coordinate.
     */
    public double getCenterX() {
        RotatedShipImage shipImage = getCurrentShipImage();
        return node.getTranslateX() + (shipImage.getBoundsInLocal().getWidth() / 2);
    }

    /**
     * The center Y coordinate of the current visible image. See <code>getCurrentShipImage()</code> method.
     *
     * @return The scene or screen Y coordinate.
     */
    public double getCenterY() {
        RotatedShipImage shipImage = getCurrentShipImage();
        return node.getTranslateY() + (shipImage.getBoundsInLocal().getHeight() / 2);
    }

    /**
     * Determines the angle between the ship's starting position and ending position (similar to a clock's second hand).
     * When the user is shooting, the ship's nose will point in the direction of the mouse press using the primary button.
     * When the user is thrusting to a location on the screen, the right mouse click will pass true to the thrust
     * parameter.
     *
     * @param screenX The mouse press' screen X coordinate.
     * @param screenY The mouse press' screen Y coordinate.
     * @param thrust  Thrust ship forward or not. True moves forward, otherwise false. 
     */
    public void plotCourse(double screenX, double screenY, boolean thrust) {
        // get center of ship
        double sx = getCenterX();
        double sy = getCenterY();

        // get user's new turn position based on mouse click
        Vec v = new Vec(screenX, screenY, sx, sy);
        if (u == null) {
            u = new Vec(1, 0);
        }

        double atan2RadiansU = Math.atan2(u.y, u.x);
        double atan2DegreesU = Math.toDegrees(atan2RadiansU);

        double atan2RadiansV = Math.atan2(v.y, v.x);
        double atan2DegreesV = Math.toDegrees(atan2RadiansV);

        double angleBetweenUAndV = atan2DegreesV - atan2DegreesU;

        // if the absolute value is greater than 180, move counterclockwise
        // (or the opposite of what was determined)
        double absAngleBetweenUAndV = Math.abs(angleBetweenUAndV);
        boolean goOtherWay = false;
        if (absAngleBetweenUAndV > 180) {
            if (angleBetweenUAndV < 0) {
                turnDirection = DIRECTION.COUNTER_CLOCKWISE;
                goOtherWay = true;
            } else if (angleBetweenUAndV > 0) {
                turnDirection = DIRECTION.CLOCKWISE;
                goOtherWay = true;
            } else {
                turnDirection = Ship.DIRECTION.NEITHER;
            }
        } else {
            if (angleBetweenUAndV < 0) {
                turnDirection = Ship.DIRECTION.CLOCKWISE;
            } else if (angleBetweenUAndV > 0) {
                turnDirection = Ship.DIRECTION.COUNTER_CLOCKWISE;
            } else {
                turnDirection = Ship.DIRECTION.NEITHER;
            }
        }

        double degreesToMove = absAngleBetweenUAndV;
        if (goOtherWay) {
            degreesToMove = TWO_PI_DEGREES - absAngleBetweenUAndV;
        }

        //int q = v.quadrant();

        uIndex = Math.round((float) (atan2DegreesU / UNIT_ANGLE_PER_FRAME));
        if (uIndex < 0) {
            uIndex = NUM_DIRECTIONS + uIndex;
        }
        vIndex = Math.round((float) (atan2DegreesV / UNIT_ANGLE_PER_FRAME));
        if (vIndex < 0) {
            vIndex = NUM_DIRECTIONS + vIndex;
        }
        String debugMsg = turnDirection +
                " U [m(" + u.mx + ", " + u.my + ") => c(" + u.x + ", " + u.y + ")] " +
                " V [m(" + v.mx + ", " + v.my + ") => c(" + v.x + ", " + v.y + ")] " +
                " start angle: " + atan2DegreesU +
                " end angle:" + atan2DegreesV +
                " Angle between: " + degreesToMove +
                " Start index: " + uIndex +
                " End index: " + vIndex;

        System.out.println(debugMsg);

        if (thrust) {
            vX = Math.cos(atan2RadiansV) * THRUST_AMOUNT;
            vY = -Math.sin(atan2RadiansV) * THRUST_AMOUNT;
        }
        turnShip();

        u = v;
    }

    private void turnShip() {

        final Duration oneFrameAmt = Duration.millis(MILLIS_PER_FRAME);
        RotatedShipImage startImage = directionalShips.get(uIndex);
        RotatedShipImage endImage = directionalShips.get(vIndex);
        List<KeyFrame> frames = new ArrayList<>();

        RotatedShipImage currImage = startImage;

        int i = 1;
        while (true) {

            final Node displayNode = currImage;

            KeyFrame oneFrame = new KeyFrame(oneFrameAmt.multiply(i),
                    new EventHandler<ActionEvent>() {

                        @Override
                        public void handle(javafx.event.ActionEvent event) {
                            // make all ship images invisible
                            for (RotatedShipImage shipImg : directionalShips) {
                                shipImg.setVisible(false);
                            }
                            // make current ship image visible
                            displayNode.setVisible(true);

                            // update the current index
                            //uIndex = directionalShips.indexOf(displayNode);
                        }
                    }); // oneFrame

            frames.add(oneFrame);

            if (currImage == endImage) {
                break;
            }
            if (turnDirection == DIRECTION.CLOCKWISE) {
                currImage = currImage.prev;
            }
            if (turnDirection == DIRECTION.COUNTER_CLOCKWISE) {
                currImage = currImage.next;
            }
            i++;
        }

        if (rotateShipTimeline != null) {
            rotateShipTimeline.stop();
            rotateShipTimeline.getKeyFrames().clear();
            rotateShipTimeline.getKeyFrames().addAll(frames);
        } else {
            // sets the rotation animation (Timeline)
            rotateShipTimeline = TimelineBuilder.create()
                    .keyFrames(frames)
                    .build();
        }

        rotateShipTimeline.playFromStart();
    }

    /**
     * Stops the ship from thrusting forward.
     *
     * @param screenX the screen's X coordinate to stop the ship.
     * @param screenY the screen's Y coordinate to stop the ship. 
 */
public void applyTheBrakes(double screenX, double screenY) {
    stopArea.setCenterX(screenX);
    stopArea.setCenterY(screenY);
}

public Missile fire() {
    Missile m1;
    float slowDownAmt = 0;
    if (KeyCode.DIGIT2 == keyCode) {
        m1 = new Missile(10, Color.BLUE);
        slowDownAmt = 2.3f;
    } else {
        m1 = new Missile(Color.RED);
    }
    // velocity vector of the missile
    m1.vX = Math.cos(Math.toRadians(uIndex * UNIT_ANGLE_PER_FRAME)) * (MISSILE_THRUST_AMOUNT - slowDownAmt);
    m1.vY = -Math.sin(Math.toRadians(uIndex * UNIT_ANGLE_PER_FRAME)) * (MISSILE_THRUST_AMOUNT - slowDownAmt);

    // make the missile launch in the direction the ship's nose is currently pointing,
    // based on the current frame (uIndex) into the list of image view nodes.
    RotatedShipImage shipImage = directionalShips.get(uIndex);

    // start at the center of the ship so the missile comes out of the nose of the ship.
    double offsetX = (shipImage.getBoundsInLocal().getWidth() - m1.node.getBoundsInLocal().getWidth()) / 2;
    double offsetY = (shipImage.getBoundsInLocal().getHeight() - m1.node.getBoundsInLocal().getHeight()) / 2;

    // initial launch of the missile
    m1.node.setTranslateX(node.getTranslateX() + offsetX + m1.vX);
    m1.node.setTranslateY(node.getTranslateY() + offsetY + m1.vY);
    return m1;
}

public void changeWeapon(KeyCode keyCode) {
    this.keyCode = keyCode;
}
}

Vec

The Vec class is a simple helper container class that holds a mouse click's screen coordinates and converts them to Cartesian coordinates based on the center of a sprite, image, or shape. This class is used to determine the angle between two vectors [Math.atan2(y, x)]. By determining the angle, the ship is able to perform the rotation animation of the sprite image. Shown below is the source code of the Vec class.

package carlfx.demos.navigateship;

/**
 * This class represents a container class to hold a Vector in space and direction
 * Assuming the center of the ship is the origin, the angles can
 * be determined by a unit circle via Cartesian coordinates.
 * When the user clicks on the screen, the mouse coordinates or screen coordinates
 * will be stored in the mx and my instance variables.
 * The x and y data members are converted to Cartesian coordinates before storing.
 *
 * I purposefully left out getters and setters. In gaming, just keep things minimalistic.
 * @author cdea
 */
public class Vec {
    public double mx;
    public double my;

    public double x;
    public double y;

    /**
     * This is a default constructor which will take a Cartesian coordinate.
     * @param x X coordinate of a point on a Cartesian system.
     * @param y Y coordinate of a point on a Cartesian system.
     */
    public Vec(float x, float y) {
        this.x = x;
        this.y = y;
    }

    /**
     * Constructor which converts mouse click points into Cartesian coordinates based on the sprite's center point as
     * the origin.
     * @param mx Mouse press' screen X coordinate.
     * @param my Mouse press' screen Y coordinate.
     * @param centerX Screen X coordinate of the center of the ship sprite.
     * @param centerY Screen Y coordinate of the center of the ship sprite.
     */
    public Vec(double mx, double my, double centerX, double centerY) {
        this.x = convertX(mx, centerX);
        this.y = convertY(my, centerY);
        this.mx = mx;
        this.my = my;
    }

    /**
     * Returns a Cartesian coordinate system's quadrant from 1 to 4.
     *
     * first quadrant  - 1 upper right
     * second quadrant - 2 upper left
     * third quadrant  - 3 lower left
     * fourth quadrant - 4 lower right
     *
     * @return int quadrant number 1 through 4
     */
    public int quadrant() {
        int q = 0;
        if (x > 0 && y > 0) {
            q = 1;
        } else if (x < 0 && y > 0) {
            q = 2;
        } else if (x < 0 && y < 0) {
            q = 3;
        } else if (x > 0 && y < 0) {
            q = 4;
        }
        return q;
    }

    @Override
    public String toString() {
        return "(" + x + "," + y + ") quadrant=" + quadrant();
    }

    /**
     * Converts a point's X screen coordinate into a Cartesian system.
     * @param mouseX Converts the mouse X coordinate into a Cartesian system based on the ship center X (originX).
     * @param originX The ship center point's X coordinate.
     * @return double value of a Cartesian system X coordinate based on the origin X.
     */
    static double convertX(double mouseX, double originX) {
        return mouseX - originX;
    }

    /**
     * Converts a point's Y screen coordinate into a Cartesian system.
     * @param mouseY Converts the mouse Y coordinate into a Cartesian system based on the ship center Y (originY).
     * @param originY The ship center point's Y coordinate.
     * @return double value of a Cartesian system Y coordinate based on the origin Y.
     */
    static double convertY(double mouseY, double originY) {
        return originY - mouseY;
    }
}

RotatedShipImage

The RotatedShipImage class inherits from JavaFX's ImageView class, but also contains references to a previous and a next RotatedShipImage instance, making up a doubly linked list. Figure 3 depicts 32 images of the "ship.png" rendered in each RotatedShipImage, which are all placed in a JavaFX Group node. When the ship appears to rotate, one image is displayed at a time. Shown below is the source code of the RotatedShipImage class.

package carlfx.demos.navigateship;

import javafx.scene.image.ImageView;

/**
 * Represents a doubly linked list to assist in the rotation of the ship.
 * This helps to move clockwise and counter clockwise.
 */
public class RotatedShipImage extends ImageView {
    public RotatedShipImage next;
    public RotatedShipImage prev;
}

Missile

The Missile class inherits from the Atom class. The Missile acts as a marker class to differentiate between spheres (asteroids) and missiles. When missiles are created, they travel in the direction the ship is facing (where the ship's nose is pointing) with a larger velocity. Shown below is the source code of the Missile class.

package carlfx.demos.navigateship;

import javafx.scene.paint.Color;

/**
 * A missile projectile without the radial gradient.
 */
public class Missile extends Atom {
    public Missile(Color fill) {
        super(5, fill, false);
    }

    public Missile(int radius, Color fill) {
        super(radius, fill, true);
    }
}

Conclusion

Input is so vital to any game play that it is often hard to get right. Older game engines poll for input during the game loop. When using JavaFX 2.x's event handling, you implement the type of event to be added to the scene graph or to an individual Node object. Hopefully in the future we will see more ingenious input devices used in gaming (see Oracle's Java Technology Evangelist Simon Ritter). Keep your eyes open for Part 4, which deals with collision detection. So, stay tuned and feel free to comment.

Useful links:
7-Eleven: http://www.7-eleven.com
Playing Asteroids: http://www.play.vg/games/4-Asteroids.html
Asteroids: http://en.wikipedia.org/wiki/Asteroids_(video_game)
Scott Safran: http://en.wikipedia.org/wiki/Scott_Safran
Arcade in Backyard: http://www.themysteryworld.com/2011/02/guy-builds-video-arcade-in-his-back.html
StarWars downsized: http://techland.time.com/2012/04/26/man-builds-16-scale-star-wars-arcade-game/
Trigonometry: http://en.wikipedia.org/wiki/Trigonometry
JavaFX Node API: http://docs.oracle.com/javafx/2/api/javafx/scene/Node.html
JavaFX Scene API: http://docs.oracle.com/javafx/2/api/javafx/scene/Scene.html
JavaFX SVGPath API: http://docs.oracle.com/javafx/2/api/javafx/scene/shape/SVGPath.html
Multi-Touch and Gestures Support: http://www.oracle.com/technetwork/java/javafx/overview/roadmap-1446331.html
Pro JavaFX 2, Apress publishing, pg. 62, chapter 2, section on "Handling Input Events": http://www.apress.com/9781430268727
Java 7 Recipes, Apress publishing, pg. 602, chapter 16, Recipe 16-3 "Animating Shapes Along a Path":
http://www.apress.com/9781430240563
Video game arcade cabinet: http://en.wikipedia.org/wiki/Video_game_arcade_cabinet
Raster graphics: http://en.wikipedia.org/wiki/Raster_graphics
Part 3 source code on GitHub: https://github.com/carldea/JFXGen/tree/master/demos/navigateship
JavaFX Canvas Node: http://mail.openjdk.java.net/pipermail/openjfx-dev/2012-April/001210.html
JavaFX - Optimizing Performance for JavaFX Applications: http://www.parleys.com/#st=5&id=2738&sl=0
Oracle's Java Technology Evangelist Simon Ritter: https://blogs.oracle.com/javaone/entry/interfacing_with_the_interface_javafx
Video Game High School episode 1: http://www.rocketjump.com/?video=vghs-episode-1
Video Game High School episode 2: http://www.rocketjump.com/?video=vghs-episode-2-5

Reference: JavaFX 2 GameTutorial Part 3 from our JCG partner Carl Dea at the Carl's FX Blog blog....

Book review: ‘Are you smart enough to work at Google?’

You need to toss a coin for a football match. The only coin you have is bent and biased towards one outcome. How do you use the coin and ensure a fair toss? I love a good puzzle, and there are certainly plenty of thought-provoking mind benders in this book – most of which I had not heard before. Author William Poundstone (author of 'How Would You Move Mount Fuji' and 'Fortune's Formula') describes various puzzles that are likely to be part of a Google interview process – at a company now estimated to be running over one billion search requests per day! Some other aspects of Google are covered, but the subject matter is predominantly puzzles – all types of puzzles: Fermi questions, deductive logic, numeracy skills, algorithmic questions and some grade A counter-intuitive, mind-boggling teasers!

One can't help asking why Google bothers with all of this. Surely the point of an interview is to see if someone can do a certain type of work, and the interview should be a fair attempt to assess a candidate's suitability. I have had the fortune (some would say misfortune) to be part of the world of software engineering for the last 15 years. I am passionate about it, but I'll be the first to admit it isn't just about solving fun puzzles. Following best practices, following agreed processes, keeping up to speed with technology, and documenting solutions so others can see what's going on are all very important things that make a good software engineer. And it's not always sexy work. Sometimes it requires patience debugging ugly code while sticking to a tight project deadline. Ascertaining how good someone is at all this in an interview setting can be difficult – especially when it's very easy for a good candidate to freeze from nerves or get an unexpected mental block. It's very difficult to objectify what makes a good software engineer.
Sometimes someone very intelligent can get hung up on abstractions or theoretical patterns and forget they have deadlines, or just not be a good team player. Sometimes there's just inescapable subjectivity.

So how do brain teasers help out? Acclaimed tech guru Joel Spolsky advises avoiding them in interviews because they are usually just a case of either the candidate knows it or he doesn't – and not much else. In my opinion, it can take months to understand someone's technical strengths and weaknesses. Puzzles can be useful for demonstrating how someone approaches problem solving, how they think on their feet and how they communicate ideas. So yes, they do serve a purpose. But even if they served no purpose whatsoever other than a bit of fun, that's fine by me. I love a good puzzle, so I really enjoyed this book, and for that reason I'd recommend it to anyone who likes to dabble in some cryptic challenges.

References: 1. Are you smart enough to work at Google

Reference: Book review: 'Are you smart enough to work at Google?' from our JCG partner Alex Staveley at the Dublin's Tech Blog blog....

Implementing Master Slave / Grid Computing Pattern in Akka

The Master Slave pattern is a prime example of fault tolerance and parallel computation. The idea behind the pattern is to partition the work into identical sub-tasks which are then delegated to slaves. These slave nodes or instances will process the work task and send the result back to the master. The master will then compile the results received from all the slave nodes. The key here is that the slave nodes only know how to process the task, and are not aware of what happens to the output.

The Master Slave pattern is analogous to the Grid Computing pattern, where a control node distributes the work to other nodes. The idea is to make use of the nodes on the network for their computing power. SETI@home was one of the earliest pioneers in using this model. I have built a similar example, with the difference being that worker nodes get started on remote nodes, register with the master (WorkServer), and then subsequently start processing work packets. If there are no worker slaves registered with the master (WorkServer), the master waits for the workers to register. The workers can register at any time and will start getting work packets from then on.

The example demonstrates how a WorkerActor system sends a request for registration. The RegisterRemoteWorker receives the request and forwards it to the JobController, where the RoundRobinRouter is updated with the new worker information. The WorkScheduler sends a periodic request to the JobController, who then sends packets to all the registered worker actors. The example does not implement fault tolerance with respect to how to handle failures when the remote actors die, or how to re-process packets that have not been processed. Similarly, there may be cases where the remote worker actors might want to shut down after processing a certain amount of packets; they can then indicate to the master to stop giving them work. I will add fault tolerance soon!

Updated: Code base updated to handle worker shutdowns.
If the remote actors die or shut down, the JobController detects the fail-overs using remote actor listeners and updates the router. The code base for the program is available at the following location – https://github.com/write2munish/Akka-Essentials under the GridPatternExample

Reference: Implementing Master Slave / Grid Computing Pattern in Akka from our JCG partner Munish K Gupta at the Akka Essentials blog....
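The partition/delegate/compile flow described above is independent of Akka. As a minimal sketch of the same idea using only plain JDK concurrency (the class and method names below are illustrative, not the author's Akka code), the master splits the work into packets, hands each to a pooled "slave", and compiles the futures' results:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical names for illustration only.
public class MasterSlaveSketch {

    // The "slave": knows only how to process one work packet,
    // not what happens to the output.
    static int processPacket(int packet) {
        return packet * packet; // stand-in for real work
    }

    // The "master": partitions the work, delegates to slaves, compiles results.
    static int master(List<Integer> work, int slaveCount) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(slaveCount);
        try {
            List<Future<Integer>> results = new ArrayList<>();
            for (Integer packet : work) {
                final int p = packet;
                results.add(pool.submit(new Callable<Integer>() {
                    public Integer call() { return processPacket(p); }
                }));
            }
            int total = 0;
            for (Future<Integer> f : results) {
                total += f.get(); // compile results received from all slaves
            }
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        List<Integer> work = new ArrayList<>();
        for (int i = 1; i <= 4; i++) {
            work.add(i);
        }
        System.out.println(master(work, 2)); // 1 + 4 + 9 + 16 = 30
    }
}
```

What Akka adds on top of this sketch is location transparency (the slaves can live on remote nodes), supervision, and dynamic registration of workers with the router.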

Apache Shiro Part 3 – Cryptography

Besides securing web pages and managing access rights, Apache Shiro also performs basic cryptography tasks. The framework is able to encrypt and decrypt data, hash data, and generate random numbers. Shiro does not implement any cryptography algorithms. All calculations are delegated to the Java Cryptography Extension (JCE) API. The main benefit of using Shiro instead of what is already present in Java is ease of use and secure defaults. The Shiro crypto module is written at a higher abstraction level and by default implements all known best practices.

This is the third part of a series dedicated to Apache Shiro. The first part showed how to secure a web application and add log in/log out functionality. The second part showed how to store user accounts in a database and give users an option to authenticate themselves via PGP certificates.

This post begins with a short Shiro and JCE overview and continues with a description of a few useful conversion utilities. The following chapters explain random number generation, hashing and how to encrypt and decrypt data. The final chapter shows how to customize a cipher and how to create a new one.

Overview

The Shiro cryptography module resides in the org.apache.shiro.crypto package. It does not have a manual, but fortunately all crypto classes are Javadoc heavy. The Javadoc contains everything that would be written in a manual. Shiro relies heavily on the Java Cryptography Extension. You do not need to understand JCE to use Shiro. However, you need JCE basics to customize it or add new features to it. If you are not interested in JCE, skip to the next chapter.

JCE is a set of highly customizable APIs and their default implementation. It is provider-based. If the default implementation does not have what you need, you can easily install a new provider. Each cipher, cipher option, hash algorithm or any other JCE feature has a name. JCE defines two sets of standard names for algorithms and algorithm modes. Those are available with any JDK.
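To make the name-based look-up concrete, here is a minimal round trip through raw JCE using only JDK classes (the class and method names of the sketch itself are illustrative; the transformation string is the one discussed below):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Hypothetical helper for illustration only.
public class JceLookupSketch {

    // Encrypts and decrypts a message with the cipher named by the transformation string.
    static String roundTrip(String transformation, String algorithm, String msg) throws Exception {
        // look up a key generator and a cipher by name
        SecretKey key = KeyGenerator.getInstance(algorithm).generateKey();
        Cipher cipher = Cipher.getInstance(transformation);

        // note the manual re-initialization between operations: this object
        // is exactly the kind of low-level, non-thread-safe API Shiro wraps
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] encrypted = cipher.doFinal(msg.getBytes("UTF-8"));

        cipher.init(Cipher.DECRYPT_MODE, key);
        return new String(cipher.doFinal(encrypted), "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        // "DES/ECB/PKCS5Padding" = algorithm / mode / padding, looked up by name
        System.out.println(roundTrip("DES/ECB/PKCS5Padding", "DES", "hello JCE"));
    }
}
```

Everything here — the explicit init calls, the mode choice, the padding — is what Shiro's cipher services configure for you with safe defaults.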
Any provider, for example Bouncy Castle, is free to extend the name sets with new algorithms and options. Names are composed into so-called transformation strings which are used to look up the needed objects. For example, Cipher.getInstance("DES/ECB/PKCS5Padding") returns a DES cipher in ECB mode with PKCS#5 padding. The returned cipher usually requires further initialization, may not use safe defaults and is not thread safe. Apache Shiro composes transformation strings, configures the acquired objects and adds thread safety to them. Most importantly, it has an easy to use API and adds the higher level best practices that should be implemented anyway.

Encoding, Decoding and ByteSource

The crypto package encrypts, decrypts and hashes byte arrays (byte[]). If you need to encrypt or hash a string, you have to convert it to a byte array first. Conversely, if you need to store a hashed or encrypted value in a text file or a string database column, you have to convert it to a string.

Text to Byte Array

The static class CodecSupport is able to convert text to a byte array and back. The method byte[] toBytes(String source) converts a string to a byte array and the method String toString(byte[] bytes) converts it back.

Example: use codec support to convert between text and byte array:

@Test
public void textToByteArray() {
    String encodeMe = "Hello, I'm a text.";

    byte[] bytes = CodecSupport.toBytes(encodeMe);
    String decoded = CodecSupport.toString(bytes);

    assertEquals(encodeMe, decoded);
}

Encode and Decode Byte Arrays

Conversion from a byte array to a string is called encoding. The reverse process is called decoding. Shiro provides two different algorithms: Base64, implemented in the class Base64, and hexadecimal, implemented in the class Hex. Both classes are static and both have encodeToString and decode utility methods available.
Examples

Encode a random array into its hexadecimal representation, decode it and verify the result:

@Test
public void testStaticHexadecimal() {
    byte[] encodeMe = {2, 4, 6, 8, 10, 12, 14, 16, 18, 20};
    String hexadecimal = Hex.encodeToString(encodeMe);
    assertEquals("020406080a0c0e101214", hexadecimal);

    byte[] decoded = Hex.decode(hexadecimal);
    assertArrayEquals(encodeMe, decoded);
}

Encode a random array into its Base64 representation, decode it and verify the result:

@Test
public void testStaticBase64() {
    byte[] encodeMe = {2, 4, 6, 8, 10, 12, 14, 16, 18, 20};
    String base64 = Base64.encodeToString(encodeMe);
    assertEquals("AgQGCAoMDhASFA==", base64);

    byte[] decoded = Base64.decode(base64);
    assertArrayEquals(encodeMe, decoded);
}

ByteSource

The cryptography package often returns an instance of the ByteSource interface instead of a byte array. Its implementation SimpleByteSource is a simple wrapper around a byte array with additional encoding methods available:

String toHex() – returns the hexadecimal byte array representation,
String toBase64() – returns the Base64 byte array representation,
byte[] getBytes() – returns the wrapped byte array.

Examples

The test uses ByteSource to encode an array into its hexadecimal representation. It then decodes it and verifies the result:

@Test
public void testByteSourceHexadecimal() {
    byte[] encodeMe = {2, 4, 6, 8, 10, 12, 14, 16, 18, 20};
    ByteSource byteSource = ByteSource.Util.bytes(encodeMe);
    String hexadecimal = byteSource.toHex();
    assertEquals("020406080a0c0e101214", hexadecimal);

    byte[] decoded = Hex.decode(hexadecimal);
    assertArrayEquals(encodeMe, decoded);
}

Use ByteSource to encode an array into its Base64 representation.
Decode it and verify the result:

@Test
public void testByteSourceBase64() {
    byte[] encodeMe = {2, 4, 6, 8, 10, 12, 14, 16, 18, 20};
    ByteSource byteSource = ByteSource.Util.bytes(encodeMe);
    String base64 = byteSource.toBase64();
    assertEquals("AgQGCAoMDhASFA==", base64);

    byte[] decoded = Base64.decode(base64);
    assertArrayEquals(encodeMe, decoded);
}

Random Number Generator

The random number generator is composed of the RandomNumberGenerator interface and its default implementation SecureRandomNumberGenerator. The interface is fairly simple; it has only two methods:

ByteSource nextBytes() – generates a random fixed length byte source,
ByteSource nextBytes(int numBytes) – generates a random byte source with the specified length.

The default implementation implements these two methods and provides some additional configuration:

setSeed(byte[] bytes) – custom seed configuration,
setDefaultNextBytesSize(int defaultNextBytesSize) – the length of the nextBytes() output.

The seed is a number (a byte array, in fact) that initializes the random number generator. It allows you to generate 'predictable random numbers'. Two instances of the same random generator initialized with the same seed always generate the same random number sequence. It is useful for debugging, but be very careful with it. If you can, do not specify a custom seed for cryptography. Use the default one. Unless you really know what you are doing, the attacker may be able to guess the custom one. That would defeat all security purposes of random numbers.

Under the hood: SecureRandomNumberGenerator delegates random number generation to the JCE SecureRandom implementation.
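The seed behaviour described above can be observed directly on the underlying JCE class. A minimal sketch (helper names are illustrative, not Shiro's): the SHA1PRNG algorithm, when seeded before its first use, produces a fully deterministic byte sequence — which is exactly why custom seeds are dangerous in production:

```java
import java.security.SecureRandom;
import java.util.Arrays;

// Hypothetical helper for illustration only.
public class SeededRandomSketch {

    // Returns n bytes from a SHA1PRNG seeded with the given seed.
    static byte[] seededBytes(byte[] seed, int n) throws Exception {
        SecureRandom rng = SecureRandom.getInstance("SHA1PRNG");
        rng.setSeed(seed); // seeding before first use makes the output deterministic
        byte[] out = new byte[n];
        rng.nextBytes(out);
        return out;
    }

    public static void main(String[] args) throws Exception {
        byte[] seed = {1, 2, 3};
        byte[] a = seededBytes(seed, 8);
        byte[] b = seededBytes(seed, 8);
        System.out.println(Arrays.equals(a, b)); // same seed, same sequence
    }
}
```

A plain `new SecureRandom()`, by contrast, self-seeds from the platform's entropy source, which is the safe default Shiro relies on.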
Examples

The first example creates two random number generators and verifies that they generate different output:

@Test
public void testRandomWithoutSeed() {
    //create random generators
    RandomNumberGenerator firstGenerator = new SecureRandomNumberGenerator();
    RandomNumberGenerator secondGenerator = new SecureRandomNumberGenerator();

    //generate random bytes
    ByteSource firstRandomBytes = firstGenerator.nextBytes();
    ByteSource secondRandomBytes = secondGenerator.nextBytes();

    //compare random bytes
    assertByteSourcesNotSame(firstRandomBytes, secondRandomBytes);
}

The second example creates two random number generators, initializes them with the same seed and checks whether they generate the same expected 20 bytes long random array:

@Test
public void testRandomWithSeed() {
    byte[] seed = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    //create and initialize first random generator
    SecureRandomNumberGenerator firstGenerator = new SecureRandomNumberGenerator();
    firstGenerator.setSeed(seed);
    firstGenerator.setDefaultNextBytesSize(20);

    //create and initialize second random generator
    SecureRandomNumberGenerator secondGenerator = new SecureRandomNumberGenerator();
    secondGenerator.setSeed(seed);
    secondGenerator.setDefaultNextBytesSize(20);

    //generate random bytes
    ByteSource firstRandomBytes = firstGenerator.nextBytes();
    ByteSource secondRandomBytes = secondGenerator.nextBytes();

    //compare random arrays
    assertByteSourcesEquals(firstRandomBytes, secondRandomBytes);

    //following nextBytes are also the same
    ByteSource firstNext = firstGenerator.nextBytes();
    ByteSource secondNext = secondGenerator.nextBytes();

    //compare random arrays
    assertByteSourcesEquals(firstNext, secondNext);

    //compare against expected values
    byte[] expectedRandom = {-116, -31, 67, 27, 13, -26, -38, 96, 122, 31, -67, 73, -52, -4, -22, 26, 18, 22, -124, -24};
    assertArrayEquals(expectedRandom, firstNext.getBytes());
}

Hashing

A hash function takes arbitrarily long data as an input and converts it to a smaller fixed length
data. The hash function's result is called a hash. Hashing is a one-way operation. It is not possible to convert a hash back to the original data. The most important thing to remember is: always store the hash of a password instead of the password itself. Never ever store it directly.

Shiro provides two hash related interfaces; both support two concepts necessary for secure password hashing, salting and hash iterations:

Hash – represents a hash algorithm.
Hasher – use this to hash passwords.

A salt is a random array concatenated to the password before hashing. It is usually stored together with the password. Without salt, identical passwords would have the same hash. That would make password cracking much easier. Specify a number of hash iterations to slow down the hash operation. The slower the operation, the more difficult it is to crack stored passwords. Use a lot of iterations.

Hash

Hash interface implementations compute hash functions. Shiro implements six standard hash functions: Md2, Md5, Sha1, Sha256, Sha384 and Sha512. Each hash implementation extends from ByteSource. The constructor takes the input data, a salt and the number of required iterations. The salt and the iterations number are optional.
The ByteSource interface methods return:

byte[] getBytes() – the hash,
String toBase64() – the hash in Base64 representation,
String toHex() – the hash in hexadecimal representation.

The following code computes the Md5 hash of the 'Hello Md5' text with no salt:

@Test
public void testMd5Hash() {
    Hash hash = new Md5Hash("Hello Md5");
    byte[] expectedHash = {-7, 64, 38, 26, 91, 99, 33, 9, 37, 50, -22, -112, -99, 57, 115, -64};
    assertArrayEquals(expectedHash, hash.getBytes());
    assertEquals("f940261a5b6321092532ea909d3973c0", hash.toHex());
    assertEquals("+UAmGltjIQklMuqQnTlzwA==", hash.toBase64());

    print(hash, "Md5 with no salt of 'Hello Md5': ");
}

The next snippet calculates 10 iterations of Sha256 with a salt:

@Test
public void testIterationsSha256Hash() {
    byte[] salt = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};

    Hash hash = new Sha256Hash("Hello Sha256", salt, 10);
    byte[] expectedHash = {24, 4, -97, -61, 70, 28, -29, 85, 110, 0, -107, -8, -12, -93, -121, 99, -5, 23, 60, 46, -23, 92, 67, -51, 65, 95, 84, 87, 49, -35, -78, -115};
    String expectedHex = "18049fc3461ce3556e0095f8f4a38763fb173c2ee95c43cd415f545731ddb28d";
    String expectedBase64 = "GASfw0Yc41VuAJX49KOHY/sXPC7pXEPNQV9UVzHdso0=";
    assertArrayEquals(expectedHash, hash.getBytes());
    assertEquals(expectedHex, hash.toHex());
    assertEquals(expectedBase64, hash.toBase64());

    print(hash, "Sha256 with salt and 10 iterations of 'Hello Sha256': ");
}

Compare the iterations calculated by the framework and by the client code:

@Test
public void testIterationsDemo() {
    byte[] salt = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    //iterations computed by the framework
    Hash shiroIteratedHash = new Sha256Hash("Hello Sha256", salt, 10);

    //iterations computed by the client code
    Hash clientIteratedHash = new Sha256Hash("Hello Sha256", salt);
    for (int i = 1; i < 10; i++) {
        clientIteratedHash = new Sha256Hash(clientIteratedHash.getBytes());
    }

    //compare results
    assertByteSourcesEquals(shiroIteratedHash, clientIteratedHash);
}

Under the hood: all concrete hash classes extend from SimpleHash, which
delegates hash computation to the JCE MessageDigest implementation. If you wish to extend Shiro with another hash function, instantiate SimpleHash directly. The constructor takes a JCE message digest (hash) algorithm name as a parameter.

Hasher

Hasher works on top of hash functions and implements best practices related to salting. The interface has only one method:

HashResponse computeHash(HashRequest request)

The hash request provides the byte source to be hashed and an optional salt. The hash response returns a hash and a salt. The response salt is not necessarily the same as the supplied salt. More importantly, it may not be the whole salt used for the hashing operation. Any hasher implementation is free to generate its own random salt. The default implementation does that only if the request contains a null salt.

Additionally, the used salt may be composed of a 'base salt' and a 'public salt'. The 'public salt' is returned in the hash response. To understand why it is done this way, you have to recall that the salt is usually stored together with the password. An attacker with access to the database would have all the information needed for a brute-force attack. Therefore, the 'public salt' is stored at the same place as the password and the 'base salt' is stored elsewhere. The attacker then needs to get access to two different places.

The default hasher is configurable. You can specify the base salt, the number of iterations and the hash algorithm to be used. Use the hash algorithm name from any Shiro hash implementation. It also always returns the public salt from the hash request.
See the demo:

@Test
public void fullyConfiguredHasher() {
    ByteSource originalPassword = ByteSource.Util.bytes("Secret");

    byte[] baseSalt = {1, 1, 1, 2, 2, 2, 3, 3, 3};
    int iterations = 10;
    DefaultHasher hasher = new DefaultHasher();
    hasher.setBaseSalt(baseSalt);
    hasher.setHashIterations(iterations);
    hasher.setHashAlgorithmName(Sha256Hash.ALGORITHM_NAME);

    //custom public salt
    byte[] publicSalt = {1, 3, 5, 7, 9};
    ByteSource salt = ByteSource.Util.bytes(publicSalt);

    //use hasher to compute password hash
    HashRequest request = new SimpleHashRequest(originalPassword, salt);
    HashResponse response = hasher.computeHash(request);

    byte[] expectedHash = {55, 9, -41, -9, 82, -24, 101, 54, 116, 16, 2, 68, -89, 56, -41, 107, -33, -66, -23, 43, 63, -61, 6, 115, 74, 96, 10, -56, -38, -83, -17, 57};
    assertArrayEquals(expectedHash, response.getHash().getBytes());
}

If you need to compare passwords or data check-sums, provide the 'public salt' back to the same hasher. It will reproduce the hash operation. The example uses Shiro's DefaultHasher implementation:

@Test
public void hasherDemo() {
    ByteSource originalPassword = ByteSource.Util.bytes("Secret");
    ByteSource suppliedPassword = originalPassword;
    Hasher hasher = new DefaultHasher();

    //use hasher to compute password hash
    HashRequest originalRequest = new SimpleHashRequest(originalPassword);
    HashResponse originalResponse = hasher.computeHash(originalRequest);

    //Use salt from originalResponse to compare stored password with user supplied password. We assume that user supplied correct password.
    HashRequest suppliedRequest = new SimpleHashRequest(suppliedPassword, originalResponse.getSalt());
    HashResponse suppliedResponse = hasher.computeHash(suppliedRequest);
    assertEquals(originalResponse.getHash(), suppliedResponse.getHash());

    //important: the same request hashed twice may lead to different results
    HashResponse anotherResponse = hasher.computeHash(originalRequest);
    assertNotSame(originalResponse.getHash(), anotherResponse.getHash());
}

Note: as the supplied public salt in the above example was null, the default hasher generated a new random public salt.

Encryption / Decryption

A cipher encrypts data into ciphertext that is unreadable without a secret key. Ciphers are divided into two groups: symmetric and asymmetric. A symmetric cipher uses the same key for encryption and decryption. An asymmetric cipher uses two different keys: one for encryption and another for decryption. Apache Shiro contains two symmetric ciphers: AES and Blowfish. Both are stateless and thus thread-safe. Asymmetric ciphers are not supported.

Both ciphers are able to generate a random encryption key and both implement the CipherService interface. The interface defines two encryption and two decryption methods.
The first group serves for encryption/decryption of byte arrays:

ByteSource encrypt(byte[] raw, byte[] encryptionKey),
ByteSource decrypt(byte[] encrypted, byte[] decryptionKey).

The second group encrypts/decrypts streams:

encrypt(InputStream in, OutputStream out, byte[] encryptionKey),
decrypt(InputStream in, OutputStream out, byte[] decryptionKey).

The next code snippet generates a new key, encrypts a secret message with the AES cipher, decrypts it and compares the original message with the decryption result:

@Test
public void encryptStringMessage() {
    String secret = "Tell nobody!";
    AesCipherService cipher = new AesCipherService();

    //generate key with default 128 bits size
    Key key = cipher.generateNewKey();
    byte[] keyBytes = key.getEncoded();

    //encrypt the secret
    byte[] secretBytes = CodecSupport.toBytes(secret);
    ByteSource encrypted = cipher.encrypt(secretBytes, keyBytes);

    //decrypt the secret
    byte[] encryptedBytes = encrypted.getBytes();
    ByteSource decrypted = cipher.decrypt(encryptedBytes, keyBytes);
    String secret2 = CodecSupport.toString(decrypted.getBytes());

    //verify correctness
    assertEquals(secret, secret2);
}

Another snippet shows how to encrypt/decrypt streams with Blowfish. Shiro ciphers neither flush nor close the input and output streams. You have to do it yourself:

@Test
public void encryptStream() {
    InputStream secret = openSecretInputStream();
    BlowfishCipherService cipher = new BlowfishCipherService();

    // generate key with default 128 bits size
    Key key = cipher.generateNewKey();
    byte[] keyBytes = key.getEncoded();

    // encrypt the secret
    OutputStream encrypted = openSecretOutputStream();
    try {
        cipher.encrypt(secret, encrypted, keyBytes);
    } finally {
        // The cipher neither flushes nor closes the streams.
closeStreams(secret, encrypted); }// decrypt the secret InputStream encryptedInput = convertToInputStream(encrypted); OutputStream decrypted = openSecretOutputStream(); try { cipher.decrypt(encryptedInput, decrypted, keyBytes); } finally { // The cipher does not flush neither close streams. closeStreams(secret, encrypted); }// verify correctness assertStreamsEquals(secret, decrypted); } If you encrypt the same text with the same key twice, you will get two different encrypted texts: @Test public void unpredictableEncryptionProof() { String secret = 'Tell nobody!'; AesCipherService cipher = new AesCipherService();// generate key with default 128 bits size Key key = cipher.generateNewKey(); byte[] keyBytes = key.getEncoded();// encrypt two times byte[] secretBytes = CodecSupport.toBytes(secret); ByteSource encrypted1 = cipher.encrypt(secretBytes, keyBytes); ByteSource encrypted2 = cipher.encrypt(secretBytes, keyBytes);// verify correctness assertArrayNotSame(encrypted1.getBytes(), encrypted2.getBytes()); } Both previous examples used Key generateNewKey() method to generate keys. Use the method setKeySize(int keySize) to override the default key size (128 bits). Alternatively, the keyBitSize parameter of the method Key generateNewKey(int keyBitSize) specifies a key size in bits. Some ciphers support only some key sizes. For example, AES supports only 128, 192, and 256 bits log keys: @Test(expected=RuntimeException.class) public void aesWrongKeySize() { AesCipherService cipher = new AesCipherService(); //The call throws an exception. Aes supports only keys of 128, 192, and 256 bits. cipher.generateNewKey(200); }@Test public void aesGoodKeySize() { AesCipherService cipher = new AesCipherService(); //aes supports only keys of 128, 192, and 256 bits cipher.generateNewKey(128); cipher.generateNewKey(192); cipher.generateNewKey(256); } As far as basics go, this is it. You do not need more to encrypt and decrypt sensitive data in your applications. 
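Under the hood, Shiro delegates to the JDK's own javax.crypto (JCE) classes. For comparison, here is a minimal, self-contained sketch of a similar AES round trip using plain JCE. The class name, the AES/CBC/PKCS5Padding transformation and the explicit IV handling are my illustrative choices, not Shiro's actual internals:

```java
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;

public class JceAesDemo {

    // Encrypts and then decrypts a message with AES in CBC mode and PKCS#5
    // padding, using an explicit random IV. Shiro hides all of this plumbing.
    static String roundTrip(String secret) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128); // same default key size the Shiro examples use
        SecretKey key = keyGen.generateKey();

        byte[] iv = new byte[16]; // AES block size
        new SecureRandom().nextBytes(iv);

        Cipher encrypting = Cipher.getInstance("AES/CBC/PKCS5Padding");
        encrypting.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] ciphertext = encrypting.doFinal(secret.getBytes("UTF-8"));

        Cipher decrypting = Cipher.getInstance("AES/CBC/PKCS5Padding");
        decrypting.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
        return new String(decrypting.doFinal(ciphertext), "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("Tell nobody!"));
    }
}
```

Shiro's value-add is that it picks secure defaults and generates and stores the initialization vector for you, so calling code never has to touch these details.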
Update: I was overly optimistic here. Learning more is always useful, especially if you are handling sensitive data. This method is mostly, but not entirely, secure. Both the problem and the solution are described in my other post.

Encryption / Decryption – Advanced

The previous chapter showed how to encrypt and decrypt data. This chapter shows a little more about how Shiro encryption works and how to customize it. It also shows how to easily add a new cipher if the standard two are not suitable for you.

Initialization Vector

An initialization vector is a randomly generated byte array used during encryption. A cipher that uses an initialization vector is less predictable and thus harder for an attacker to break. Shiro automatically generates an initialization vector and uses it to encrypt the data. The vector is then concatenated with the encrypted data and returned to the client code. You can turn it off by calling setGenerateInitializationVectors(false) on the cipher. The method is defined on the JcaCipherService class, which both default encryption classes extend. The initialization vector size is algorithm specific. If the default size (128 bits) does not work, use the method setInitializationVectorSize to customize it.

Random Generator

Turning off the initialization vector does not necessarily mean that the cipher becomes predictable. Both Blowfish and AES have an element of randomness in them.
The following example turns off the initialization vector, yet the encrypted texts are still different:

```java
@Test
public void unpredictableEncryptionNoIVProof() {
  String secret = "Tell nobody!";
  AesCipherService cipher = new AesCipherService();
  cipher.setGenerateInitializationVectors(false);

  // generate key with default 128 bits size
  Key key = cipher.generateNewKey();
  byte[] keyBytes = key.getEncoded();

  // encrypt two times
  byte[] secretBytes = CodecSupport.toBytes(secret);
  ByteSource encrypted1 = cipher.encrypt(secretBytes, keyBytes);
  ByteSource encrypted2 = cipher.encrypt(secretBytes, keyBytes);

  // verify correctness
  assertArrayNotSame(encrypted1.getBytes(), encrypted2.getBytes());
}
```

It is possible to customize or turn off the randomness. However, never do this in production code: randomness is an absolute necessity for secure data encryption. Both Shiro encryption algorithms extend the JcaCipherService class, which has a setSecureRandom(SecureRandom secureRandom) method. SecureRandom is the standard Java JCE random number generator; extend it to create your own implementation and pass it to the cipher. Our ConstantSecureRandom implementation of SecureRandom always returns zero.
We supplied it to the cipher and turned off the initialization vector to create an insecure, predictable encryption:

```java
@Test
public void predictableEncryption() {
  String secret = "Tell nobody!";
  AesCipherService cipher = new AesCipherService();
  cipher.setSecureRandom(new ConstantSecureRandom());
  cipher.setGenerateInitializationVectors(false);

  // define the key
  byte[] keyBytes = {5, -112, 36, 113, 80, -3, -114, 77, 38, 127, -1, -75, 65, -102, -13, -47};

  // encrypt first time
  byte[] secretBytes = CodecSupport.toBytes(secret);
  ByteSource encrypted = cipher.encrypt(secretBytes, keyBytes);

  // verify correctness, the result is always the same
  byte[] expectedBytes = {76, 69, -49, -110, -121, 97, -125, -111, -11, -61, 61, 11, -40, 26, -68, -58};
  assertArrayEquals(expectedBytes, encrypted.getBytes());
}
```

The ConstantSecureRandom implementation is long and uninteresting; it is available on Github.

Custom Cipher

Out of the box, Shiro provides only the Blowfish and AES encryption methods. The framework does not implement its own algorithms. Instead, it delegates the encryption to JCE classes; Shiro provides only secure defaults and an easier API. This design makes it possible to extend Shiro with any JCE block cipher. Block ciphers encrypt messages in blocks. All blocks have an equal, fixed size; if the last block is too short, padding is added to make it the same size as all other blocks. Each block is encrypted and combined with the previously encrypted blocks. Therefore, you have to configure: the encryption method, the block size, the padding, and how blocks are combined.

Encryption Method

A custom cipher extends the DefaultBlockCipherService class. The class has only one constructor, taking a single parameter: the algorithm name. You may supply any JCE compatible algorithm name.
For example, this is the source code of the Shiro AES cipher:

```java
public class AesCipherService extends DefaultBlockCipherService {

  private static final String ALGORITHM_NAME = "AES";

  public AesCipherService() {
    super(ALGORITHM_NAME);
  }
}
```

AES does not need to specify any other encryption parameters (block size, padding, operation mode); the defaults are good enough for AES.

Block Size

The default block cipher service has two methods for block size customization. The method setBlockSize(int blockSize) works only for byte array encryption and decryption. The method setStreamingBlockSize(int streamingBlockSize) works only for stream encryption and decryption. The value 0 means that the default, algorithm specific block size will be used; this is the default value. The block size is very algorithm specific, and the selected encryption algorithm may not work with an arbitrary block size:

```java
@Test(expected = CryptoException.class)
public void aesWrongBlockSize() {
  String secret = "Tell nobody!";
  AesCipherService cipher = new AesCipherService();
  // set wrong block size
  cipher.setBlockSize(200);

  // generate key with default 128 bits size
  Key key = cipher.generateNewKey();
  byte[] keyBytes = key.getEncoded();

  // encrypt the secret
  byte[] secretBytes = CodecSupport.toBytes(secret);
  cipher.encrypt(secretBytes, keyBytes);
}
```

Padding

Use the method setPaddingScheme(PaddingScheme paddingScheme) to specify byte array encryption and decryption padding. The method setStreamingPaddingScheme(PaddingScheme paddingScheme) specifies stream encryption and decryption padding. The enumeration PaddingScheme represents all typical padding schemes. Not all of them are available by default; you might have to install a custom JCE provider to use them. The value null means that the default, algorithm specific padding will be used; this is the default value. If you need a padding not included in the PaddingScheme enumeration, use either the setPaddingSchemeName or setStreamingPaddingSchemeName methods.
These methods take a string with the padding scheme name as a parameter. They are less type-safe but more flexible than the ones above. Padding is very algorithm specific, and the selected encryption algorithm may not work with an arbitrary padding:

```java
@Test(expected = CryptoException.class)
public void wrongPadding() {
  String secret = "Tell nobody!";
  BlowfishCipherService cipher = new BlowfishCipherService();
  // set wrong padding scheme
  cipher.setPaddingScheme(PaddingScheme.PKCS1);

  // generate key with default 128 bits size
  Key key = cipher.generateNewKey();
  byte[] keyBytes = key.getEncoded();

  // encrypt the secret
  byte[] secretBytes = CodecSupport.toBytes(secret);
  cipher.encrypt(secretBytes, keyBytes);
}
```

Operation Mode

The operation mode specifies how blocks are chained (combined) together. As with the padding scheme, you may use either the OperationMode enumeration or a string to supply it. Be careful: not every operation mode may be available, and they are not all created equal. Some chaining modes are less safe than others. The default Cipher Feedback operation mode is both safe and available in all JDK environments. Methods to set the operation mode for byte array encryption and decryption: setMode(OperationMode mode) and setModeName(String modeName). Methods to set the operation mode for stream encryption and decryption: setStreamingMode(OperationMode mode) and setStreamingModeName(String modeName).

Exercise – Decrypt OpenSSL

Suppose that an application sends data encrypted with the Linux openssl command. We know both the hexadecimal representation of the key and the command used to encrypt the data:

The key: B9FAB84B65870109A6E8707BC95151C245BF18204C028A6A.
The command: openssl des3 -base64 -p -K <secret key> -iv <initialization vector>.

Each message contains both the hexadecimal representation of the initialization vector and the base64 encoded encrypted message. Sample message:

The initialization vector: F758CEEB7CA7E188.
The message: GmfvxhbYJbVFT8Ad1Xc+Gh38OBmhzXOV.

Generate Sample With OpenSSL

The sample message was encrypted with the command:

```
#encrypt 'yeahh, that worked!'
echo yeahh, that worked! | openssl des3 -base64 -p -K B9FAB84B65870109A6E8707BC95151C245BF18204C028A6A -iv F758CEEB7CA7E188
```

Use the OpenSSL option -P to generate either a secret key or a random initialization vector.

Solution

First, we have to find out the algorithm name, padding, and operation mode. Fortunately, all three are available in the OpenSSL documentation. Des3 is an alias for the Triple DES encryption algorithm in CBC mode, and OpenSSL uses PKCS#5 padding. Cipher-block chaining (CBC) requires an initialization vector of the same size as the block size, and Triple DES uses 64 bit blocks. Java JCE uses the algorithm name "DESede" for Triple DES. Our custom cipher extends and configures DefaultBlockCipherService:

```java
public class OpensslDes3CipherService extends DefaultBlockCipherService {

  public OpensslDes3CipherService() {
    super("DESede");
    setMode(OperationMode.CBC);
    setPaddingScheme(PaddingScheme.PKCS5);
    setInitializationVectorSize(64);
  }
}
```

Shiro's cipher decrypt method expects two input byte arrays: the ciphertext and the key. The ciphertext should contain both the initialization vector and the encrypted data. Therefore, we have to combine them before we try to decrypt the message.
The method combine joins the two arrays into one:

```java
private byte[] combine(byte[] iniVector, byte[] ciphertext) {
  byte[] ivCiphertext = new byte[iniVector.length + ciphertext.length];

  System.arraycopy(iniVector, 0, ivCiphertext, 0, iniVector.length);
  System.arraycopy(ciphertext, 0, ivCiphertext, iniVector.length, ciphertext.length);

  return ivCiphertext;
}
```

The actual decryption looks as usual:

```java
@Test
public void opensslDes3Decryption() {
  String hexInitializationVector = "F758CEEB7CA7E188";
  String base64Ciphertext = "GmfvxhbYJbVFT8Ad1Xc+Gh38OBmhzXOV";
  String hexSecretKey = "B9FAB84B65870109A6E8707BC95151C245BF18204C028A6A";

  // decode secret message and initialization vector
  byte[] iniVector = Hex.decode(hexInitializationVector);
  byte[] ciphertext = Base64.decode(base64Ciphertext);

  // combine initialization vector and ciphertext together
  byte[] ivCiphertext = combine(iniVector, ciphertext);

  // decode secret key
  byte[] keyBytes = Hex.decode(hexSecretKey);

  // initialize cipher and decrypt the message
  OpensslDes3CipherService cipher = new OpensslDes3CipherService();
  ByteSource decrypted = cipher.decrypt(ivCiphertext, keyBytes);

  // verify result
  String theMessage = CodecSupport.toString(decrypted.getBytes());
  assertEquals("yeahh, that worked!\n", theMessage);
}
```

End

This part of the Apache Shiro tutorial covered the cryptography features available in version 1.2. All used examples are available on Github.

Reference: Apache Shiro Part 3 – Cryptography from our JCG partner Maria Jurcovicova at the This is Stuff blog.
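The same DESede/CBC/PKCS#5 parameters from the exercise above can also be exercised with plain JCE and no Shiro on the classpath. A small, self-contained sketch (the class and helper names are mine) that round-trips the exercise's plaintext with the article's key and IV:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class JceDes3Demo {

    // Decodes a hexadecimal string into bytes (stand-in for Shiro's Hex.decode).
    static byte[] fromHex(String hex) {
        byte[] out = new byte[hex.length() / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        }
        return out;
    }

    // Encrypts and decrypts with Triple DES in CBC mode and PKCS#5 padding,
    // the same parameters the openssl des3 command uses.
    static String roundTrip(String plaintext, String hexKey, String hexIv) throws Exception {
        SecretKeySpec key = new SecretKeySpec(fromHex(hexKey), "DESede");
        IvParameterSpec iv = new IvParameterSpec(fromHex(hexIv));

        Cipher cipher = Cipher.getInstance("DESede/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, iv);
        byte[] encrypted = cipher.doFinal(plaintext.getBytes("UTF-8"));

        cipher.init(Cipher.DECRYPT_MODE, key, iv);
        return new String(cipher.doFinal(encrypted), "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        System.out.print(roundTrip("yeahh, that worked!\n",
                "B9FAB84B65870109A6E8707BC95151C245BF18204C028A6A",
                "F758CEEB7CA7E188"));
    }
}
```

Note that with a fixed key and IV the ciphertext is fully deterministic — which is exactly why Shiro normally generates a fresh random initialization vector for every encryption.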

Understanding how OSGI bundles get resolved

I’d like to review how OSGI bundles get resolved and use Apache Karaf to demonstrate. Karaf is a full-featured OSGI container based on the Apache Felix kernel and is the cornerstone of the Apache ServiceMix integration container. In part one, I will discuss how bundles are resolved by an OSGI framework. In part two, I’ll demonstrate each rule using Apache Karaf. Let’s get started.

Bundle Resolution Rules

An OSGI bundle’s lifecycle defines the possible states and transitions for a bundle. We will be discussing the “Resolved” state of a bundle, i.e., the state it can reach after being “Installed” once all of its required dependencies are satisfied. Traditional Java classloading is susceptible to runtime ClassCastExceptions where two classes with the same fully-qualified name from two different class loaders become mixed up and one is used in the wrong classpath space. One of the main goals of OSGI is to avoid this kind of runtime exception by resolving all dependencies at deploy time, the idea being that failing “fast” at deploy time is easier to debug than trying to track down classloading issues at runtime. Think about how annoying some of the class-not-found or class-cast exceptions are to debug in a WebLogic deployment, for example. OSGI solves this. For a bundle to reach the “Resolved” state, it must have its dependencies fulfilled. Think of the “fail fast” approach to bundle resolution like this: if you use a Spring application and one of your beans cannot be wired properly because a bean definition is missing, you will know this at deploy time instead of when a customer is calling your code.
The same principle is applied with OSGI; instead of object-level wiring dependencies, we are wiring module and class-loading dependencies. A trivial explanation of a bundle having its dependencies resolved could go like this: if a bundle imports (Import-Package) a specific package, that package must be made available by another bundle’s exports (Export-Package). If bundle A has Import-Package: org.apache.foo, then there must be a bundle deployed that has an Export-Package: org.apache.foo.

For every Import-Package package declaration, there must be a corresponding Export-Package with the same package.

Bundles can also attach other attributes to the packages they import or export. What if we added a version attribute to our example:

```
Bundle-Name: Bundle A
Import-Package: org.apache.foo;version="1.2.0"
```

This means Bundle A has a dependency on package org.apache.foo with a minimum version of 1.2.0. Yes, you read correctly: although OSGI lets you specify a range of versions, if you don’t specify a range but rather use a fixed version, it means “a minimum” of the fixed value. If there is a higher version of that same package, the higher version will be used. So bundle A will not resolve correctly unless there is a corresponding bundle B that exports the required package:

```
Bundle-Name: Bundle B
Export-Package: org.apache.foo;version="1.2.0"
```

Note that the reverse is not true. If Bundle B exports version 1.2.0, Bundle A is not required to specify a version 1.2.0. It can use this import and resolve just fine:

```
Bundle-Name: Bundle A
Import-Package: org.apache.foo
```

This is because imports declare the versions they need. An exported version does not specify anything an importing bundle must use (which holds for any attribute, not just version).
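The "fixed version means minimum" behaviour can be illustrated with a toy model. This is deliberately simplified Java, not real OSGI resolver code; the class and method names are mine:

```java
import java.util.Arrays;
import java.util.List;

public class ToyResolver {

    // A toy Export-Package entry: a package name plus a three-part version,
    // e.g. org.apache.foo;version="1.2.0".
    public static class Export {
        public final String pkg;
        public final int[] version;
        public Export(String pkg, int[] version) { this.pkg = pkg; this.version = version; }
    }

    // Compares two three-part versions, major first.
    static int compare(int[] a, int[] b) {
        for (int i = 0; i < 3; i++) {
            if (a[i] != b[i]) return Integer.compare(a[i], b[i]);
        }
        return 0;
    }

    // A fixed import version acts as a minimum: any export of the same package
    // with an equal or higher version satisfies it, and the highest candidate wins.
    public static Export resolve(List<Export> exports, String pkg, int[] minVersion) {
        Export best = null;
        for (Export e : exports) {
            if (!e.pkg.equals(pkg) || compare(e.version, minVersion) < 0) continue;
            if (best == null || compare(e.version, best.version) > 0) best = e;
        }
        return best;
    }

    public static void main(String[] args) {
        List<Export> exports = Arrays.asList(
            new Export("org.apache.foo", new int[]{1, 2, 0}),
            new Export("org.apache.foo", new int[]{1, 3, 0}));

        // Import-Package: org.apache.foo;version="1.2.0" resolves to 1.3.0,
        // because a fixed version only sets the lower bound.
        Export chosen = resolve(exports, "org.apache.foo", new int[]{1, 2, 0});
        System.out.println(chosen.version[0] + "." + chosen.version[1] + "." + chosen.version[2]);
    }
}
```

A real resolver also honours version ranges, installation order and already-resolved wires, which the rules discussed next cover.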
Import-Package dictates exactly what version (or attribute) it needs, and a corresponding Export-Package with the same attribute must exist.

What happens if you have a scenario where Bundle A imports a package, and the version it specifies is provided by two bundles:

```
Bundle-Name: Bundle A
Import-Package: org.apache.foo;version="1.2.0"

Bundle-Name: Bundle B
Export-Package: org.apache.foo;version="1.2.0"

Bundle-Name: Bundle C
Export-Package: org.apache.foo;version="1.2.0"
```

Which bundle does Bundle A use? The answer is: it depends on which bundle (B or C) was installed first.

Bundles installed first are used to satisfy a dependency when multiple packages with the same version are found.

Things can get a little more complicated when hot deploying bundles after some have already been resolved. What if you install Bundle B first, then try to install Bundle A and the following Bundle D together:

```
Bundle-Name: Bundle D
Export-Package: org.apache.foo;version="1.3.0"
```

As we saw above, the version declaration in Bundle A (1.2.0) means a minimum version of 1.2.0; if a higher version were available, it would select that (version 1.3.0 from Bundle D in this case). However, that brings us to another temporal rule of bundle resolution:

Bundles that have already been resolved have a higher precedence than those not resolved.

The reason for this is that the OSGI framework tends to favor reusability for a given bundle. If a package is resolved and new bundles need it, the framework won’t bring in many other versions of the same package if it doesn’t need to.

Bundle “uses” directive

The above rules for bundle resolution are still not enough, and the wrong class could still be used at runtime, resulting in a class-cast exception or similar. Can you see what could be missing? What if we had this scenario: Bundle A exports a package, org.apache.foo, that contains a class, FooClass.
FooClass has a method that returns an object of type BarClass, but BarClass is not defined in the bundle’s class space; it’s imported, like this:

```java
public class FooClass {
  public BarClass execute() { ... }
}
```

```
Bundle-Name: Bundle A
Import-Package: org.apache.bar;version="3.6.0"
Export-Package: org.apache.foo;version="1.2.0"
```

So far everything is fine, as long as there is another bundle that properly exports org.apache.bar with the correct version:

```
Bundle-Name: Bundle B
Export-Package: org.apache.bar;version="3.6.0"
```

These two bundles will resolve fine. Now, if we install two more bundles, Bundle C and Bundle D, that look like this:

```
Bundle-Name: Bundle C
Import-Package: org.apache.foo;version="1.2.0", org.apache.bar;version="4.0.0"

Bundle-Name: Bundle D
Export-Package: org.apache.bar;version="4.0.0"
```

We can see that Bundle C imports a package, org.apache.foo, from Bundle A. Bundle C can try to use FooClass from org.apache.foo, but when it gets the return value, of type BarClass, what will happen? Bundle A expects to use version 3.6.0 of BarClass, but Bundle C is using version 4.0.0. So the classes used are not consistent between bundles at runtime (i.e., you could experience some type mismatch or class cast exception), yet everything will still resolve just fine at deploy time following the rules from above. What we need is to tell anyone that imports org.apache.foo that we use classes from a specific version of org.apache.bar, and if you want to use org.apache.foo you must use the same version that we import. That’s exactly what the uses directive does. Let’s change Bundle A to specify exactly that:

```
Bundle-Name: Bundle A
Import-Package: org.apache.bar;version="3.6.0"
Export-Package: org.apache.foo;version="1.2.0";uses:="org.apache.bar"
```

Given the new configuration for Bundle A, the bundles from above would not resolve correctly.
Bundle C cannot resolve, because it imports org.apache.foo, but the “uses” constraint on Bundle A specifies that C must use the same version of org.apache.bar that A does (3.6.0); otherwise the bundle will not resolve when you try to deploy it. The solution is to change the version of org.apache.bar in Bundle C to 3.6.0.

Using the Apache Karaf OSGI container

Karaf is based on the Apache Felix core, although the Equinox core can be substituted if desired. Karaf is a full-featured OSGI container and is the cornerstone of the Apache ServiceMix integration container. ServiceMix is basically Karaf specifically tuned for Apache Camel, Apache ActiveMQ and Apache CXF. This tutorial requires Maven and Karaf. Download Maven from the Maven website. Download and install Karaf as described in the getting started guide on the Karaf website. You will also need the code that goes along with this example; you can get it at my github repo. After getting it, make sure to run ‘mvn install’ from the top-level project. This will build and install all of the bundles into your local Maven repository. Although you can install bundles a couple of different ways, using Maven is easiest. Note that this sample code is mostly made up of package names without any real Java classes (except where the tutorial specifies). The first thing to do is start up Karaf. In a plain distribution there should be no bundles installed; verify this by typing “osgi:list” at the Karaf command line. Going in order, we will test the rules given above.

For every Import-Package package declaration, there must be a corresponding Export-Package with the same package.

To test this rule, let’s install Bundle A from our sample bundles. Bundle A specifies an Import-Package of the “org.apache.foo” package. According to the first rule, this bundle cannot move to the “Resolved” state since there is no corresponding bundle with an “Export-Package” of org.apache.foo.
From the Karaf command line, type “osgi:install mvn:explore-bundle-resolution/bundleA/1.0”. This will install the bundleA bundle. Now do an “osgi:list” again. You should see the bundle installed, and under the “State” column it should say “Installed”. Now try “osgi:resolve <bundle id>”, where <bundle id> is the ID listed by the “osgi:list” command. This will try to resolve all bundle dependencies and put it into the “Resolved” state. It won’t resolve, however. Type “osgi:list” again to see the state of the bundle; it’s still in the “Installed” state even though we asked OSGI to resolve it. Let’s find out why. Execute “osgi:headers <bundle id>”. Under Import-Package, you should see the package name org.apache.foo listed in red. This dependency is missing, so let’s add it. Type “osgi:install -s mvn:explore-bundle-resolution/bundleB/1.0”. Note the ‘-s’ switch in the command; this tells OSGI to start the bundle once it’s installed. Now type the osgi:resolve command again (with the appropriate bundle ID). This will now resolve the bundle.

Import-Package dictates exactly what version (or attribute) it needs, and a corresponding Export-Package with the same attribute must exist.

Let’s install bundle C: “osgi:install -s mvn:explore-bundle-resolution/bundleC/1.0”. List the bundles again, and you’ll see that although bundle C depends on org.apache.foo, it specifies an Import-Package with a specific version=1.5. There is no version 1.5 that is resolved, so bundle C will also not resolve. Bundle D happens to export a package org.apache.foo with a version equal to 1.5. Install bundle D the same way we’ve installed the others, using -s to start it. Now try to resolve bundle C and it should work (“osgi:resolve <bundle id>”).
Bundles installed first are used to satisfy a dependency when multiple packages with the same version are found.

This rule says that if there are multiple packages exported with the same version, OSGI will choose the first-installed bundle when trying to resolve bundles that import the package. Continuing on with the previous example where we installed bundles C and D, consider that bundle D exports org.apache.foo;version=1.5. So if we install bundle F, which exports the exact same package and version, we should see that bundle C is resolved with the package from bundle D and not bundle F. Let’s see: install bundle F with “osgi:install -s mvn:explore-bundle-resolution/bundleF/1.0”. Do an osgi:list and see that both bundles D and F are correctly installed and “Active”. This is a cool feature of OSGI: we can have multiple versions of the same package deployed at the same time (including, in this example, the exact same version). Now we should uninstall bundle C and re-install it to see which bundle it uses to resolve its import of org.apache.foo. Try running “osgi:uninstall <bundle id>” to uninstall bundle C, then re-install it using the command from above. It should resolve against bundle D. Use “package:import <bundle id>” to verify. You can try switching things around to get F to resolve. You may need to use “osgi:refresh” to refresh the OSGI bundles.

Bundles that have already been resolved have a higher precedence than those not resolved.

In a way, we have already seen this with the previous rule, but this rule comes into play when hot deploying. This is left as an exercise for the reader, as this post is already getting pretty long and I would like to cover the “uses” directive next.

Bundle “uses” directive

The “uses” directive adds one of the last rules and constraints to avoid runtime class-cast exceptions. To simulate how the “uses” directive works, we will install bundles G, H, I, and J and observe how the container enforces the “uses” directive.
Bundle G represents a sort of “service” module that client modules can call to “execute” some form of processing and return a result. The result it returns is an object of type BarClass that comes from bundle H. But if a client makes a call to bundle G, it too must use the BarClass from bundle H or it will result in a class cast exception. In our samples, bundle I is the client code and bundle J represents a different version of BarClass. Install the bundles in any order you like, but my demonstration followed this order: J, H, G, I. Note that the version of org.apache.bar used is indeed the 2.0.0 version from bundle H, even though bundle H was installed second (contrary to the rule above). This is because bundle G specified the “uses” directive to depend on a specific version of org.apache.bar.

Reference: Understanding how OSGI bundles get resolved from our JCG partner Christian Posta at the Christian Posta Software blog.

A crash course in Scala types

After many years of Java development, discovering Scala’s type system and related features was something of a departure for me. Suffice it to say, GADT wasn’t my first four-letter utterance when learning about pattern matching on types, let alone the what, when and how of variance annotations and generalized type constraints. To kick things off, here’s a ‘small but powerful‘ few lines of buzzword bingo on the type system: …Scala is a statically, strongly, typed language, with implicit type inference and support for structural and existential types. It also features parameterized types, abstract and phantom types and is capable of implicitly converting between datatypes. These core capabilities are utilized by context and view bounds and complemented by generalized type constraints, to provide powerful compile time contracts. Furthermore, Scala supports declaration site type annotations to facilitate invariant, covariant and contravariant type variance… In a word: Ouch! In the remainder of this post, I’ll try to demystify these concepts and sow the seeds of intrigue for further investigation. In order to keep the post at a manageable length, some detail will be alluded to and links provided (both at the end and inline) for the reader to pursue independently. Time and space permitting, I’ll try to briefly cover the hows and whys of some of these features to give context to their practical importance and implications. Apologies in advance for the length of the post, but there’s a lot of ground to cover and lots of opportunity for the reader to skim. Keep with it, as there’s gold in them there types, unsurprising given the adventurous depth of features that the language tries to mine. So, first up, what is a type system?
One author’s definition (which I found thorough, if a little cryptic): ‘..A type system is a tractable syntactic method for proving the absence of certain program behaviors by classifying phrases according to the kinds of values they compute’ from Ben Pierce’s ‘Types and Programming Languages‘ – a bible for anyone interested in type system theory. And again: Ouch! I prefer the lay interpretation, whereby a type provides some sort of label to a type system. In turn, this enables the type system to prove (or constrain) some property of the program’s behaviour. Practically, a type system lets either the compiler [typically] or the runtime add some meaning to the data and values/variables (something I’ll henceforth refer to generically as fields/elements, due to the overloading of the terms val and var in Scala), in order to react or fail appropriately.

2 fat ladies…

So, deconstructing the buzzword bingo above, the primary attribute listed is that of being statically typed. What does this mean? Static typing provides COMPILE TIME constraints, checks and balances and, as such, provides the first line of defence, QA and feedback against program errors. The converse of this is dynamic typing, where type attribution to elements is determined at runtime. Besides the early feedback loop, other commonly cited benefits of static typing are: better performance and ability to optimise the code – as the compiler can perform more optimisations and remove the need for type checks at runtime; better implicit and explicit documentation support – as method signatures make this implicit in the code and explicit in any code generated from the source, and type information can be used to convey the author’s intent; better tooling support and ability to analyse the code – as the tools can check the types being passed between methods and into constructors, etc.;
better support for correctness (see Rice’s theorem, this slide deck on structural induction and the type checker and the Curry-Howard isomorphism) – as we’ll see later in this piece, correctness can be further supported by judicious use of type constriants. better abstraction/modularity – as support for abstract types allows the author to frame the problem differently and (potentially), in a more modular fashion.Having said that, in practise, few languages are exclusively dynamic or statically typed. Given this list of static type system features, why would anyone use dynamic typing ? Dynamic typing is typically considered to provide greater flexibility for building loosely coupled systems and for rapid prototyping. These are fairly natural benefits of dynamic languages as they do not have to adhere to the constraints imposed by the compiler, but this comes at the cost of the static typing benefits aforementioned. Learning to let go… letting go to learn… Again, coming from a Java background, my natural instincts assumed: a) I had a reasonably good understanding of static typing having spent many years doing Java development and; b) that statically typed languages are inherently bloated and require a lot of boilerplate code. From the Scala perspective, it is both interesting to see how quickly you hit the limits of Javas support for types ‘out-of-the-box’ (i.e. without adding external libraries or significant hacking of the core system), and how Scala shatters the assumption that static typing == code bloat. One such cruft cutting feature in Scala is evident with type inference support. With this, elements only need to provide their type information ‘on the right’ when they are declared. So the syntax for Scala element declaration tends be ordered by salience [IMHO] with the following precedence in place: the elements mutability status; the name of the element, then; the type information (which is held exclusively, (and not repeated.. 
that's right, not repeated) 'on the right':

val a = 1   // this type gets inferred to be of type Int
val b = "b" // this type is inferred to be a String
val c = 2.0 // this type is inferred to be a Double

case class SomeThing
class SomeOtherThing

val d = SomeThing          // this is instantiated as a SomeThing. Note no need for the 'new' keyword, as this is a case class
val e = new SomeOtherThing // this type requires the new keyword, as no factory method is created for non case classes

A further point of note around the type inference strategy used by Scala is that 'local type inference' (aka 'flow type inference') is used, instead of the Damas-Milner (aka Hindley-Milner aka HM) strategy used in other statically typed, implicitly inferred languages (see also System F and its variants). Types cubed Indeed, much has been made of the complexity and richness of Scala's type support, with Scala's abstract types being a cause of confusion and much distress (e.g. see this post for an example). As the name suggests, abstract types in Scala allow for types to be referred to in the abstract and hence used as field-level members of classes. This means that type fields can act as placeholders for types to be realized at a later date, making it possible to lazily design the concrete types as the solution to the problem is uncovered.. providing just enough (lagom™) design input. In a sense they are similar to the type references used in parameterized types (i.e. types that require a type parameter for their declaration, such as Java generics), though broken out of their Collection containers. (Note: the idiomatic distinction between parameterized types and abstract types tends to be whether the type indicator is being used for a collection vs other scenarios. See 'A Statically Safe Alternative to Virtual Types').
import scala.collection.GenSeq

trait SimpleListTypeContainer {
  type Simple                       // declare an abstract type with the label Simple
  type SimpleList <: GenSeq[Simple] // constrain the SimpleList abstract type based on the previously defined abstract type
}

An additional feature of abstract types is that type bounds can be used, (i.e. the actual concrete types that are permitted for a declaration can be constrained programmatically and enforced at compile time). This allows for the intent in the code to be made explicit, such as when trying to suggest certain types of specialisation (e.g. family polymorphism). Coupled with self types, this makes for a powerful set of type tools. Self types allow 'this' references to be explicitly tied to another class using the 'self' keyword, (so within the code it is possible to make the 'this' reference mean a different type than the actual containing type).

trait DB { def startDB: Unit } // defines an abstract start for a DB component
trait MT { def startMT: Unit } // defines an abstract start for an MT component

trait Oracle extends DB { def startDB = println("Starting Oracle") } // some actual concrete instances.. dummied for the example
trait Service extends MT { def startMT = println("Starting Service") }

trait App { self: DB with MT => // declare that self for App refers to a DB with MT, so that we have access to the startXX ops
  def run = {
    startDB
    startMT
  }
}

object DummyApp extends App with Oracle with Service // create a concrete instance with an actual DB and MT instance
DummyApp.run // run it and see "Starting Oracle" then "Starting Service"

Self types have a number of uses, such as: providing traits or abstract classes visibility to the fields and/or methods of the class they are masquerading as / mixed into; as a type-safe, (i.e. compile-time checked) way to perform declarative dependency injection (see the cake pattern).
One little duck Scala also facilitates type-safe duck typing via structural types (see here for a side-by-side comparison of structural typing and duck typing).

class Duck {
  def squawk = println("Quack")
  def waddle = println("Duck walk")
}

class Penguin {
  def squawk = println("Squeek")
  def waddle = println("Penguin walk")
}

class Person { }

// everybody's heard about the word...
def birdIsTheWord(bird: { def squawk; def waddle }) = {
  bird.squawk
  bird.waddle
}

birdIsTheWord(new Duck())    // prints "Quack" then "Duck walk"
birdIsTheWord(new Penguin()) // prints "Squeek" then "Penguin walk"
birdIsTheWord(new Person())  // Will not compile

The ability to use classes according to some feature(s) of their structure, (rather than by name), has a number of uses, such as: in conjunction with implicits (still to come) as part of a Smart Adapter Pattern; for creating quick ad-hoc prototyping code; as an enabler for method reuse in cases where client classes are unrelated, but share an internal structural feature (Note: the typical example touted here is a Gun and a Camera being unrelated items, but both having a shoot() method! An example that has the dual purpose of also highlighting the inherent dangers of structural and dynamic typing per se.. for some reason, this example always reminds me of the early 90s film 'Let Him Have It'). An observation thus far is that the few aforementioned core type system building blocks have the effect of triggering a shift in mindset and approach to problems. In fact, the notion of 'thinking in Scala' is not one of syntactic complexity (IMHO), but rather what is the 'best' (by any subjective measure of best) idiomatic use of the extensive feature set provided.
Personally, I've found myself deconstructing problems into their expected inputs and desired outputs, and looking at modelling my problem domain in terms of types and the operations that happen on those types, as opposed to looking at the world through Object-tinged spectacles. Luckily there are some resources (listed at the bottom of the page) that help when trying to investigate further, and some idioms (like theorems) come for free! One such construct is that of dependent types, for which Tuple instantiation provides the simplest example. Extending the notion of dependent types and building upon the inherent nesting capabilities, Scala also supports path-dependent types. As the name suggests, any types created are pivoted around the namespace in which they are created. Idiomatically, path-dependent types have been used in making component-oriented software and in the Cake Pattern method for handling dependency injection. Interestingly, path- and value-dependent types can also be interleaved, as this example demonstrates.

// First a simple example of Tuple creation

val myTuple2 = ("A", 1)
val myTuple3 = ("B", 2, true) // creates a Tuple3 of String, Int, Boolean

// Now on to the path-dependent types stuff

trait BaseTrait {
  type T
  val baseA: T
}

class ChildA[String](param: String) extends BaseTrait {
  type T = String
  override val baseA = param
  println(baseA)
}

val childA = new ChildA("A")

type B = String

class ChildB[String](param: String) extends BaseTrait {
  // type T = B        // Won't compile - baseA (below) has an incompatible type!
  // type T = childA.T // Won't compile - baseA (below) has an incompatible type!
  type T = String
  override val baseA = param
  println(baseA)
}

val childB = new ChildB("B")

Variance annotations.. ooh.. pardon? Generic types in Scala are invariant by default (i.e. they can only accept the exact type as a parameter with which they were declared). Scala provides other variance annotations to permit covariant type usage (i.e.
permitting any children of the declared type), and contravariant usage of declared types (i.e. parents of the declared type are also permitted, but no child types). These variance declarations (aka variance annotations) are specified at the declaration site (i.e. where the initial parameterized type is declared), as opposed to at the usage site (as is the case in Java, where each instantiation is free to define what is expected in the parameterized type for the specific use). So what does all this mean, and how does this manifest itself in the code? And what is the use of having such constraints on parameterised types? Probably the clearest explanation of this I have read is from O'Reilly's excellent Programming Scala. To supplement the description given in Programming Scala, let's walk through the implications of their func2-script.scala:

// Taken from the excellent tutorial here:
// http://programming-scala.labs.oreilly.com/ch12.html#VarianceUnderInheritance
// Note this sample leverages the Function1 trait: Function1[-T, +R]

// WON'T COMPILE

class CSuper { def msuper = println("CSuper") }
class C extends CSuper { def m = println("C") }
class CSub extends C { def msub = println("CSub") }

def useF(f: C => C) = {
  val c1 = new C    // #1
  val c2: C = f(c1) // #2
  c2.msuper         // #3
  c2.m              // #4
}

useF((c: C) => new C)                              // #5
useF((c: CSuper) => new CSub)                      // #6
useF((c: CSub) => { println(c.msub); new CSuper }) // #7: ERROR!

Taken from: O'Reilly Programming Scala. Given that the trait Function1 has the declaration Function1[-T, +R], the variance annotation for the type of the single argument parameter [the -T bit of the declaration] is contravariant, and therefore accepts the explicitly declared type and any parent type of T, whereas the return type [the +R bit] is covariant, and so enforces that any return parameter is either of type R or a child type of R, i.e. the return type 'is-an' R.
What this means is that we contractually expect any function to be able to take (as a minimum) an instance of a T and return an R. Hence any client code using this function can be confident in assuming it will be able to call any methods advertised on type R on the value returned from the function. The contravariant input parameter of type T also allows for broader application of the function, i.e. a more broad/general-purpose function could be substituted for an instance that just accepts the type T. For example, given the following (pseudo) Function1 description: Function1(Ninja, Pirate), a substitute (strictly speaking a child function type) function that is able to take a more generic type (such as a Person, i.e. Function1(Person, Pirate)) and return a Pirate would be a valid substitute function in accordance with the advertised contract. By the same token, any function that could accept a Ninja and return Long John Silver would be a valid conversion from a Ninja to a Pirate. In this instance the function is actually returning a specialisation of the Pirate return type. While covering parameterized types in Scala, it's a good juncture to mention context bounds in this… erm, context! Context bounds extend the functionality of a given type by using the seed type as the type parameter for a parameterised type. For example, given a class A and a trait B[T], context bounds provide syntactic sugar to allow for an instance of B[A], (hence allowing method calls from type B to be issued against an instance of type A). The syntactic sugar here would be def someMethod[A: B](a: A) = implicitly[B[A]].someMethodOnB(a) (Note: there is an excellent writeup of context bounds provided by Debasish Ghosh here). One more time… 79! Having previously discussed abstract types, it's worth seeing how they relate to the variance annotations just mentioned.
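Circling back to the Ninja/Pirate Function1 substitution described a moment ago, here is a minimal compile-checkable sketch. The class names follow the article's pseudo-example; the convert helper and the exact class hierarchy are assumptions added purely for illustration:

```scala
class Person
class Ninja extends Person
class Pirate
class LongJohnSilver extends Pirate

// The advertised contract: turn a Ninja into a Pirate
def convert(f: Ninja => Pirate, n: Ninja): Pirate = f(n)

// A function accepting a more general input (Person) is a valid
// substitute, because Function1 is contravariant in its input...
val general: Person => Pirate = (p: Person) => new Pirate

// ...and one returning a more specific output (LongJohnSilver) is
// also valid, because Function1 is covariant in its result
val specific: Ninja => LongJohnSilver = (n: Ninja) => new LongJohnSilver

convert(general, new Ninja)  // compiles: Person => Pirate <: Ninja => Pirate
convert(specific, new Ninja) // compiles: Ninja => LongJohnSilver <: Ninja => Pirate
```

Both substitutions satisfy Function1[-T, +R]: the caller still supplies a Ninja and still gets back something that 'is-a' Pirate.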
Functionally, (though with a degree of shoehorning), abstract types and variance annotations could be used interchangeably. In practice, the two constructs have different histories and different intents. Variance annotations essentially apply to construct declarations and are commonly used for parameterized types (e.g. in declaring Lists of things, or Options of specific types). In the Java world, parameterized types have manifested as Java Generics, and their use is much more widespread within the OO domain, where inheritance is a key feature. A corollary of abstract types and type bounds support in Scala is the availability of phantom types. Essentially, phantom types are type variables that are not instantiated at runtime, but are used by the compiler to enforce constraints in the source code. As such, phantom types can be used for type-level programming in Scala, adding another tier of support for program correctness (e.g. a sample use of phantom types in type-safe reflection, and a type-safe builder pattern with phantom types in Scala). View bounds are related to phantom types in that they further constrain the use of types based on their ability to be converted into other types. An example of this use of view bounds was provided in the structural typing example earlier. View bounds are also inherent to the 'pimp my library' pattern, where a class can have its functionality extended by adding functionality from other classes, but [importantly] without changing the original class. This means that the original class can be returned from a call even though it may get used under the guise of another class. Also, although Scala supports implicit conversion of types, (most explicitly seen via view bounds), it is still a strongly typed language, (i.e. only explicitly defined implicit conversions are possible; the compiler doesn't automagically try to deduce conversions and use them, which could lead to unpredictable runtime consequences).
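As a hedged sketch of the 'pimp my library' pattern just described (the ExcitedString and withExcitement names are hypothetical, not from the article):

```scala
// The original class (String) is never modified; an implicit
// conversion wraps it with extra functionality on demand.
class ExcitedString(s: String) {
  def withExcitement: String = s + "!"
}

// Only this explicitly defined conversion is ever applied - the
// compiler will not deduce conversions that have not been declared.
implicit def toExcitedString(s: String): ExcitedString =
  new ExcitedString(s)

println("scala".withExcitement) // prints "scala!"
```

Note that "scala".withExcitement still hands back a plain String, so the original type keeps flowing through the rest of the code even though, for one call, it was used under the guise of another class.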
Making it special… As a quick recap, so far we've mentioned that Scala is a statically typed language and presented some of the pros and cons of being such. Although Scala is statically and strongly typed, powerful type inference, structural typing and implicit type conversions provide a great amount of flexibility and remove lots of unnecessary boilerplate code. Also, as Scala is predominantly a language and compiler, (i.e. discounting the awesome Scala community, libraries and frameworks that are also available), we've seen that one of the big wins from a user's perspective is the increased opportunity to program for correctness, granted by the ability to constrain, bound and extend types. Thus far we have focussed on type constraints via variance annotations for parameterized types and view bounds for methods. From Scala 2.8 onwards, parameterized types have been afforded even more constraint capabilities via generalised type constraint classes. These classes enable further specialisation in methods, and complement context bounds, as follows:

- A =:= B asserts that A and B must be equal
- A <:< B asserts that A must be a subtype of B

A sample usage of these classes would be to enable a specialisation for the addition of numeric elements in a collection, or for bespoke print formatting, or to allow for customised liability calculations on specific bet or fund types in a trader's portfolio.
For example:

case class PrintFormatter[T](item: T) {
  def formatString(implicit evidence: T =:= String) = { // Will only work for String PrintFormatters
    println("STRING specialised printformatting...")
  }
  def formatPrimitive(implicit evidence: T <:< AnyVal) = { // Will only work for primitive PrintFormatters
    println("WRAPPED PRIMITIVE specialised printformatting...")
  }
}

val stringPrintFormatter = PrintFormatter("String to format...")
stringPrintFormatter formatString
// stringPrintFormatter formatPrimitive // Will not compile due to type mismatch

val intPrintFormatter = PrintFormatter(123)
intPrintFormatter formatPrimitive
// intPrintFormatter formatString // Will not compile due to type mismatch

House! Finally, and as a segue into a somewhat contrived example, it's worth noting that Scala supports existential types, primarily as a means of integrating better with Java (for both generics and primitive type support). In an effort to compensate for type erasure as implemented in Java's generics support, Scala includes a feature called 'Manifests' to (effectively) keep a record of the classes used in parameterised types. So, let's close with an extended example using Manifest-based (pseudo, i.e. compiler-derived) reification and specialisation constraints on a few helper methods, to show some of Scala's type system tricks and support in action.
class ReifiedManifest[T <: Any : Manifest](value: T) {

  val m = manifest[T] // So at this point we have the manifest for the parameterized type

  // At which point we could either do an if() expression on what type is contained in our manifest
  if (m equals manifest[String]) {
    println("The manifest contains a String")
  } else if (m <:< manifest[AnyVal]) { // A subtype check using the <:< operation on the Manifest trait
    println("The manifest contains a subtype of AnyVal")
  } else if (m <:< manifest[AnyRef]) {
    println("The manifest contains a subtype of AnyRef")
  } else {
    println("Not sure what type is contained?")
  }

  // or we could grab the erased type from the manifest and do a match on some attribute of the type
  m.erasure.toString match {
    case "class java.lang.String" => println("ERASURE: pattern matches on a String")
    case "double" | "int"         => println("ERASURE: pattern matches on a Numeric value.")
    case x                        => println("ERASURE: has picked up another type not spec'd in the pattern match: " + x)
  }
}

new ReifiedManifest("Test")              // Contains a String / matches on a String
new ReifiedManifest(1)                   // Contains an AnyVal / matches on a Numeric
new ReifiedManifest(1.2)                 // Contains an AnyVal / matches on a Numeric
new ReifiedManifest(BigDecimal("3.147")) // Contains an AnyRef / matches on an unspecified type

The final word This is really just scratching the surface of both the generic power and complexities of type systems, and of what is possible, and how, in Scala. There is a wealth of resources and further reading available in this area; hopefully some of those listed below should help. I've found that having a grasp of the fundamentals of types in Scala has helped me understand other features of the language, and has actually affected how I approach problems and what I expect from their solutions. I hope this write-up has been useful and, until next time, happy hacking!
Links

- Programming Scala – the Type system
- Meta programming with Scala types
- Type level programming in Scala
- Encoding union types in Scala
- Scala type programming resources hub on SO
- Scala's type system and domain model constraints
- Encoding TicTacToe in Scala's type system
- Scala Design patterns

Reference: A crash course in Scala types from our JCG partner Kingsley Davies at the Scalabound blog....

XML unmarshalling benchmark: JAXB vs STax vs Woodstox

Introduction Towards the end of last week I started thinking about how to deal with large amounts of XML data in a resource-friendly way. The main problem that I wanted to solve was how to process large XML files in chunks while at the same time providing upstream/downstream systems with some data to process. Of course, I've been using JAXB technology for a few years now; the main advantage of using JAXB is the quick time-to-market: if one possesses an XML schema, there are tools out there to auto-generate the corresponding Java domain model classes automatically (Eclipse Indigo, Maven jaxb plugins in various sauces, ant tasks, to name a few). The JAXB API then offers a Marshaller and an Unmarshaller to write/read XML data, mapping the Java domain model. When thinking of JAXB as the solution for my problem I suddenly realised that JAXB keeps the whole objectification of the XML schema in memory, so the obvious question was: "How would our infrastructure cope with large XML files (e.g. in my case with a number of elements > 100,000) if we were to use JAXB?". I could have simply produced a large XML file, then a client for it, and found out about memory consumption. As one probably knows, there are mainly two approaches to processing XML data in Java: DOM and SAX. With DOM, the XML document is represented in memory as a tree; DOM is useful if one needs cherry-pick access to the tree nodes or if one needs to write brief XML documents. On the other side of the spectrum there is SAX, an event-driven technology, where the whole document is parsed one XML element at a time, and for each significant XML event, callbacks are "pushed" to a Java client which then deals with them (such as START_DOCUMENT, START_ELEMENT, END_ELEMENT, etc.). Since SAX does not bring the whole document into memory but applies a cursor-like approach to XML processing, it does not consume huge amounts of memory.
The drawback with SAX is that it processes the whole document start to finish; this might not necessarily be what one wants for large XML documents. In my scenario, for instance, I'd like to be able to pass XML elements to downstream systems as they become available, but at the same time maybe I'd like to pass only 100 elements at a time, implementing some sort of pagination solution. DOM seems too demanding from a memory-consumption point of view, whereas SAX seems too coarse-grained for my needs. I remembered reading something about STax, a Java technology which offered a middle ground: the capability to pull XML elements (as opposed to having them pushed, as with SAX) while being RAM-friendly. I then looked into the technology and decided that STax was probably the compromise I was looking for; however, I wanted to keep the easy programming model offered by JAXB, so I really needed a combination of the two. While investigating STax, I came across Woodstox; this open source project promises to be a faster XML parser than many others, so I decided to include it in my benchmark as well. I now had all the elements to create a benchmark to give me memory consumption and processing speed metrics when processing large XML documents. The benchmark plan In order to create a benchmark I needed to do the following:

- Create an XML schema which defined my domain model. This would be the input for JAXB to create the Java domain model.
- Create three large XML files representing the model, with 10,000 / 100,000 / 1,000,000 elements respectively.
- Have a pure JAXB client which would unmarshall the large XML files completely in memory.
- Have a STax/JAXB client which would combine the low memory consumption of SAX technologies with the ease of the programming model offered by JAXB.
- Have a Woodstox/JAXB client with the same characteristics as the STax/JAXB client (in few words, I just wanted to change the underlying parser and see if I could obtain any performance boost).
- Record both memory consumption and speed of processing (e.g. how quickly each solution would make XML chunks available in memory as JAXB domain model classes).
- Make the results available graphically, since, as we know, one picture tells a thousand words.

The Domain Model XML Schema

<?xml version="1.0" encoding="UTF-8"?>
<schema xmlns="http://www.w3.org/2001/XMLSchema" targetNamespace="http://uk.co.jemos.integration.xml/large-file"
  xmlns:tns="http://uk.co.jemos.integration.xml/large-file" elementFormDefault="qualified">

  <complexType name="PersonType">
    <sequence>
      <element name="firstName" type="string"></element>
      <element name="lastName" type="string"></element>
      <element name="address1" type="string"></element>
      <element name="address2" type="string"></element>
      <element name="postCode" type="string"></element>
      <element name="city" type="string"></element>
      <element name="country" type="string"></element>
    </sequence>
    <attribute name="active" type="boolean" use="required" />
  </complexType>

  <complexType name="PersonsType">
    <sequence>
      <element name="person" type="tns:PersonType" maxOccurs="unbounded" minOccurs="1"></element>
    </sequence>
  </complexType>

  <element name="persons" type="tns:PersonsType"></element>
</schema>

I decided on a relatively easy domain model, with XML elements representing people, with their names and addresses. I also wanted to record whether a person was active.
Using JAXB to create the Java model I am a fan of Maven and use it as my default tool to build systems. This is the POM I defined for this little benchmark:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

  <modelVersion>4.0.0</modelVersion>

  <groupId>uk.co.jemos.tests.xml</groupId>
  <artifactId>large-xml-parser</artifactId>
  <version>1.0.0-SNAPSHOT</version>
  <packaging>jar</packaging>

  <name>large-xml-parser</name>
  <url>http://www.jemos.co.uk</url>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>2.3.2</version>
        <configuration>
          <source>1.6</source>
          <target>1.6</target>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.jvnet.jaxb2.maven2</groupId>
        <artifactId>maven-jaxb2-plugin</artifactId>
        <version>0.7.5</version>
        <executions>
          <execution>
            <goals>
              <goal>generate</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <schemaDirectory>${basedir}/src/main/resources</schemaDirectory>
          <includeSchemas>
            <includeSchema>**/*.xsd</includeSchema>
          </includeSchemas>
          <extension>true</extension>
          <args>
            <arg>-enableIntrospection</arg>
            <arg>-XtoString</arg>
            <arg>-Xequals</arg>
            <arg>-XhashCode</arg>
          </args>
          <removeOldOutput>true</removeOldOutput>
          <verbose>true</verbose>
          <plugins>
            <plugin>
              <groupId>org.jvnet.jaxb2_commons</groupId>
              <artifactId>jaxb2-basics</artifactId>
              <version>0.6.1</version>
            </plugin>
          </plugins>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-jar-plugin</artifactId>
        <version>2.3.1</version>
        <configuration>
          <archive>
            <manifest>
              <addClasspath>true</addClasspath>
              <mainClass>uk.co.jemos.tests.xml.XmlPullBenchmarker</mainClass>
            </manifest>
          </archive>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-assembly-plugin</artifactId>
        <version>2.2</version>
        <configuration>
          <outputDirectory>${project.build.directory}/site/downloads</outputDirectory>
          <descriptors>
            <descriptor>src/main/assembly/project.xml</descriptor>
            <descriptor>src/main/assembly/bin.xml</descriptor>
          </descriptors>
        </configuration>
      </plugin>
    </plugins>
  </build>

  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.5</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>uk.co.jemos.podam</groupId>
      <artifactId>podam</artifactId>
      <version>2.3.11.RELEASE</version>
    </dependency>
    <dependency>
      <groupId>commons-io</groupId>
      <artifactId>commons-io</artifactId>
      <version>2.0.1</version>
    </dependency>
    <!-- XML binding stuff -->
    <dependency>
      <groupId>com.sun.xml.bind</groupId>
      <artifactId>jaxb-impl</artifactId>
      <version>2.1.3</version>
    </dependency>
    <dependency>
      <groupId>org.jvnet.jaxb2_commons</groupId>
      <artifactId>jaxb2-basics-runtime</artifactId>
      <version>0.6.0</version>
    </dependency>
    <dependency>
      <groupId>org.codehaus.woodstox</groupId>
      <artifactId>stax2-api</artifactId>
      <version>3.0.3</version>
    </dependency>
  </dependencies>
</project>

Just a few things to notice about this pom.xml:

- I use Java 6, since starting from version 6, Java contains all the XML libraries for JAXB, DOM, SAX and STax.
- To auto-generate the domain model classes from the XSD schema, I used the excellent maven-jaxb2-plugin, which allows, amongst other things, to obtain POJOs with toString, equals and hashCode support.
- I have also declared the jar plugin, to create an executable jar for the benchmark, and the assembly plugin to distribute an executable version of the benchmark.
The code for the benchmark is attached to this post, so if you want to build it and run it yourself, just unzip the project file, open a command line and run:

$ mvn clean install assembly:assembly

This command will place *-bin.* files into the folder target/site/downloads. Unzip the one of your preference, and to run the benchmark use (-Dcreate.xml=true will generate the XML files; don't pass it if you have these files already, e.g. after the first run):

$ java -jar -Dcreate.xml=true large-xml-parser-1.0.0-SNAPSHOT.jar

Creating the test data To create the test data, I used PODAM, a Java tool to auto-fill POJOs and JavaBeans with data. The code is as simple as:

JAXBContext context = JAXBContext.newInstance("xml.integration.jemos.co.uk.large_file");
Marshaller marshaller = context.createMarshaller();
marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
marshaller.setProperty(Marshaller.JAXB_ENCODING, "UTF-8");

PersonsType personsType = new ObjectFactory().createPersonsType();
List<PersonType> persons = personsType.getPerson();
PodamFactory factory = new PodamFactoryImpl();
for (int i = 0; i < nbrElements; i++) {
  persons.add(factory.manufacturePojo(PersonType.class));
}

JAXBElement<PersonsType> toWrite = new ObjectFactory().createPersons(personsType);
File file = new File(fileName);
BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(file), 4096);

try {
  marshaller.marshal(toWrite, bos);
  bos.flush();
} finally {
  IOUtils.closeQuietly(bos);
}

The XmlPullBenchmarker generates three large XML files under ~/xml-benchmark:

- large-person-10000.xml (Approx 3M)
- large-person-100000.xml (Approx 30M)
- large-person-1000000.xml (Approx 300M)

Each file looks like the following:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<persons xmlns="http://uk.co.jemos.integration.xml/large-file">
  <person active="false">
    <firstName>Ult6yn0D7L</firstName>
    <lastName>U8DJoUTlK2</lastName>
    <address1>DxwlpOw6X3</address1>
    <address2>O4GGvxIMo7</address2>
    <postCode>Io7Kuz0xmz</postCode>
    <city>lMIY1uqKXs</city>
    <country>ZhTukbtwti</country>
  </person>
  <person active="false">
    <firstName>gBc7KeX9Tn</firstName>
    <lastName>kxmWNLPREp</lastName>
    <address1>9BIBS1m5GR</address1>
    <address2>hmtqpXjcpW</address2>
    <postCode>bHpF1rRldM</postCode>
    <city>YDJJillYrw</city>
    <country>xgsTDJcfjc</country>
  </person>
  [..etc]
</persons>

Each file contains 10,000 / 100,000 / 1,000,000 <person> elements. The running environments I tried the benchmarker on three different environments:

- Ubuntu 10, 64-bit, running as a Virtual Machine on Windows 7 Ultimate, with CPU i5 750 @2.67GHz and 2.66GHz, 8GB RAM of which 4GB dedicated to the VM. JVM: 1.6.0_25, HotSpot
- Windows 7 Ultimate, hosting the above VM, therefore with the same processor. JVM: 1.6.0_24, HotSpot
- Ubuntu 10, 32-bit, 3GB RAM, dual core. JVM: 1.6.0_24, OpenJDK

The XML unmarshalling To unmarshall the files I used three different strategies:

- Pure JAXB
- STax + JAXB
- Woodstox + JAXB

Pure JAXB unmarshalling The code which I used to unmarshall the large XML files using JAXB follows:

private void readLargeFileWithJaxb(File file, int nbrRecords) throws Exception {
  JAXBContext ucontext = JAXBContext.newInstance("xml.integration.jemos.co.uk.large_file");
  Unmarshaller unmarshaller = ucontext.createUnmarshaller();
  BufferedInputStream bis = new BufferedInputStream(new FileInputStream(file));
  long start = System.currentTimeMillis();
  long memstart = Runtime.getRuntime().freeMemory();
  long memend = 0L;

  try {
    JAXBElement<PersonsType> root = (JAXBElement<PersonsType>) unmarshaller.unmarshal(bis);
    root.getValue().getPerson().size();
    memend = Runtime.getRuntime().freeMemory();
    long end = System.currentTimeMillis();
    LOG.info("JAXB (" + nbrRecords + "): - Total Memory used: " + (memstart - memend));
    LOG.info("JAXB (" + nbrRecords + "): Time taken in ms: " + (end - start));
  } finally {
    IOUtils.closeQuietly(bis);
  }
}

The code uses a one-liner to unmarshall each XML file:

JAXBElement<PersonsType> root =
    (JAXBElement<PersonsType>) unmarshaller.unmarshal(bis);

I also accessed the size of the underlying PersonType collection to "touch" the in-memory data. BTW, debugging the application showed that all 10,000 elements were indeed available in memory after this line of code. JAXB + STax With STax, I just had to use an XMLStreamReader, iterate through all <person> elements, and pass each in turn to JAXB to unmarshall it into a PersonType domain model object. The code follows:

// set up a StAX reader
XMLInputFactory xmlif = XMLInputFactory.newInstance();
XMLStreamReader xmlr = xmlif.createXMLStreamReader(new FileReader(file));
JAXBContext ucontext = JAXBContext.newInstance(PersonType.class);
Unmarshaller unmarshaller = ucontext.createUnmarshaller();
long start = System.currentTimeMillis();
long memstart = Runtime.getRuntime().freeMemory();
long memend = 0L;

try {
  xmlr.nextTag();
  xmlr.require(XMLStreamConstants.START_ELEMENT, null, "persons");
  xmlr.nextTag();
  while (xmlr.getEventType() == XMLStreamConstants.START_ELEMENT) {
    JAXBElement<PersonType> pt = unmarshaller.unmarshal(xmlr, PersonType.class);
    if (xmlr.getEventType() == XMLStreamConstants.CHARACTERS) {
      xmlr.next();
    }
  }
  memend = Runtime.getRuntime().freeMemory();
  long end = System.currentTimeMillis();
  LOG.info("STax - (" + nbrRecords + "): - Total memory used: " + (memstart - memend));
  LOG.info("STax - (" + nbrRecords + "): Time taken in ms: " + (end - start));
} finally {
  xmlr.close();
}

Note that this time, when creating the context, I had to specify that it was for the PersonType object, and when invoking the JAXB unmarshalling I also had to pass the desired return class type, with:

JAXBElement<PersonType> pt = unmarshaller.unmarshal(xmlr, PersonType.class);

Note that I don't do anything with the object, just create it, to keep the benchmark as truthful as possible by not introducing any unnecessary steps. JAXB + Woodstox With Woodstox, the approach is very similar to the one used with STax.
In fact Woodstox provides a StAX2-compatible API, so all I had to do was provide the correct factory and…bang! I had Woodstox working under the covers.

private void readLargeXmlWithFasterStax(File file, int nbrRecords)
        throws FactoryConfigurationError, XMLStreamException, FileNotFoundException, JAXBException {
    // set up a Woodstox reader
    XMLInputFactory xmlif = XMLInputFactory2.newInstance();
    XMLStreamReader xmlr = xmlif.createXMLStreamReader(new FileReader(file));
    JAXBContext ucontext = JAXBContext.newInstance(PersonType.class);
    Unmarshaller unmarshaller = ucontext.createUnmarshaller();
    long start = System.currentTimeMillis();
    long memstart = Runtime.getRuntime().freeMemory();
    long memend = 0L;
    try {
        xmlr.nextTag();
        xmlr.require(XMLStreamConstants.START_ELEMENT, null, "persons");
        xmlr.nextTag();
        while (xmlr.getEventType() == XMLStreamConstants.START_ELEMENT) {
            JAXBElement<PersonType> pt = unmarshaller.unmarshal(xmlr, PersonType.class);
            if (xmlr.getEventType() == XMLStreamConstants.CHARACTERS) {
                xmlr.next();
            }
        }
        memend = Runtime.getRuntime().freeMemory();
        long end = System.currentTimeMillis();
        LOG.info("Woodstox - (" + nbrRecords + "): Total memory used: " + (memstart - memend));
        LOG.info("Woodstox - (" + nbrRecords + "): Time taken in ms: " + (end - start));
    } finally {
        xmlr.close();
    }
}

Note the following line:

XMLInputFactory xmlif = XMLInputFactory2.newInstance();

Here I obtain a StAX2 XMLInputFactory, which uses the Woodstox implementation.
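The article does not show the code that generates the test files (the step triggered by -Dcreate.xml=true). Here is a minimal, self-contained sketch of how such a document could be produced with the JDK's own StAX writer; the class name, method names, and random-value scheme are my own assumptions, not the benchmarker's actual source:

```java
import java.io.StringWriter;
import java.util.Random;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamWriter;

public class LargePersonFileSketch {

    private static final String ALPHA =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";

    // Produces a random 10-character value, mimicking the sample data shown above
    static String randomValue(Random rnd) {
        StringBuilder sb = new StringBuilder(10);
        for (int i = 0; i < 10; i++) {
            sb.append(ALPHA.charAt(rnd.nextInt(ALPHA.length())));
        }
        return sb.toString();
    }

    // Writes a <persons> document containing nbrRecords <person> elements
    static String generate(int nbrRecords) throws Exception {
        StringWriter out = new StringWriter();
        XMLStreamWriter w = XMLOutputFactory.newInstance().createXMLStreamWriter(out);
        Random rnd = new Random();
        w.writeStartDocument();
        w.writeStartElement("persons");
        for (int i = 0; i < nbrRecords; i++) {
            w.writeStartElement("person");
            w.writeAttribute("active", String.valueOf(i % 2 == 0));
            for (String tag : new String[] {"firstName", "lastName", "address1",
                    "address2", "postCode", "city", "country"}) {
                w.writeStartElement(tag);
                w.writeCharacters(randomValue(rnd));
                w.writeEndElement();
            }
            w.writeEndElement(); // </person>
        }
        w.writeEndElement(); // </persons>
        w.writeEndDocument();
        w.close();
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(generate(3));
    }
}
```

Writing to a StringWriter keeps the sketch self-contained; the real benchmarker would stream to a writer over a file instead, since a million-person document does not belong in memory (that, after all, is the point of the benchmark).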
The main loop
Once the files are in place (you obtain this by passing -Dcreate.xml=true), the main performs the following:

System.gc();
System.gc();

for (int i = 0; i < 10; i++) {
    main.readLargeFileWithJaxb(new File(OUTPUT_FOLDER + File.separatorChar + "large-person-10000.xml"), 10000);
    main.readLargeFileWithJaxb(new File(OUTPUT_FOLDER + File.separatorChar + "large-person-100000.xml"), 100000);
    main.readLargeFileWithJaxb(new File(OUTPUT_FOLDER + File.separatorChar + "large-person-1000000.xml"), 1000000);
    main.readLargeXmlWithStax(new File(OUTPUT_FOLDER + File.separatorChar + "large-person-10000.xml"), 10000);
    main.readLargeXmlWithStax(new File(OUTPUT_FOLDER + File.separatorChar + "large-person-100000.xml"), 100000);
    main.readLargeXmlWithStax(new File(OUTPUT_FOLDER + File.separatorChar + "large-person-1000000.xml"), 1000000);
    main.readLargeXmlWithFasterStax(new File(OUTPUT_FOLDER + File.separatorChar + "large-person-10000.xml"), 10000);
    main.readLargeXmlWithFasterStax(new File(OUTPUT_FOLDER + File.separatorChar + "large-person-100000.xml"), 100000);
    main.readLargeXmlWithFasterStax(new File(OUTPUT_FOLDER + File.separatorChar + "large-person-1000000.xml"), 1000000);
}

It invites the GC to run, although as we know this is at the discretion of the GC thread. It then executes each strategy 10 times, to normalise RAM and CPU consumption. The final data are then collected by averaging the ten runs.

The benchmark results for memory consumption
Here follow some diagrams which show memory consumption across the different running environments when unmarshalling 10,000 / 100,000 / 1,000,000 elements. You will probably notice that memory consumption for StAX-related strategies often shows a negative value. This means that there was more free memory after unmarshalling all elements than there was at the beginning of the unmarshalling loop; this, in turn, suggests that the GC ran a lot more with StAX than with JAXB.
This is logical if one thinks about it: since with StAX we don’t keep all objects in memory, there are more objects available for garbage collection. In this particular case I believe the PersonType object created in the while loop becomes eligible for GC, enters the young generation area and then gets reclaimed by the GC. This, however, should have a minimal impact on performance, since we know that reclaiming objects from the young generation space is done very efficiently.

Summary for 10,000 XML elements
Summary for 100,000 XML elements
Summary for 1,000,000 XML elements

The benchmark results for processing speed
Results for 10,000 elements
Results for 100,000 elements
Results for 1,000,000 elements

Conclusions
The results on all three environments, although with some differences, tell us the same story:

- If you are looking for performance (i.e. XML unmarshalling speed), choose JAXB.
- If you are looking for low memory usage (and are ready to sacrifice some speed), then use StAX.

My personal opinion is also that I wouldn’t go for Woodstox; I’d choose either JAXB (if I needed processing power and could afford the RAM) or StAX (if I didn’t need top speed and was low on infrastructure resources). Both these technologies are Java standards and part of the JDK starting from Java 6.
Resources

Benchmarker source code:
- Zip version: Download Large-xml-parser-1.0.0-SNAPSHOT-project
- tar.gz version: Download Large-xml-parser-1.0.0-SNAPSHOT-project.tar
- tar.bz2 version: Download Large-xml-parser-1.0.0-SNAPSHOT-project.tar

Benchmarker executables:
- Zip version: Download Large-xml-parser-1.0.0-SNAPSHOT-bin
- tar.gz version: Download Large-xml-parser-1.0.0-SNAPSHOT-bin.tar
- tar.bz2 version: Download Large-xml-parser-1.0.0-SNAPSHOT-bin.tar

Data files:
- Ubuntu 64-bit VM running environment: Download Stax-vs-jaxb-ubuntu-64-vm
- Ubuntu 32-bit running environment: Download Stax-vs-jaxb-ubuntu-32-bit
- Windows 7 Ultimate running environment: Download Stax-vs-jaxb-windows7

Reference: XML unmarshalling benchmark in Java: JAXB vs STax vs Woodstox from our JCG partner Marco Tedone at Marco Tedone’s blog....

The pursuit of protection: How much testing is “enough”?

I’m definitely not a testing expert. I’m a manager who wants to know when the software that we are building is finished, safe and ready to ship. Large-scale enterprise systems – the kinds of systems that I work on – are inherently hard to test. They have lots of rules and exceptions, lots of interfaces, lots of customization for different customers and partners, lots of operational dependencies, and they deal with lots of data. We can’t test everything – there are tens of thousands or hundreds of thousands of different scenarios and different paths to follow.

This gets both easier and harder if you are working in Agile methods, building and releasing small pieces of work at a time. Most changes or new features are easy enough to understand and test by themselves. The bigger problem is in understanding the impact of each change on the rest of the system that has already been built, what side-effects the change may have, what might have broken. This gets harder if a change is introduced in small steps over several releases, so that some parts are incomplete or even invisible to the test team for a while.

People who write flight control software or medical device controllers need to do exhaustive testing, but the rest of us can’t afford to, and there are clearly diminishing returns. So if you can’t or aren’t going to “test everything”, how do you know when you’re done testing? One answer is that you’re done testing when you run out of time to do any more testing. But that’s not good enough.

You’re done testing when your testers say they’re done
Another answer is that you’re done when the test team says they’re done. When all of the static analysis findings have been reviewed and corrected. When all of the automated tests pass.
When the testers have made sure that all features that are supposed to be complete were completed and secure, finished their test checklists, made sure that the software is usable and checked for fit-and-finish, tested for performance and stability, made sure that the deployment and rollback steps work, and completed enough exploratory testing that they’ve stopped finding interesting bugs, and the bugs that they have found (the important ones at least) have all been fixed and re-checked.

This of course assumes that they tested the right things – that they understood the business requirements and priorities, and found most of the interesting and important bugs in the system. But how do you know that they’ve done a good job? What a lot of testers do is black box testing, which falls into two different forms:

- Scripted functional and acceptance testing, manual and automated – how good the testing is depends on how complete and clear the requirements are (which is a challenge for small Agile teams working through informal requirements that keep changing), and how much time the testers have to plan out and run their tests.
- Unscripted behavioural or exploratory manual testing – depends on the experience and skill of the tester, and on their familiarity with the system and their understanding of the domain.

With black box testing, you have to trust in the capabilities and care of the people doing the testing work. Even if they have taken a structured, methodical approach to defining and running tests, they are still going to miss something. The question is – what, and how much?

Using Code Coverage
To know when you’ve tested enough, you have to stop testing in the dark. You have to look inside the code, using white box structural testing techniques to understand what code has been tested, and then look closer at the code to figure out how to test the code that wasn’t.
A study at Microsoft over 5 years, involving thousands of testers, found that with scripted, structured functional testing, testers could cover as much as 83% of the code. With exploratory testing they could raise this a few percentage points, to as high as 86%. Then, by looking at code coverage and walking through what was tested and what wasn’t, they were able to come up with tests that brought coverage up above 90%. Using code coverage this way – instrumenting code under test, then looking into the code, reviewing and improving the tests that have already been written, and figuring out what new tests to write – needs testers and developers to work together even more closely.

How much code coverage is enough?
If you’re measuring code coverage, the question that comes up is how much coverage is enough? What percentage of your code should be covered before you can ship? 100%? 90%? 80%?

You will find a lot of different numbers in the literature and I have yet to find solid evidence showing that any given number is better than another. Cedric Beust, Breaking Away from the Unit Test Group Think

In Continuous Delivery, Jez Humble and David Farley set 80% coverage as a target for each of automated unit testing, functional testing and acceptance testing. Based on their experience, this should provide comprehensive testing. Some TDD and XP advocates argue for 100% automated test coverage, which is a target to aim for if you are starting off from scratch and want to maintain high standards, especially for smaller systems. But 100% is unnecessarily expensive, and it’s a hopeless target for a large legacy system that doesn’t have extensive automated tests already in place. You’ll reach a point of diminishing returns as you continue to add tests, where each test costs more to write and finds less.
The more tests that you write, the more tests will be bad tests – duplicate tests that seem to test different things but don’t, tests that don’t test anything important (even if they help make the code coverage numbers look a little better), tests that don’t work but look like they do. All of these tests, good or bad, need to run continuously, need to be maintained, and get in the way of making changes. The costs keep going up. How many shops can afford to achieve this level of coverage, and sustain it over a long period of time, or even want to?

Making Code Coverage Work for You
On the team that I manage now, we rely on automated unit and functional testing at around 70% (statement) coverage – higher in high-risk areas, lower in others. Obviously, automated coverage is also higher in areas that are easier to test with automated tools. We hit this level of coverage more than 3 years ago and it has held steady since then. There hasn’t been a good reason to push it higher – it gives us enough of a safety net for developers to make most changes safely, and it frees the test team up to focus on risks and exceptions.

Of course with the other kinds of testing that we do – manual functional testing, exploratory testing, multi-player war games, semi-automated integration testing and performance testing, and operational system testing – coverage in the end is much higher than 70% for each release. We’ve instrumented some of our manual testing work, to see what code we are covering in our smoke tests and integration testing and exploratory testing work, but it hasn’t been practical so far to instrument all of the testing to get a final sum in a release.

Defect Density, Defect Seeding and Capture/Recapture – Does anybody really do this?
In an article in IEEE Software Best Practices from 1997, Steve McConnell talks about using statistical defect data to understand when you have done enough testing.
The first approach is to use Defect Density data (# of defects per KLOC or some other common definition of size) from previous releases of the system, or even other systems that you have worked on. Add up how many defects were found in testing (assuming that you track this data – some Lean/Agile teams don’t; we do) and how many were found in production. Then measure the size of the change set for each of these releases to calculate the defect density. Do the same for the release that you are working on now, and compare the results. Assuming that your development approach hasn’t changed significantly, you should be able to predict how many more bugs still need to be found and fixed. The more data, of course, the better your predictions.

Defect Seeding, also known as bebugging, is where someone inserts bugs on purpose and then you see how many of these bugs are found by other people in reviews and testing. The percentage of the known [seeded] bugs not found gives an indication of the real bugs that remain. Apparently some teams at IBM, HP and Motorola have used Defect Seeding, and it must come up a lot in interviews for software testing labs (Google “What is Defect Seeding?”), but it doesn’t look like a practical or safe way to estimate test coverage.

First, you need to know that you’ve seeded the “right” kind of bugs, across enough of the code to be representative – you have to be good at making bugs on purpose, which isn’t as easy as it sounds. If you do a Mickey Mouse job of seeding the defects and make them too easy to find, you will get a false sense of confidence in your reviews and testing – if the team finds most or all of the seeded bugs, that doesn’t mean that they’ve found most or all of the real bugs. Bugs tend to be simple and obvious, or subtle and hard to find, and bugs tend to cluster in code that was badly designed or badly written, so the seeded bugs need to somehow represent this. And I don’t like the idea of putting bugs into code on purpose.
As McConnell points out, you have to be careful in removing the seeded bugs, and then do still more testing to make sure that you didn’t break anything.

And finally, there is Capture/Re-Capture, an approach used to estimate wildlife populations (catch and tag fish in a lake, then see how many of the tagged fish you catch again later), which Watts Humphrey introduced to software engineering as part of TSP to estimate remaining defects from the results of testing or reviews. According to Michael Howard, this approach is sometimes used at Microsoft for security code reviews, so let’s explore this context.

You have two reviewers. Both review the same code for the same kinds of problems. Add up the number of problems found by the first reviewer (A), the number found by the second reviewer (B), and separately count the common problems that both reviewers found, where they overlap (C). Then:

- The total number of estimated defects: A*B/C.
- The total number of defects found: A+B-C.
- The total number of defects remaining: A*B/C – (A+B-C).

Using Michael Howard’s example, if Reviewer A found 10 problems, and Reviewer B found 12 problems, and 4 of these problems were found by both reviewers in common, the total number of estimated defects is 10*12/4=30. The total number of defects found so far: 18. So there are 12 more defects still to be found.

I’m not a statistician either, so this seems like magic to me, and to others. But like the other statistical techniques, I don’t see it scaling down effectively. You need enough people doing enough work over enough time to get useful stats. It works better for large teams working Waterfall-style, with a long test-and-fix cycle before release. With a small number of people working in small, incremental batches, you get too much variability – a good reviewer or tester could find most or all of the problems that the other reviewers or testers found. But this doesn’t mean that you’ve found all of the bugs in the system.
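Howard's capture/re-capture arithmetic is simple enough to sketch directly. The class and method names below are my own illustration; the formulas are the ones given above:

```java
public class CaptureRecapture {

    // Estimated total defects: A*B/C, where C is the overlap found by both reviewers
    static double estimatedTotal(int a, int b, int common) {
        return (double) a * b / common;
    }

    // Defects found so far by either reviewer: A+B-C
    static int found(int a, int b, int common) {
        return a + b - common;
    }

    // Estimated defects remaining: A*B/C - (A+B-C)
    static double remaining(int a, int b, int common) {
        return estimatedTotal(a, b, common) - found(a, b, common);
    }

    public static void main(String[] args) {
        // Michael Howard's example: A=10, B=12, overlap C=4
        System.out.println(estimatedTotal(10, 12, 4)); // 30.0
        System.out.println(found(10, 12, 4));          // 18
        System.out.println(remaining(10, 12, 4));      // 12.0
    }
}
```

Note how sensitive the estimate is to the overlap C: with the same A and B but an overlap of only 2, the estimated total doubles to 60 — which is exactly the small-batch variability problem described above.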
Your testing is good enough until a problem shows that it is not good enough
In the end, as Martin Fowler points out, you won’t really know if your testing was good enough until you see what happens in production:

The reason, of course, why people focus on coverage numbers is because they want to know if they are testing enough. Certainly low coverage numbers, say below half, are a sign of trouble. But high numbers don’t necessarily mean much, and lead to “ignorance-promoting dashboards”. Sufficiency of testing is a much more complicated attribute than coverage can answer. I would say you are doing enough testing if the following is true:

- You rarely get bugs that escape into production, and
- You are rarely hesitant to change some code for fear it will cause production bugs.

Test everything that you can afford to. Release the code. When problems happen in production, fix them, then use Root Cause Analysis to find out why they happened and to figure out how you’re going to prevent problems in the future, how to improve the code and how to improve the way you write it and how you test it. Keep learning and keep going.

Reference: The pursuit of protection: How much testing is “enough”? from our JCG partner Jim Bird at the Building Real Software blog....

Estimating the Unknown: Dates or Budgets, Part 1

Almost every manager I know wants to know when a project will be done. Some managers decree when a project will be done. Some managers think they can decree both the date and the feature set. There is one other tiny subset: those managers who ask, “When can you finish this set of ranked features?” And, some managers want you to estimate the budget as well as the date. And now, you’re off into la-la land. Look, if you had any predictive power, you’d be off somewhere gambling, making a ton of money. But, you do have options. All of them require iterating on the estimates and the project. First, a couple of cautions:

- Never, ever, ever provide a single date for a project or a single point for a budget without a range or a confidence level.
- Expect to iterate on the release date and on the budget, and train your managers to expect that from you.
- If you get a ranked feature set, you can provide working product in the order in which your managers want the work done, while you keep refining your estimates. This has to be good for everyone.
- If you can say this without being patronizing, practice saying, “Remember, the definition of estimate is guess.”

First, remember that a project is a system. And, a system has multiple aspects. If you’ve been managing projects for a while, you know that there is no iron triangle. Instead, there is more of a project pyramid. On the outside, there are the typical corporate constraints: who will work on the project (the people and their capabilities), the work environment, and the cost to release. Most often, those are fixed by the organization. “Bud, we’ll give you 50 people, 5 months, and this pile of money to go do that project. OK?” Whether or not it’s ok, you’re supposed to nod your head like a bobble-headed doll.
But, if your management has not thought about the constraints, they may be asking you to smush more features into less time than the people can accomplish, given the requested time to release, the expected (low) number of defects, and the expected cost to release. The time to release is dependent on the number of people and their capabilities and the project environment. And, there are delays with geographically distributed teams, and with lifecycles that do not include iteration, with long lists of features. This is why estimation of the budget or the time to release is so difficult. So now that you know why it’s so difficult to estimate, what do you do when someone asks you for an estimate?

Preconditions for Estimation
First, you ask a question back: “What’s most important to you? If it’s 3 weeks before the end of the project, and we haven’t finished all the features and we have ‘too many’ defects, what are you going to say? Release anyway? That says time to release is king. Are you going to say ‘these features better work’? Or are you going to say, ‘these defects better not show up’?”

You can have only one #1 priority in any given project or program. You might have a right-behind-it #2 priority and a right-behind-that #3 priority, but you need to know where your degrees of freedom are. Remember that project pyramid from before? This is your chance to rank each of the vectors in the pyramid. If feature set is first, fine. If time to release is first, fine. If cost is first, fine. If low defects is first, fine. Whatever is first, you don’t really care, as long as you know and as long as you only have one #1 priority. You run into trouble on estimates when your management wants to fix two out of the six sides of the pyramid—or worse—more than two sides. When your managers say to you, “Here’s the team, here’s the office space, here’s the budget, here’s the feature set, and here’s the time,” you only have defects left to negotiate.
And, we all know what happens. The defects go sky high, and you also de-scope at the end of the project because you run out of time. That’s because you have too many fixed constraints.

Insist on a Ranked Backlog
If you really want to estimate a date or a budget, here is how to do it. You have these preconditions:

- You must have a ranked backlog. You don’t need a final backlog. You can certainly accommodate a changing backlog. But you need a ranked backlog. This way, if the backlog changes, you know that you and the team are working on the work in the correct order.
- The team who will do the work is the team who is doing all the estimation. Only the team who is doing the work can estimate the work. Otherwise the estimate is not useful. Surrogate estimators are biased estimators.
- You report all estimates with a confidence range. If you report estimates as a single point in time, people think your estimates are accurate and precise. If you report them as a confidence range, people realize how inaccurate and imprecise your estimates are, and you have a shot of people treating them as real estimates.

Once you’ve met the preconditions, you can estimate. And the reason I have projects or budgets in the title of these posts is that the same reasoning works for both project dates and budgets. Hang in there with me, all will be clear at the end. You have options for estimation, once you have met the preconditions. If you don’t have the feature set in a ranked order, you are in trouble. That’s because if you use any lifecycle other than an agile lifecycle, the feature order matters to your estimates, and the team will discuss the feature order in addition to the size of the estimates. That will make your estimation time take longer and your team will not agree. It all starts to get stickier and stickier.

When You Have a Decreed Date
It’s fine to live with a decreed date—that means you get to manage the features. Now, you have a choice.
You can work in iterations or in flow (kanban). Let’s assume you work in iterations for now.

Use Timeboxes, Better Your Estimate as You Proceed
If you have worked on a project like this, with this exact team before, so that you can use this team’s velocity, go ahead and estimate the entire backlog with the team. I would timebox this effort to no more than 2 hours total. It’s not worth spending any more time on it, because your estimate is bound to be wrong. Why? Because this is new work you have not done before. This estimate is the first date you cannot prove you cannot make. This is your most optimistic estimate. It is not the most likely estimate, nor is it the most pessimistic estimate. Well, unless you are all Eeyore-type people, in which case it might be the most pessimistic. But, I doubt it.

I would take that estimate, and say to my manager, “Here is an estimate that I have about 50% confidence in. I will know more at the end of the third iteration.” The team tracks its velocity for three iterations, re-estimates the entire backlog, sees what it has for an estimate, and compares what it now knows with what it knew before. Now, you have something to compare. You now ask the team how much confidence they have in their estimate. Report that to management. Maybe they have 50% confidence, maybe they have less. Maybe they have more. Whatever they have, report that to management. Repeat estimating the remaining backlog until you get to 90% confidence.

When You Have a Decreed Date and a Decreed Backlog
Some of you are saying, “JR, my manager has also decreed the feature set.” Fine. As long as your manager has decreed the feature set in rank order, you can make this work. You still need to know in what order your manager wants the features. Why?
Because if you look back at the project pyramid and the preconditions in Part 1, several things can occur:

- Your customers/manager may not want all the features if you demo as you proceed.
- Your customers/manager may not want to pay for all of the features as you proceed, especially if you provide an estimate and demo.
- You are getting dangerously close to having too many fixed constraints on this project, especially if you have a fixed number of people and a fixed working environment. Do you also have a fixed cost? You are in the danger zone! I can guarantee you that something will not be fixed once your management or customers see the number of defects.

Obtain Data First, Then Argue
If the manager has decreed the date and the feature set, why are you estimating anything? Get to work! This is when using timeboxes or kanban, determining your true velocity, and performing demos is useful to show progress, so your management can see what you are doing. They have no idea if their decrees/wishes are reasonable. I don’t think there’s much point in fighting with them until you’ve accomplished half of the ranked backlog or worked through half of the schedule. Once you’ve done half of the backlog or half the schedule, now you have data and can see where you are. Now you can take your data, and use the previous option and provide estimates for the rest of the backlog with confidence ranges.

When I’ve been the project manager for imposed dates and imposed backlogs, I’ve explained to management that we will do our best, but that we will maintain a reasonable pace from the beginning, and when we are halfway through the time and the backlog I will report back to management where we are. Did they want to know where we are a quarter of the way instead, where we have more flexibility? That changes the conversation. Sometimes they do, and sometimes they don’t. It depends on how crazed the management is. I also protect the team from multitasking (none allowed).
I am the Wall Around the Team, protecting the team from Management Mayhem. Check out the next part. Reference: Estimating the Unknown: Dates or Budgets, Part 1, Estimating the Unknown: Dates or Budgets, Part 2, Estimating the Unknown: Dates or Budgets, Part 3  from our JCG partner Johanna Rothman at the Managing Product Development blog....

Estimating the Unknown: Dates or Budgets, Part 2

In Part 1, you had some knowledge of the team’s velocity. This is the option for when you do not have knowledge of the team’s velocity, because this team has not worked together before, or has not worked on a project like this before. You are all coming in blind.

Your Zeroth Best Bet: Wait to Estimate Until You Know How the Team Works
If you have not worked on a project like this with this team, you have other problems. It’s not worth estimating the entire backlog at the beginning of the project, because the team members have no idea what relative estimation means to anyone else on the team. The team needs to work together. So, ask them to start together as quickly as possible. Yes, even before they estimate anything. They can work on anything—fixing defects, developing the stories for this product, anything at all. You all need data.

Since you have a ranked backlog, the easiest approach might be to start with a kanban board so you can visualize any bottlenecks. If necessary, use kanban inside an iteration, so you have the rhythm of the iteration surrounding the visualization of the kanban. If you keep the iteration to one or two weeks, you will see if you have any bottlenecks. The shorter the iteration, the more often you will get feedback, and the more valuable your data. Once the team has successfully integrated several features, you can start estimating together and your estimates will mean something. Use the confidence level and re-estimate until the team’s confidence reaches 90%. How long will that take? I don’t know. That’s why you have a kanban board and you’re using iterations. I have seen new-to-agile teams take 6-7 iterations before they have a velocity they can rely on at all.

Your First Best Bet: Make Your Stories and Chunks Small
If you cannot wait to estimate, because someone is breathing down your neck, demanding an estimate, look at your backlog. How small are the stories?
Here’s my rule of thumb: if you eyeball the story and say, “Hmm, if we put everyone on the team on this story, and we think we can attack this story together and get it done in a day,” then the story is the right size. Now, you can add up those stories, which are about one team-day in size, give yourself a 50% confidence level, because you don’t really know, and proceed with “Use Timeboxes, Better Your Estimate as You Proceed” in Part 3.

Now, if someone is breathing fire down your neck, chances are good that no one has taken the time to create a backlog of right-size stories. But, maybe you got lucky. Maybe you have a product owner who’s been waiting for you, as a team, to be available to work on this project for the last six months, and has been lovingly hand-crafting those stories. And, maybe I won the lottery.

Your Second Best Bet: SWAG and Refine
Assume your manager has asked you for a date and you did not get empirical data from the team, but instead you decide to develop a SWAG, a Scientific, Wild Tush Guess.

SWAG Suggestions:
- If you must develop a SWAG, develop it with the team. Remember, a SWAG is a guess. It’s an educated guess, but it is a guess. You want to develop a SWAG the same way you estimate the stories, as a team.
- Develop a 3-point estimate: optimistic, likely, and pessimistic. Alternatively, develop a confidence level in the estimate.
- When you start with a SWAG, also start collecting data on the team’s performance that the team—and only the team—can use to better their estimation.
- Refine the SWAG: explain to your management that your original date was a SWAG, and that you need to refine the date. I like the word “refine,” as opposed to “update.” Refine sounds like you are going to give them a better date as in sooner. You may not, but you will give them a better date as in a more accurate date.

SWAG No-No’s
- Do NOT SWAG alone. The team gets to SWAG. It’s their estimate, not yours, as a project manager.
- Do NOT let your manager SWAG for you. Unless the manager is going to do all the work, the manager gets no say. Oh, the manager can decree a date, but then you go back to Part 3 and manage the project and re-estimate reasonably.
- Do NOT report a SWAG without a confidence percentage or a range attached.

So where does all of this get us with budgets and dates? In many ways, estimating project budgets or dates for agile projects turns out to be irrelevant. If you have a ranked backlog, and you finish features, you can always stop the project when you hit a particular date or cost. What does matter is that you have a ranked backlog, that you use an agile approach or work in flow (kanban), or that you use a lifecycle that allows you to finish features (an incremental lifecycle where you integrate as you proceed). That's why I don't get too perturbed when my managers try to fix the schedule and the feature set, as long as they rank the backlog. They can make the decision to stop the project if we run out of time or money. No problem. We are doing the best job we know how. I don't have to sweat it, because what matters is the ranked backlog.

To those of you who have programs with large budgets: yes, you do not want to burn through large sums of money without getting value in return. I agree. However, sometimes you don't know whether you're getting any value unless you start something and have a chance to evaluate it via a demo. Your mileage may vary.

1. Remember, the project is a system

We discussed this in Part 1. You have more degrees of freedom than just the feature set, the release date, or the cost. You have the people on the project, the work environment, and the level of defects. If you are working on an agile project, expect to iterate on the end date or the budget. You can use rolling wave for agile projects or non-agile projects. Expect to iterate.
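To make the "one team-day per story, 50% confidence" arithmetic above concrete, here is a minimal sketch. The class and method names are mine, not from the article, and the widening rule (dividing the nominal total by the confidence level, so 50% confidence doubles the pessimistic end) is an assumption of this sketch, not a formula the article prescribes.

```java
// Sketch: turn a backlog of roughly one-team-day stories into an
// estimate range reported with a confidence level. Names illustrative.
public class SwagEstimate {

    /**
     * Returns {optimistic, pessimistic} in team-days. The optimistic end
     * assumes every story really does take one team-day; the pessimistic
     * end widens as confidence drops (an assumed rule for this sketch).
     */
    public static double[] rangeFor(int teamDayStories, double confidence) {
        double optimistic = teamDayStories;
        double pessimistic = teamDayStories / confidence;
        return new double[] { optimistic, pessimistic };
    }

    public static void main(String[] args) {
        // 40 right-size stories at 50% confidence.
        double[] range = rangeFor(40, 0.5);
        System.out.printf("Report: %.0f to %.0f team-days at 50%% confidence%n",
                range[0], range[1]);
    }
}
```

The point is the shape of the report: a range plus a confidence percentage, never a single naked date.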
Because the project is a system and you will iterate, remember to estimate with confidence levels, both on dates and budgets.

2. Determine your preconditions for estimation

With a ranked backlog and knowing how to rank the vectors of your project pyramid, you can take a first cut at a date or a budget. Never assume you know what is #1 for your project, what is #2, and so on. Ask. Sometimes release date is #1, sometimes it's not. Sometimes cost is #1, sometimes it's not. Just because your manager asks for a release date does not mean that is the top priority. Ask.

If you are agile/lean and you do not have a ranked backlog, you are up the proverbial creek. Do not pitch a fit, but you cannot estimate. Explain that calmly and firmly. To everyone. Sure, you can start the project, assuming you have enough ranked stories for one iteration, or enough of a ranked backlog to start a kanban board. You don't even have to estimate the project. Why? Because the order matters. You can use dinner as an example: if you eat dessert before dinner, you might not want dinner. Why bother estimating how long it will take to make dinner if you're not going to eat it?

In Part 3, I suggested these options for when you had some idea of what was going on:

3. Use Timeboxes, Better Your Estimate as You Proceed

If you are using timeboxes, track your velocity, and as you gain more confidence in your estimate, re-estimate the backlog and report the new estimate. Go re-read Part 3 for the details.

4. Obtain Data First, Then Argue

When you have a decreed end date and a decreed backlog, do not argue first. Do not bang your head against the wall; it hurts your head and does not change the situation. I love it when the people who are not working directly on the project think they know more than you do. This is why I'm teaching influence workshops this year, in preparation for my program management book :-) This kind of thing happens all the time in program management.
Go re-read Part 3 for the details.

Part 4 was all about how to estimate when everything was new:

5. Your Zeroth Best Bet: Wait to Estimate Until You Know How the Team Works

Can you estimate anything without knowing how this team will work on this project? I don't think so. And you should hedge your bet by keeping your iterations short.

6. Your First Best Bet: Make Your Stories and Chunks Small

Make the stories small so they are easier to estimate. Make any tasks small so you can estimate them. Make the iterations small so you get feedback faster. Small is beautiful, baby. If you have anything larger than a team-day task, you are in trouble.

7. Your Second Best Bet: SWAG and Refine

OK, you'll fall for one of the oldest tricks in the book, but see if you can make it work. Estimate with the team, and plan on refining the estimate. Please do not allow your estimate to become someone else's commitment (an agile schedule game). Don't forget to read the SWAG No-No's.

And those are my seven suggestions. Confidence percentages help a lot. You can use these ideas for dates or budgets: substitute "budget" or "cost" for "date" and you will see that the ideas fit.

I wish I could tell you there was a magic recipe or a crystal ball to determine the unknown from no knowledge. There is not. You need data. But it doesn't take long to get the data if you use an agile lifecycle. It takes a little longer with an incremental lifecycle. Yes, I will do a series on lifecycles soon.

If you found this series helpful, please let me know. It was a lot of work. If you would like even more about estimation, please see Manage It! Your Guide to Modern, Pragmatic Project Management at the Prags, where you can see excerpts, or at Amazon, where you can see more reviews. Yes, there is more about estimation. Astonishing, eh?
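As a closing illustration of the 3-point SWAG from suggestion 7: one conventional way to collapse an optimistic/likely/pessimistic triple into a single reportable number is the classic PERT weighted average, (O + 4M + P) / 6. The article itself does not prescribe PERT; this sketch is one common choice, and treating (P - O) / 6 as a rough standard deviation is likewise a PERT convention, not the author's.

```java
// Sketch: combine a team's 3-point SWAG with the classic PERT weights.
// PERT is an assumed technique here, not one named in the article.
public class ThreePointSwag {

    /** Conventional PERT mean: (optimistic + 4 * likely + pessimistic) / 6. */
    public static double pertMean(double optimistic, double likely, double pessimistic) {
        return (optimistic + 4 * likely + pessimistic) / 6;
    }

    /** Rough PERT spread: (pessimistic - optimistic) / 6. */
    public static double pertStdDev(double optimistic, double pessimistic) {
        return (pessimistic - optimistic) / 6;
    }

    public static void main(String[] args) {
        // Team SWAG for a release, in team-days: 30 optimistic, 45 likely, 90 pessimistic.
        double mean = pertMean(30, 45, 90); // (30 + 180 + 90) / 6 = 50
        double sd = pertStdDev(30, 90);     // (90 - 30) / 6 = 10
        System.out.printf("SWAG: %.0f team-days, give or take %.0f%n", mean, sd);
    }
}
```

Either way, report the result with the spread or a confidence percentage attached, never the mean alone.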
Reference: Estimating the Unknown: Dates or Budgets, Part 4 and Estimating the Unknown: Dates or Budgets, Part 5 from our JCG partner Johanna Rothman at the Managing Product Development blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact