
JavaFX 2 with Spring

I’m going to start this one with a bold statement: I always liked Java Swing, or applets for that matter. There, I said it. If I perform some self-analysis, this admiration probably started when I was introduced to Java. Swing was (practically) the first thing I ever did with Java that gave a satisfying result and let me actually build something with the language at the time. When I was younger we built home-brew fat clients to manage our 3.5″ floppy/CD collection (written in VB, and before that in BASIC), which probably also played a role. Anyway, enough about my personal quirks. The fact is that Swing has helped many build great applications, but as we all know it has its drawbacks. For starters, it hasn’t evolved in, well, a long time. It also requires a lot of boilerplate code if you want to produce high-quality code. It ships with some quirky design ‘flaws’ and lacks out-of-the-box patterns such as MVC. Styling is limited, since you have to fall back on the restricted L&F architecture, I18N is not built in by default, and so on. One could say that developing Swing these days is, well, basically going back in time. Fortunately, Oracle tried to change this some years ago by launching JavaFX. I recall being introduced to JavaFX at Devoxx (or Javapolis as it was named back then). The nifty demos looked very promising, so I was glad to see that a Swing successor was finally on its way. That changed from the moment I saw its internals. One of its major drawbacks was that it was based on an odd new syntax (called JavaFX Script). In case you have never seen JavaFX Script: it looks like a bizarre breed between Java, JSON and JavaScript. Although it was compiled to Java bytecode, and you could use the Java APIs from it, integration with Java was never really good.
The language itself (although pretty powerful) required you to spend a lot of time understanding the details, only to end up with, well, source code again, but this time less manageable and less well supported than plain Java code. As it turned out, I wasn’t the only one. A lot of people felt the same (and for sure there were other reasons as well), and JavaFX never became a great success. However, a while ago Oracle changed the tide by introducing JavaFX 2. First of all they got rid of JavaFX Script (which is no longer supported) and turned JavaFX into a real native Java SE API (JavaFX 2.2.3 is part of Java 7 SE update 6). The JavaFX API now looks more like the familiar Swing API, which is a good thing. It gives you layout-manager lookalikes, event listeners, and all those other components you were so used to, but even better. So if you want to code JavaFX like you did Swing, you can, albeit with slightly different syntax and an improved architecture. It is also possible now to intermix existing Java Swing applications with JavaFX. But there is more. They introduced an XML-based markup language that lets you describe the view. This has some advantages: first of all, coding in XML is faster than in Java, XML can be generated more easily than Java, and the syntax for describing a view is simply more compact. It is also more intuitive to express a view using some kind of markup, especially if you have ever done web development before. So you can have the view described in FXML (that’s what it is called), the application controllers separate from the view (both in Java), and your styling in CSS (yes, no more L&F: CSS support is standard). You can still embed Java (or other languages) directly in the FXML, but this is probably not what you want (the scriptlet anti-pattern). Another nice thing is support for binding.
You can bind each component in your view to the application controller by putting an fx:id attribute on the view component and an @FXML annotation on the instance variable in the application controller. The corresponding element will then be auto-injected, so you can change its data or behavior from inside your application controller. It also turns out that with a few lines of code you can painlessly integrate the DI framework of your choice; isn’t that sweet? And what about the tooling? Well, first of all there is a plug-in for Eclipse (fxclipse) which renders your FXML on the fly. You can install it via the Eclipse marketplace, and the plug-in immediately renders any adjustment you make. Note that you need at least JDK7u6 for this plug-in to work; if your JDK is too old you’ll get an empty pane in Eclipse. Also, when I created a JavaFX project I needed to put jfxrt.jar manually on my build classpath. You’ll find this file in %JAVA_HOME%/jre/lib. Up until now the plug-in doesn’t help you visually (by drag & drop), but there is a separate IDE for that: Scene Builder. This builder is also integrated in NetBeans, but AFAIK there is no support for Eclipse yet, so you’ll have to run it separately if you want to use it. The builder lets you develop FXML the visual way, using drag & drop. Nice detail: Scene Builder is in fact written in JavaFX. Then there is also a separate application called Scenic View which does introspection on a running JavaFX application and shows how it is built up. You get a graph with the different nodes and their hierarchical structure, and for each node you can see its properties and so forth. OK, so let’s start with some code examples. The first thing I did was design my demo application in Scene Builder. I did this graphically by dragging and dropping the containers/controls onto the view.
I also gave the controls that I want to bind an fx:id; you can do that via Scene Builder as well. For the buttons in particular I also added an onAction (which is the method that should be executed on the controller once the button is clicked). Next, I added the controller manually in the source view in Eclipse. There can be only one controller per FXML, and it should be declared in the top-level element. I made two FXMLs: one that represents the main screen and one that acts as the menu bar. You probably want to divide your logic over multiple controllers rather than stuffing too much into a single one; single responsibility is a good design guideline here. The first FXML is “search.fxml” and represents the search criteria and result view: <?xml version="1.0" encoding="UTF-8"?> <?import java.lang.*?> <?import java.util.*?> <?import javafx.scene.control.*?> <?import javafx.scene.control.Label?> <?import javafx.scene.control.cell.*?> <?import javafx.scene.layout.*?> <?import javafx.scene.paint.*?> <StackPane id="StackPane" maxHeight="-Infinity" maxWidth="-Infinity" minHeight="-Infinity" minWidth="-Infinity" prefHeight="400.0" prefWidth="600.0" xmlns:fx="http://javafx.com/fxml" fx:controller="be.error.javafx.controller.SearchController"> <children> <SplitPane dividerPositions="0.39195979899497485" focusTraversable="true" orientation="VERTICAL" prefHeight="200.0" prefWidth="160.0"> <items> <GridPane fx:id="grid" prefHeight="91.0" prefWidth="598.0"> <children> <fx:include source="/menu.fxml"/> <GridPane prefHeight="47.0" prefWidth="486.0" GridPane.columnIndex="1" GridPane.rowIndex="5"> <children> <Button fx:id="clear" cancelButton="true" mnemonicParsing="false" onAction="#clear" text="Clear" GridPane.columnIndex="1" GridPane.rowIndex="1" /> <Button fx:id="search" defaultButton="true" mnemonicParsing="false" onAction="#search" text="Search" GridPane.columnIndex="2" GridPane.rowIndex="1" /> </children> <columnConstraints> <ColumnConstraints
hgrow="SOMETIMES" maxWidth="338.0" minWidth="10.0" prefWidth="338.0" /> <ColumnConstraints hgrow="SOMETIMES" maxWidth="175.0" minWidth="0.0" prefWidth="67.0" /> <ColumnConstraints hgrow="SOMETIMES" maxWidth="175.0" minWidth="10.0" prefWidth="81.0" /> </columnConstraints> <rowConstraints> <RowConstraints maxHeight="110.0" minHeight="10.0" prefHeight="10.0" vgrow="SOMETIMES" /> <RowConstraints maxHeight="72.0" minHeight="10.0" prefHeight="40.0" vgrow="SOMETIMES" /> </rowConstraints> </GridPane> <Label alignment="CENTER_RIGHT" prefHeight="21.0" prefWidth="101.0" text="Product name:" GridPane.columnIndex="0" GridPane.rowIndex="1" /> <TextField fx:id="productName" prefWidth="200.0" GridPane.columnIndex="1" GridPane.rowIndex="1" /> <Label alignment="CENTER_RIGHT" prefWidth="101.0" text="Min price:" GridPane.columnIndex="0" GridPane.rowIndex="2" /> <Label alignment="CENTER_RIGHT" prefWidth="101.0" text="Max price:" GridPane.columnIndex="0" GridPane.rowIndex="3" /> <TextField fx:id="minPrice" prefWidth="200.0" GridPane.columnIndex="1" GridPane.rowIndex="2" /> <TextField fx:id="maxPrice" prefWidth="200.0" GridPane.columnIndex="1" GridPane.rowIndex="3" /> </children> <columnConstraints> <ColumnConstraints hgrow="SOMETIMES" maxWidth="246.0" minWidth="10.0" prefWidth="116.0" /> <ColumnConstraints fillWidth="false" hgrow="SOMETIMES" maxWidth="537.0" minWidth="10.0" prefWidth="482.0" /> </columnConstraints> <rowConstraints> <RowConstraints maxHeight="64.0" minHeight="10.0" prefHeight="44.0" vgrow="SOMETIMES" /> <RowConstraints maxHeight="68.0" minHeight="0.0" prefHeight="22.0" vgrow="SOMETIMES" /> <RowConstraints maxHeight="68.0" minHeight="10.0" prefHeight="22.0" vgrow="SOMETIMES" /> <RowConstraints maxHeight="68.0" minHeight="10.0" prefHeight="22.0" vgrow="SOMETIMES" /> <RowConstraints maxHeight="167.0" minHeight="10.0" prefHeight="14.0" vgrow="SOMETIMES" /> <RowConstraints maxHeight="167.0" minHeight="10.0" prefHeight="38.0" vgrow="SOMETIMES" /> </rowConstraints> </GridPane> 
<StackPane prefHeight="196.0" prefWidth="598.0"> <children> <TableView fx:id="table" prefHeight="200.0" prefWidth="200.0"> <columns> <TableColumn prefWidth="120.0" resizable="true" text="OrderId"> <cellValueFactory> <PropertyValueFactory property="orderId" /> </cellValueFactory> </TableColumn> <TableColumn prefWidth="120.0" text="CustomerId"> <cellValueFactory> <PropertyValueFactory property="customerId" /> </cellValueFactory> </TableColumn> <TableColumn prefWidth="120.0" text="#products"> <cellValueFactory> <PropertyValueFactory property="productsCount" /> </cellValueFactory> </TableColumn> <TableColumn prefWidth="120.0" text="Delivered"> <cellValueFactory> <PropertyValueFactory property="delivered" /> </cellValueFactory> </TableColumn> <TableColumn prefWidth="120.0" text="Delivery days"> <cellValueFactory> <PropertyValueFactory property="deliveryDays" /> </cellValueFactory> </TableColumn> <TableColumn prefWidth="150.0" text="Total order price"> <cellValueFactory> <PropertyValueFactory property="totalOrderPrice" /> </cellValueFactory> </TableColumn> </columns> </TableView> </children> </StackPane> </items> </SplitPane> </children> </StackPane> On line 11 you can see that I configured the application controller class that should be used with the view. On line 17 you can see the import of the separate menu.fxml which is shown here: <?xml version='1.0' encoding='UTF-8'?><?import javafx.scene.control.*?> <?import javafx.scene.layout.*?> <?import javafx.scene.control.MenuItem?><Pane prefHeight='465.0' prefWidth='660.0' xmlns:fx='http://javafx.com/fxml' fx:controller='be.error.javafx.controller.FileMenuController'> <children> <MenuBar layoutX='0.0' layoutY='0.0'> <menus> <Menu mnemonicParsing='false' text='File'> <items> <MenuItem text='Exit' onAction='#exit' /> </items> </Menu> </menus> </MenuBar> </children> </Pane> On line 7 you can see that it uses a different controller. 
In Eclipse, if you open the fxclipse view from the plug-in you will get the same rendered view as in Scene Builder. It is convenient, however, when you want to make small changes in the code and see them directly reflected. The code for launching the application is pretty standard:

package be.error.javafx;

import javafx.application.Application;
import javafx.scene.Parent;
import javafx.scene.Scene;
import javafx.stage.Stage;

public class TestApplication extends Application {

    private static final SpringFxmlLoader loader = new SpringFxmlLoader();

    @Override
    public void start(Stage primaryStage) {
        Parent root = (Parent) loader.load("/search.fxml");
        Scene scene = new Scene(root, 768, 480);
        primaryStage.setScene(scene);
        primaryStage.setTitle("JavaFX demo");
        primaryStage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}

The only special thing to note is that we extend Application. This is a bit of boilerplate code which will, for example, make sure that creation of the UI happens on the JavaFX application thread. You might remember such stories from Swing, where every UI interaction needs to occur on the event dispatch thread (EDT); this is the same with JavaFX. You are by default on the “right thread” when you are called back by the framework (in, for example, action-listener-like methods). But if you start the application or perform long-running tasks in separate threads, you need to make sure UI interaction happens on the right thread. For Swing you would use SwingUtilities.invokeLater(); for JavaFX: Platform.runLater().
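That threading rule is easy to demonstrate with the Swing counterpart just mentioned. The sketch below hands work from an arbitrary thread back to the UI thread; with JavaFX you would pass the same Runnable to Platform.runLater() instead (the class and method names here are illustrative, not from the article's code):

```java
import java.util.concurrent.CountDownLatch;
import javax.swing.SwingUtilities;

class EdtExample {

    // Schedules a task on the Swing event dispatch thread from any
    // other thread, and reports whether it really ran on the EDT.
    // The JavaFX equivalent is Platform.runLater(runnable).
    static boolean runOnUiThread() {
        final boolean[] onEdt = new boolean[1];
        final CountDownLatch done = new CountDownLatch(1);
        SwingUtilities.invokeLater(new Runnable() {
            @Override
            public void run() {
                // This code executes on the EDT, so it may safely
                // touch Swing components.
                onEdt[0] = SwingUtilities.isEventDispatchThread();
                done.countDown();
            }
        });
        try {
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
        return onEdt[0];
    }
}
```

The latch is only there to make the effect observable from the calling thread; in a real application you would simply fire and forget.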
More special is our SpringFxmlLoader:

package be.error.javafx;

import java.io.IOException;
import java.io.InputStream;

import javafx.fxml.FXMLLoader;
import javafx.util.Callback;

import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;

public class SpringFxmlLoader {

    private static final ApplicationContext applicationContext =
            new AnnotationConfigApplicationContext(SpringApplicationConfig.class);

    public Object load(String url) {
        try (InputStream fxmlStream = SpringFxmlLoader.class.getResourceAsStream(url)) {
            FXMLLoader loader = new FXMLLoader();
            loader.setControllerFactory(new Callback<Class<?>, Object>() {
                @Override
                public Object call(Class<?> clazz) {
                    return applicationContext.getBean(clazz);
                }
            });
            return loader.load(fxmlStream);
        } catch (IOException ioException) {
            throw new RuntimeException(ioException);
        }
    }
}

The highlighted lines show the custom ControllerFactory. Without setting this, JavaFX will simply instantiate the class you specified as controller in the FXML, without anything special. In that case the class will not be Spring-managed (unless you were using CTW/LTW AOP). By specifying a custom factory we can define how the controller should be instantiated; in this case we look up the bean from the application context.
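The controller-factory idea itself is framework-agnostic: anything that can map a controller class to a ready-made instance will do. Below is a minimal sketch in which a plain registry stands in for Spring's ApplicationContext; the class and method names are illustrative, not part of the article's code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Illustrative stand-in for a DI container: controllers are registered
// up front and handed out by class, so the FXML loader never has to
// call a controller's constructor itself.
class ControllerRegistry {

    private final Map<Class<?>, Object> controllers = new HashMap<>();

    <T> void register(Class<T> type, T instance) {
        controllers.put(type, instance);
    }

    // Plays the role of the Callback<Class<?>, Object> that is passed
    // to FXMLLoader.setControllerFactory(...); with Spring the lookup
    // would be applicationContext.getBean(clazz) instead.
    Function<Class<?>, Object> asControllerFactory() {
        return clazz -> {
            Object controller = controllers.get(clazz);
            if (controller == null) {
                throw new IllegalStateException("No controller registered for " + clazz);
            }
            return controller;
        };
    }
}
```

The point of the indirection is the same in both cases: whoever resolves the controller class gets to decide how the instance is created and wired.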
Finally we have our two controllers. The SearchController:

package be.error.javafx.controller;

import java.math.BigDecimal;
import java.net.URL;
import java.util.ResourceBundle;

import javafx.collections.FXCollections;
import javafx.collections.ObservableList;
import javafx.fxml.FXML;
import javafx.fxml.Initializable;
import javafx.scene.control.Button;
import javafx.scene.control.TableView;
import javafx.scene.control.TextField;

import org.apache.commons.lang.StringUtils;
import org.springframework.beans.factory.annotation.Autowired;

import be.error.javafx.model.Order;
import be.error.javafx.model.OrderSearchCriteria;
import be.error.javafx.model.OrderService;

public class SearchController implements Initializable {

    @Autowired
    private OrderService orderService;
    @FXML
    private Button search;
    @FXML
    private TableView<Order> table;
    @FXML
    private TextField productName;
    @FXML
    private TextField minPrice;
    @FXML
    private TextField maxPrice;

    @Override
    public void initialize(URL location, ResourceBundle resources) {
        table.setColumnResizePolicy(TableView.CONSTRAINED_RESIZE_POLICY);
    }

    public void search() {
        OrderSearchCriteria orderSearchCriteria = new OrderSearchCriteria();
        orderSearchCriteria.setProductName(productName.getText());
        orderSearchCriteria.setMaxPrice(StringUtils.isEmpty(maxPrice.getText())
                ? null : new BigDecimal(maxPrice.getText()));
        orderSearchCriteria.setMinPrice(StringUtils.isEmpty(minPrice.getText())
                ? null : new BigDecimal(minPrice.getText()));
        ObservableList<Order> rows = FXCollections.observableArrayList();
        rows.addAll(orderService.findOrders(orderSearchCriteria));
        table.setItems(rows);
    }

    public void clear() {
        table.setItems(null);
        productName.setText("");
        minPrice.setText("");
        maxPrice.setText("");
    }
}

The highlighted lines, in order:

- auto-injection by Spring: the Spring-managed service we use to look up data;
- auto-injection by JavaFX: the controls we need to manipulate or read from in our controller;
- the init method that initializes our table so columns auto-resize when the view is enlarged;
- the action-listener-style callback invoked when the search button is pressed;
- the action-listener-style callback invoked when the clear button is pressed.

Finally the FileMenuController, which does nothing special besides closing our app:

package be.error.javafx.controller;

import javafx.application.Platform;
import javafx.event.ActionEvent;

public class FileMenuController {

    public void exit(ActionEvent actionEvent) {
        Platform.exit();
    }
}

And finally the (not so exciting) result: after searching, the table is filled; making the view wider also stretches the columns; and the file menu allows us to exit. After playing a bit with JavaFX 2 I was pretty impressed. More and more controls are also coming (I believe there is already a browser control and such), so I think we are on the right track here.   Reference: JavaFX 2 with Spring from our JCG partner Koen Serneels at the Koen Serneels – Technology blog.

5 Strategies for Making Money with the Cloud

Everybody is hearing about Cloud Computing on television now. Operators will store your contacts in the Cloud. Hosting companies will host your website in the Cloud. Others will store your photos in the Cloud. But how do you make money with the Cloud? The first thing is to forget about infrastructure and virtualization. If you think that in 2013 the world needs more IaaS providers, then you haven’t seen what is currently on offer (Amazon, Microsoft, Google, Rackspace, Joyent, Verizon/Terremark, IBM, HP, etc.). So what are the alternative strategies?

1) Rocket Internet SaaS Cloning

Your best hope is SaaS and PaaS. The best markets are non-English-speaking markets. We have seen an explosion of SaaS in the USA, but most of it has not made it to the rest of the world yet. Only some bigger SaaS solutions (WebEx, GoToMeeting, Office 365, etc.) and PaaS platforms (Salesforce, Workday, etc.) are available outside of the US and the UK. Most SaaS and PaaS solutions are currently still English-only. So the quickest way to make some money is to just copy, translate and paste a successful English-only SaaS product. If you do not know how to copy dotcoms, take a look at how the Rocket Internet team does it. Of course you should always be open to those annoying problems everybody has that could use a new innovative solution, and as such create your own SaaS.

2) SaaSification

During the gold rush, be the restaurant, hotel or tool shop. While everybody is looking for the SaaS gold, offer solutions that will save gold diggers time and money. SaaSification allows others to focus on building their SaaS business, not on reinventing for the millionth time a web page, web store, email server, search, CRM, monthly subscription billing, reporting, BI, etc. Instead of “Use Shopify to create your online store”, it should be “Use <YOUR PRODUCT> to create a SaaS business”.

3) Mobile & Cloud

Everybody has, or is at least thinking about buying, a smartphone.
However, there are very few really good mobile services that fully exploit the Cloud. I can get a shopping list app, but most are just glorified to-do lists. None recommends where to go and buy based on current promotions and comparisons with other buyers. None helps me find products inside a large supermarket. None learns from my shopping habits and suggests items for the list. None lets me take a number at the seafood queue. These are just examples for one mobile + Cloud app; think about any other field and you are sure to find great ideas.

4) Specialized IaaS

I mentioned it before: IaaS is already overcrowded, but there is one exception: specialized IaaS. You can focus on specialized hardware, e.g. virtualized GPUs, DSPs or mobile ARM processors; on network virtualization like SDN and OpenFlow; on mobile and tablet virtualization; embedded device virtualization; machine learning IaaS; or car software virtualization.

5) Disruptive Innovations + Cloud

Sell disruptive innovations and offer them as Cloud services. Examples could be 3D printing services, wireless sensor networks / M2M, Big Data, wearable tech, open source hardware, etc. The Cloud will lower your costs and give you a globally, elastically scalable solution.   Reference: 5 Strategies for Making Money with the Cloud from our JCG partner Maarten Ectors at the Telruptive blog.

Most popular application servers

This is the second post in the series where we publish statistical data about Java installations. The dataset originates from the free Plumbr installations out there, totalling 1,024 different environments collected during the past six months. The first post in the series analyzed the foundation: what OS the JVM runs on, whether it is a 32- or 64-bit infrastructure, and which JVM vendor and version were used. In this post we are going to focus on the application servers used. It proved to be a more challenging task than originally expected: the best shot we had was to extract the server from the bootstrap classpath, with queries similar to “grep -i tomcat classpath.log”. Which was easy. As opposed to discovering that, out of the 1,024 samples, 92 did not contain a reference to the bootstrap classpath at all. Which was our first surprise. Whether they were really run without any entries on the bootstrap classpath or our statistics just do not record all the entries properly, we failed to trace. Nevertheless, this left us with 932 data points. Out of the remaining 932, we were unable to link 256 reports to any application server known to mankind. Before jumping to the conclusion that approx. 27% of the JVMs out there are running client-side programs, we tried to dig further:

- 57 seemed to be launched using Maven plugins, which hides the actual runtime from us. But I can bet the vast majority of those are definitely not Swing applications.
- 11 environments were running on the Play Framework, which does not use Java EE containers.
- 6 environments were launched with the Scala runtime attached, so I assume these were also actually web applications.
- 54 had loaded either JGoodies or Swing libraries, which makes them good candidates for being desktop applications.
- 6 were running on Android, which we don’t even support. If you guys can shed some light on how you managed to launch Plumbr on Android, let us know.
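The grep-style bucketing described above is easy to reproduce in code: scan the classpath string for a known marker and map it to a container name. A simplified sketch follows; the marker table is illustrative and abridged, not Plumbr's actual detection logic:

```java
import java.util.LinkedHashMap;
import java.util.Locale;
import java.util.Map;

class ContainerDetector {

    // Maps a classpath marker substring to a container name.
    // Insertion order matters: the first match wins.
    private static final Map<String, String> MARKERS = new LinkedHashMap<>();
    static {
        MARKERS.put("tomcat", "Apache Tomcat");
        MARKERS.put("jetty", "Jetty");
        MARKERS.put("jboss", "JBoss");
        MARKERS.put("glassfish", "Glassfish");
        MARKERS.put("geronimo", "Geronimo");
        MARKERS.put("weblogic", "Weblogic");
    }

    // Returns the detected container name, or "unknown" when no
    // marker appears in the given bootstrap classpath.
    static String detect(String bootClasspath) {
        String cp = bootClasspath.toLowerCase(Locale.ROOT);
        for (Map.Entry<String, String> entry : MARKERS.entrySet()) {
            if (cp.contains(entry.getKey())) {
                return entry.getValue();
            }
        }
        return "unknown";
    }
}
```

The "unknown" bucket is exactly where the Maven, Play, Scala and desktop cases above would land, which is why they needed manual inspection.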
And the remaining 122 we just failed to categorize; they seemed to range from MQ solutions to batch processes to whatnot. But 676 reports did contain a reference to the Java EE container used, and the results are visible in the following diagram. The winner should not be a surprise to anyone: Apache Tomcat is used in 43% of the installations. The other places on the podium are a bit more surprising: Jetty comes in second with 23% of the deployments and JBoss third with 16%. The expected result was exactly the reverse, but apparently the gears have shifted during the last years. The next group contains Glassfish, Geronimo and Weblogic with 7, 6 and 3% of the deployment base respectively. Which is also somewhat surprising: with just 20 Weblogic installations and Websphere nowhere in sight at all, the remaining five containers altogether represent less than 2% of the installations. I guess the pragmatic-lean-KISS approach is finally starting to pay off and we are moving towards tools developers actually enjoy.   Reference: Most popular application servers from our JCG partner Vladimir Sor at the Plumbr Blog.

Cryptography Using JCA – Services In Providers

The Java Cryptography Architecture (JCA) is an extensible framework that enables you to perform cryptographic operations. JCA also promotes implementation independence (a program should not care about who provides the cryptographic service) and implementation interoperability (a program should not be tied to a specific provider of a particular cryptographic service). JCA allows numerous cryptographic services, e.g. ciphers, key generators and message digests, to be bundled up in a java.security.Provider class and registered declaratively in a special file (java.security) or programmatically via the java.security.Security class (method ‘addProvider’). Although JCA is a standard, different JDKs implement it differently. Between the Sun/Oracle and IBM JDKs, the IBM JDK is sort of more ‘orderly’ than Oracle’s. For instance, IBM’s uber provider (com.ibm.crypto.provider.IBMJCE) implements the following keystore formats: JCEKS, PKCS12KS (PKCS12), JKS. The Oracle JDK ‘spreads’ the keystore format implementations across the following providers:

- sun.security.provider.Sun – JKS
- com.sun.crypto.provider.SunJCE – JCEKS
- com.sun.net.ssl.internal.ssl.Provider – PKCS12

Despite the popular recommendation to write applications that do not point to a specific Provider class, there are some use cases that require an application to know exactly what services a Provider class offers. This requirement becomes more prevalent when supporting multiple application servers that may be tightly coupled with a particular JDK, e.g. WebSphere bundled with the IBM JDK. I usually use Tomcat + Oracle JDK for development (more lightweight, faster), but my testing/production setup is WebSphere + IBM JDK. To further complicate matters, my project needs a hardware security module (HSM) which uses the JCA API via the provider class com.ncipher.provider.km.nCipherKM.
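Rather than memorising which provider implements which keystore format on which JDK, you can ask JCA directly using the standard Security.getProviders(filter) API. A small sketch; the returned names naturally differ between Oracle and IBM JDKs, which is exactly the point:

```java
import java.security.Provider;
import java.security.Security;

class KeyStoreProviders {

    // Lists the names of the registered providers that implement a
    // given keystore type, e.g. "JKS", "JCEKS" or "PKCS12".
    // Security.getProviders(filter) returns null when nothing matches.
    static String[] providersFor(String keyStoreType) {
        Provider[] providers = Security.getProviders("KeyStore." + keyStoreType);
        if (providers == null) {
            return new String[0];
        }
        String[] names = new String[providers.length];
        for (int i = 0; i < providers.length; i++) {
            names[i] = providers[i].getName();
        }
        return names;
    }
}
```

The same "ServiceType.Algorithm" filter syntax works for any JCA service, e.g. "Cipher.AES" or "Signature.SHA256withRSA", so it is a cheap way to diff two JDKs.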
So, when I am at home (without access to the HSM), I want to continue writing code and at least get it tested against a JDK provider. I can then switch to the nCipherKM provider for another round of unit testing before committing the code to source control. The usual assumption is that one Provider class is enough, e.g. IBMJCE for IBM JDKs, SunJCE for Oracle JDKs. So the usual solution is to implement a class that specifies one provider, using reflection to avoid compile errors due to ‘class not found’:

// For the nShield HSM
Class c = Class.forName("com.ncipher.provider.km.nCipherKM");
Provider provider = (Provider) c.newInstance();

// For the Oracle JDK
Class c = Class.forName("com.sun.crypto.provider.SunJCE");
Provider provider = (Provider) c.newInstance();

// For the IBM JDK
Class c = Class.forName("com.ibm.crypto.provider.IBMJCE");
Provider provider = (Provider) c.newInstance();

This design was OK until I encountered a NoSuchAlgorithmException running some unit test cases on the Oracle JDK. And the algorithm I was using was RSA, a common algorithm! How could this be? The documentation says that RSA is supported! The same test cases worked fine on the IBM JDK. Upon further investigation I realised, much to my dismay, that the SunJCE provider does not have an implementation of the KeyPairGenerator service for RSA. An implementation is, however, found in the provider class sun.security.rsa.SunRsaSign. So the assumption of ‘one provider to provide them all’ is broken. But thanks to JCA’s open API, a Provider object can be passed in when requesting a Service instance, e.g.:

KeyGenerator kgen = KeyGenerator.getInstance("AES", provider);

To help with my inspection of the various Provider objects, I’ve furnished a JUnit test to pretty-print the various services of each registered Provider instance in a JDK.
package org.gizmo.jca;

import java.security.Provider;
import java.security.Provider.Service;
import java.security.Security;
import java.util.Comparator;
import java.util.SortedSet;
import java.util.TreeSet;

import javax.crypto.KeyGenerator;

import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.junit.Test;

public class CryptoTests {

    @Test
    public void testBouncyCastleProvider() throws Exception {
        Provider p = new BouncyCastleProvider();
        System.out.println(p.getClass() + " - " + p.getInfo());
        printServices(p);
    }

    @Test
    public void testProviders() throws Exception {
        Provider[] providers = Security.getProviders();
        for (Provider p : providers) {
            System.out.println(p.getClass() + " - " + p.getInfo());
            printServices(p);
        }
    }

    private void printServices(Provider p) {
        SortedSet<Service> services = new TreeSet<Service>(new ProviderServiceComparator());
        services.addAll(p.getServices());
        for (Service service : services) {
            System.out.println("==> Service: " + service.getType() + " - " + service.getAlgorithm());
        }
    }

    /**
     * Sorts the various Services to make the output easier on the eyes...
     */
    private class ProviderServiceComparator implements Comparator<Service> {
        @Override
        public int compare(Service object1, Service object2) {
            String s1 = object1.getType() + object1.getAlgorithm();
            String s2 = object2.getType() + object2.getAlgorithm();
            return s1.compareTo(s2);
        }
    }
}

Anyway, if the algorithms you use are common and strong enough for your needs, the BouncyCastle provider can be used. It works well across JDKs (tested against IBM & Oracle). BouncyCastle does not support the JKS or JCEKS keystore formats, but if you are not fussy, the BC keystore format works just fine. BouncyCastle is also open source and can be freely included in your applications. Tip: JKS keystores cannot store SecretKeys.
You can try that as your homework. Hope this post will enlighten you to explore JCA further, or at least make you aware of the pitfalls of ‘blissful ignorance’ when working with JCA.   Reference: Cryptography Using JCA – Services In Providers from our JCG partner Allen Julia at the YK’s Workshop blog.
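As a closing illustration, the RSA KeyPairGenerator surprise described earlier can be checked programmatically instead of assumed: ask JCA which registered provider actually backs a given service. A small sketch using only standard JCA calls:

```java
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;

class ServiceProviderCheck {

    // Returns the name of the provider the JDK selects for a given
    // KeyPairGenerator algorithm, or null when no registered provider
    // supports it. Without an explicit provider argument, JCA walks
    // the registered providers in preference order and returns the
    // first match - which on Oracle JDKs for "RSA" is SunRsaSign,
    // not SunJCE.
    static String providerFor(String algorithm) {
        try {
            return KeyPairGenerator.getInstance(algorithm).getProvider().getName();
        } catch (NoSuchAlgorithmException e) {
            return null;
        }
    }
}
```

Running this on both of your target JDKs before wiring in a hard-coded Provider class is a cheap way to avoid the NoSuchAlgorithmException described above.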

What’s in a name : Reason behind naming of few great projects

This is in continuation of my previous post, where I listed the reasons behind the naming of several great projects. I have found some more languages, products and organizations.

Why the name Amazon: Amazon (which needs no introduction) was founded by Jeff Bezos. Bezos wanted a name for his company that began with “A” so that it would appear early in alphabetical order. He began looking through the dictionary and settled on “Amazon” because it was a place that was “exotic and different”, and it was one of the biggest rivers in the world, as he hoped his company would be! (Source)

Why the name Geronimo: Apache Geronimo is an open source server runtime that integrates the best open source projects to create a Java/OSGi server runtime. Geronimo was an Apache leader who fought against the US and Mexican armies. There is also a controversy that the U.S. operation to kill Osama bin Laden used the code name “Geronimo”.

Why the name Selenium: The open source Selenium web testing tool was named as a jab at its ostensible commercial rival, Mercury QuickTest Pro (Mercury was later bought by HP). Selenium mineral supplements are used as an antidote to mercury poisoning, and so the test tool was meant as an antidote to QTP! (Source)

Why the name Django: Django is a high-level Python web framework, named after the jazz guitarist Django Reinhardt.

Why the name Perl: The programming language Perl (created by Larry Wall) was originally named “Pearl”. Wall wanted to give the language a short name with positive connotations; he claims that he considered (and rejected) every three- and four-letter word in the dictionary. He also considered naming it after his wife Gloria. Wall discovered the existing PEARL programming language before Perl’s official release and changed the spelling of the name.
(Ref)

Why the name Ruby: Ruby was conceived on February 24, 1993 by Yukihiro Matsumoto, who wished to create a new language that was more powerful than Perl and more object-oriented than Python. The main factor in choosing the name “Ruby” was that it was the birthstone of one of his colleagues. (Ref)

Why the name Mozilla: Mozilla was the mascot of the now disbanded Netscape Communications Corporation. The name “Mozilla” was already in use at Netscape as the codename for Netscape Navigator 1.0. The term came from a combination of “Mosaic killer” (as Netscape wanted to displace NCSA Mosaic as the world’s number one web browser) and Godzilla. Apparently Firefox, the flagship product of Mozilla, went through several name changes: originally titled Phoenix, then changed to Firebird, and now Firefox. (Source 1, Source 2)

Why the name Yahoo!: The word “yahoo” was invented by Jonathan Swift for Gulliver’s Travels. The name Yahoo! purportedly stands for “Yet Another Hierarchical Officious Oracle”, but Jerry Yang and David Filo insist they selected the name because they considered themselves yahoos. The very first name of Yahoo was “Akebono” (the name of a legendary Hawaii-born sumo wrestler). The Yahoo name was already registered by someone else, so an exclamation mark was added to make it Yahoo!. (Source)

Why the name Windows: The name Windows fits into that philosophy. At the time of its original release late in 1985, most operating systems were single-tasking, text-only, and ran from a command line, like DOS if you remember that. Graphical user interfaces (GUIs) were still new. The Mac, less than two years old at that time, was the only GUI-based system enjoying commercial success. The word “windows” simply described one of the most obvious differences between a GUI and a command-line interface. (Source)

Why the name Pramati: For those who don’t know, Pramati builds application servers, just like JBoss, Apache, etc. Pramati is a Sanskrit word which means “exceptional mind”. I worked there as a Java developer.
Why Name Scala: The name Scala is a blend of “scalable” and “language”, signifying that it is designed to grow with the demands of its users. James Strachan, the creator of Groovy, described Scala as a possible successor to Java. (Source)   Reference: What’s in a name: Reason behind naming of few great projects from our JCG partner Abhishek Somani at the Java, J2EE, Server blog. ...
oracle-glassfish-logo

Multiple Methods for Monitoring and Managing GlassFish 3

GlassFish 3 supports multiple methods of monitoring and management. In this post, I look briefly at the approaches GlassFish provides for administration, monitoring, and management. GlassFish Admin Console GlassFish’s web-based Admin Console GUI is probably the best-known interface for GlassFish administration. By default, it is accessed via the URL http://localhost:4848/ once GlassFish is running. The two screen snapshots below provide a taste of this approach, but I don’t look any deeper at this option here as the interface is fairly easy to learn and use once logged into the website. GlassFish Admin Command Line Interface The GlassFish Admin Console GUI offers the advantages of a GUI, such as ease of learning and use, but also comes with the drawbacks of a GUI (it can take longer to get through the ‘overhead’ of the GUI for things that are easily done from the command line, and it does not work as well in scripts and headless environments). In some cases a command-line approach is preferred, and GlassFish supports command-line administration with the GlassFish Admin Command Line Interface. Running asadmin start-domain starts a Domain in GlassFish. The command asadmin help can be used to learn more about the available commands. A very small snippet from the top of this help output is shown next:

Utility Commands                                asadmin(1m)

NAME
    asadmin - utility for performing administrative tasks for
    Oracle GlassFish Server

SYNOPSIS
    asadmin [--host host] [--port port] [--user admin-user]
    [--passwordfile filename] [--terse={true|false}]
    [--secure={false|true}] [--echo={true|false}]
    [--interactive={true|false}] [--help]
    [subcommand [options] [operands]]

DESCRIPTION
    Use the asadmin utility to perform administrative tasks for
    Oracle GlassFish Server. You can use this utility instead of
    the Administration Console interface.
As the beginning of the asadmin help indicates, the asadmin utility is an alternative to the GUI-based ‘Administration Console interface.’ There are numerous sub-commands available and some of those are listed here:

list-applications to list deployed applications
deploy and other deployment subcommands
version to see the version of GlassFish (shown in the screen snapshot below)
list-commands to list the available commands [portion of output shown in the screen snapshot below]

Additional information regarding the GlassFish Admin Command Line Interface is available in Learning GlassFish v3 Command Line Administration Interface (CLI). GlassFish JMX/AMX The two approaches shown in this post so far for monitoring and managing GlassFish (web-based Admin Console GUI and GlassFish Admin Command Line Interface) are specific to GlassFish. GlassFish also supports monitoring and management via Java Management Extensions (JMX), including JSR 77 (‘J2EE Management‘), as I have blogged about before in my post Simple Remote JMX with GlassFish. Because GlassFish supports a JMX interface, it can be easily monitored and managed with readily available tools such as JConsole and JVisualVM. Besides the MBeans that GlassFish exposes itself, the JVM has had built-in MBeans since J2SE 5 that can be monitored in relation to the hosted GlassFish instances as well. The next set of images demonstrates using JConsole to view MBeans exposed via GlassFish and the JVM. The first image shows the standard JVM Platform MBeans available and the images following that one show GlassFish-specific MBeans including the amx-support and jmxremote domains.
When the bootAMX operation of the boot-amx MBean (amx-support domain) is invoked, the full complement of AMX MBeans becomes available, as shown in the remainder of the images. GlassFish REST The Oracle GlassFish Server 3.1 Administration Guide includes a section called ‘Using REST Interfaces to Administer GlassFish Server‘ that states that the ‘GlassFish Server provides representational state transfer (REST) interfaces to enable you to access monitoring and configuration data for GlassFish Server.’ It goes on to suggest that client applications such as web browsers, cURL, and GNU Wget can be used to interact with GlassFish via the Jersey-based REST interfaces. Of course, as this page also points out, any tool written in any language that handles REST-based interfaces can be used in conjunction with GlassFish’s REST support. Not surprisingly, the GlassFish REST APIs are exposed via URLs over HTTP. The previously cited Admin Guide states that configuration/management operations are accessed via URLs of the form http://host:port/management/domain/path and monitoring operations are accessed via URLs of the form http://host:port/monitoring/domain/path. One of the easiest ways to use GlassFish’s REST interfaces is via a web browser using the URLs mentioned earlier (http://localhost:4848/management/domain/ and http://localhost:4848/monitoring/domain/ for example). The next three screen snapshots attempt to give a taste of this style of access. The middle image shows that monitoring needs to be enabled in GlassFish. Using a web browser to interact with GlassFish for management and monitoring is easy, but the same can be done with the Web Admin Console I covered at the beginning of this blog post. The real advantage of the REST-based interface is the ability to call it from other client tools, especially custom-built tools and scripts. For example, one can write scripts in Groovy, Python, Ruby, and other scripting languages to interact with GlassFish.
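To make that concrete, here is a minimal sketch in Python of such a script (assuming a default local install with the admin listener on port 4848 and the documented management/monitoring URL forms; the resource path in the usage note is my guess, not taken from the guide):

```python
import json
import urllib.request

BASE = "http://localhost:4848"  # assumed default admin listener

def rest_url(kind, path=""):
    """Build a GlassFish REST URL; kind is 'management' or 'monitoring'."""
    if kind not in ("management", "monitoring"):
        raise ValueError("kind must be 'management' or 'monitoring'")
    return "%s/%s/domain/%s" % (BASE, kind, path)

def fetch(url):
    """Request the JSON representation instead of the default HTML one."""
    req = urllib.request.Request(url, headers={"Accept": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Against a running instance (with monitoring enabled, as noted above), something like fetch(rest_url("monitoring", "server")) returns the same data the browser shows, as JSON; “server” here is an assumed path segment you would discover by browsing from the domain root.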
Like GlassFish’s JMX-exposed APIs, GlassFish’s REST-exposed APIs allow custom scripts and tools to be used, or even written, to manage and monitor GlassFish. Jason Lee has published several posts on using GlassFish’s REST management/monitoring APIs, such as RESTful GlassFish Monitoring, Deploying Applications to GlassFish Using curl, and GlassFish Administration: The REST of the Story. Ant Tasks GlassFish provides several Ant tasks that allow Ant to be used for starting and stopping the GlassFish server, for deploying applications, and for performing other management tasks. A StackOverflow thread covers this approach. The next two screen snapshots demonstrate using the GlassFish Web Admin Console’s Update Tool -> Available Add-Ons feature to select the Ant Tasks for installation, and the contents of the ant-tasks.jar that is made available upon this selection. With ant-tasks.jar available, it can be placed on the Ant build’s classpath to script certain GlassFish actions via an Ant build. Conclusion The ability to manage and monitor an application server is one of its most important features. This post has looked at several of the most common methods GlassFish supports for its management, monitoring, and general administration.   Reference: Multiple Methods for Monitoring and Managing GlassFish 3 from our JCG partner Dustin Marx at the Inspired by Actual Events blog. ...
apache-openjpa-logo

OpenJPA: Memory Leak Case Study

This article will provide the complete root cause analysis details and resolution of a Java heap memory leak (Apache OpenJPA leak) affecting an Oracle Weblogic server 10.0 production environment. This post will also demonstrate the importance of following the Java Persistence API best practices when managing the javax.persistence.EntityManagerFactory lifecycle.

Environment specifications
Java EE server: Oracle Weblogic Portal 10.0
OS: Solaris 10
JDK: Oracle/Sun HotSpot JVM 1.5 32-bit @2 GB capacity
Java Persistence API: Apache OpenJPA 1.0.x (JPA 1.0 specifications)
RDBMS: Oracle 10g
Platform type: Web Portal

Troubleshooting tools
Quest Foglight for Java (Java heap monitoring)
MAT (Java heap dump analysis)

Problem description & observations The problem was initially reported by our Weblogic production support team following production outages. An initial root cause analysis exercise did reveal the following facts and observations:

Production outages were observed on a regular basis after ~2 weeks of traffic.
The failures were due to Java heap (OldGen) depletion, e.g. an OutOfMemoryError: Java heap space error found in the Weblogic logs.
A Java heap memory leak was confirmed after reviewing the Java heap OldGen space utilization over time from the Foglight monitoring tool along with the Java verbose GC historical data.

Following the discovery of the above problems, the decision was taken to move to the next phase of the RCA and perform a JVM heap dump analysis of the affected Weblogic (JVM) instances. JVM heap dump analysis ** A video explaining the following JVM heap dump analysis is now available here. In order to generate a JVM heap dump, the support team used the HotSpot 1.5 jmap utility, which generated a heap dump file (heap.bin) of about ~1.5 GB. The heap dump file was then analyzed using the Eclipse Memory Analyzer Tool. Now let’s review the heap dump analysis so we can understand the source of the OldGen memory leak.
MAT provides an initial Leak Suspects report which can be very useful to highlight your high memory contributors. For our problem case, MAT was able to identify a leak suspect contributing to almost 600 MB, or 40% of the total OldGen space capacity. At this point we found one instance of java.util.LinkedList using almost 600 MB of memory and loaded in one of our application parent class loaders (@ 0x7e12b708). The next step was to understand the leaking objects along with the source of retention. MAT allows you to inspect any class loader instance of your application, providing you with capabilities to inspect the loaded classes & instances. Simply search for the desired object by providing the address e.g. 0x7e12b708 and then inspect the loaded classes & instances by selecting List Objects > with outgoing references. As you can see from the above snapshot, the analysis was quite revealing. What we found was one instance of org.apache.openjpa.enhance.PCRegistry at the source of the memory retention; more precisely, the culprit was the _listeners field, implemented as a LinkedList. For your reference, the Apache OpenJPA PCRegistry is used internally to track the registered persistence-capable classes. Find below a snippet of the PCRegistry source code from Apache OpenJPA version 1.0.4 exposing the _listeners field.

/**
 * Tracks registered persistence-capable classes.
 *
 * @since 0.4.0
 * @author Abe White
 */
public class PCRegistry {
    // DO NOT ADD ADDITIONAL DEPENDENCIES TO THIS CLASS

    private static final Localizer _loc = Localizer.forPackage(PCRegistry.class);

    // map of pc classes to meta structs; weak so the VM can GC classes
    private static final Map _metas = new ConcurrentReferenceHashMap(ReferenceMap.WEAK, ReferenceMap.HARD);

    // register class listeners
    private static final Collection _listeners = new LinkedList();

Now the question is: why is the memory footprint of this internal data structure so big, and potentially leaking over time?
The next step was to deep dive into the _listeners LinkedList instance in order to review the leaking objects. We finally found that the leaking objects were actually the JDBC & SQL mapping definitions (metadata) used by our application in order to execute various queries against our Oracle database. A review of the JPA specifications, OpenJPA documentation and source code confirmed that the root cause was associated with a wrong usage of the javax.persistence.EntityManagerFactory, such as the lack of closure of a newly created EntityManagerFactory instance. If you look closely at the above code snapshot, you will realize that the close() method is indeed responsible for cleaning up any recently used metadata repository instance. It did also raise another concern: why are we creating such Factory instances over and over… The next step of the investigation was to perform a code walkthrough of our application code, especially around the life cycle management of the JPA EntityManagerFactory and EntityManager objects. Root cause and solution A code walkthrough of the application code revealed that the application was creating a new instance of EntityManagerFactory on each single request and not closing it properly.

public class Application {

    @Resource
    private UserTransaction utx = null;

    // Initialized on each application request and not closed!
    @PersistenceUnit(unitName = "UnitName")
    private EntityManagerFactory emf = Persistence.createEntityManagerFactory("PersistenceUnit");

    public EntityManager getEntityManager() {
        return this.emf.createEntityManager();
    }

    public void businessMethod() {
        // Create a new EntityManager instance from the newly created EntityManagerFactory instance
        // Do something...
        // Close the EntityManager instance
    }
}

This code defect and improper use of the JPA EntityManagerFactory was causing a leak, or accumulation, of metadata repository instances within the OpenJPA _listeners data structure, as demonstrated by the earlier JVM heap dump analysis.
The solution of the problem was to centralize the management & life cycle of the thread-safe javax.persistence.EntityManagerFactory via the Singleton pattern. The final solution was implemented as per below:

Create and maintain only one static instance of javax.persistence.EntityManagerFactory per application class loader, implemented via the Singleton pattern.
Create and dispose of new instances of EntityManager for each application request.

Please review this discussion from Stackoverflow, as the solution we implemented is quite similar. Following the implementation of the solution in our production environment, no more Java heap OldGen memory leaks have been observed.   Reference: OpenJPA: Memory Leak Case Study from our JCG partner Pierre-Hugues Charbonneau at the Java EE Support Patterns & Java Tutorial blog. ...
android-logo

Android Game Development with libgdx – Collision Detection, Part 4

This is the fourth part of the libgdx tutorial in which we create a 2d platformer prototype modeled after Star Guard. You can read up on the previous articles if you are interested in how we got here. Part 1a Part 1b Part 2 Part 3 Following the tutorial so far, we managed to have a tiny world consisting of some blocks, and our hero called Bob who can move around in a nice way, but the problem is, he doesn’t have any interaction with the world. If we switch the tile rendering back on, we would see Bob happily walking and jumping around without the blocks impeding him. All the blocks get ignored. This happens because we never check whether Bob actually collides with the blocks. Collision detection is nothing more than detecting when two or more objects collide. In our case we need to detect when Bob collides with the blocks. What exactly is being checked is whether Bob’s bounding box intersects with the bounding boxes of the respective blocks. In case it does, we have detected a collision. We take note of the objects (Bob and the block(s)) and act accordingly. In our case we need to stop Bob from advancing, falling or jumping, depending on which side of the block Bob collided with. The quick and dirty way The easy and quick way to do it is to iterate through all the blocks in the world and check if the blocks collide with Bob’s current bounding box. This works well in our tiny 10×7 world, but if we have a huge world with thousands of blocks, doing the detection every frame becomes impossible without affecting performance. A better way To optimise the above solution we will selectively pick the tiles that are potential candidates for collision with Bob. By design, the game world consists of blocks whose bounding boxes are axis aligned and whose width and height are both 1 unit. In this case our world looks like the following image (all the blocks/tiles are in unit blocks): The red squares represent the bounds where the blocks would have been placed, if any.
The yellow ones are placed blocks. Now we can pick a simple 2 dimensional array (matrix) for our world and each cell will hold a Block, or null if there is none. This is the map container. We always know where Bob is, so it is easy to work out which cell we are in. The easy and lazy way to get the block candidates that Bob can collide with is to pick all the surrounding cells and check if Bob’s current bounding box overlaps with one of the tiles that has a block. Because we also control Bob’s movement, we have access to his direction and movement speed. This narrows our options down even further. For example, if Bob is heading left we have the following scenario: The above image gives us 2 candidate cells (tiles) to check if the objects in those cells collide with Bob. Remember that gravity is constantly pulling Bob down, so we will always have to check for tiles on the Y axis. Based on the vertical velocity’s sign we know whether Bob is jumping or falling. If Bob is jumping, the candidate will be the tile (cell) above him. A negative vertical velocity means that Bob is falling, so we pick the tile from underneath him as a candidate. If he is heading left (his velocity is < 0) then we pick the candidate on his left. If he’s heading right (velocity > 0) then we pick the tile to his right. If the horizontal velocity is 0, we don’t need to bother with the horizontal candidates. We need to make this optimal because we will be doing it every frame, and we will have to do it for every enemy, bullet and whatever collidable entities the game will have. What happens upon collision? This is very simple in our case: Bob’s movement on that axis stops. His velocity on that axis will be set to 0. This can be done only if the 2 axes are checked separately. We will check for the horizontal collision first and, if Bob collides, we stop his horizontal movement. We do the exact same thing on the vertical (Y) axis. It is as simple as that.
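The per-axis check described above does not depend on libgdx at all; here is a compact sketch in Python (class and function names are mine, not libgdx’s) of the bounding-box overlap test and the per-axis velocity zeroing:

```python
class Rect:
    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h

    def overlaps(self, o):
        # Axis-aligned boxes intersect iff they overlap on both axes.
        return (self.x < o.x + o.w and self.x + self.w > o.x and
                self.y < o.y + o.h and self.y + self.h > o.y)

def resolve_axis(bounds, vel, blocks, axis):
    """Displace a copy of the bounds by this frame's velocity on one axis;
    on a hit, zero that velocity component (movement stops on that axis only)."""
    moved = Rect(bounds.x, bounds.y, bounds.w, bounds.h)
    if axis == 'x':
        moved.x += vel[0]
    else:
        moved.y += vel[1]
    for b in blocks:
        if b is not None and moved.overlaps(b):
            if axis == 'x':
                vel[0] = 0.0
            else:
                vel[1] = 0.0
            break
    return vel
```

Note the strict inequalities: a box standing exactly next to a wall does not overlap it, which is what lets Bob be repositioned flush against a block without re-triggering a collision. Checking 'x' first and then 'y', as the controller does, lets Bob slide along a wall while still falling.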
Simulate first and render after We need to be careful when we check for collision. We humans tend to think before we act. If we are facing a wall, we don’t just walk into it; we see it, estimate the distance, and stop before we hit the wall. Imagine if you were blind. You would need a different sensor than your eyes. You would use your arm to reach out, and if you felt the wall, you’d stop before you walked into it. We can translate this to Bob, but instead of his arm we will use his bounding box. First we displace his bounding box on the X axis by the distance Bob would have moved according to his velocity, and check if the new position would hit the wall (if the bounding box intersects with the block’s bounding box). If yes, then a collision has been detected. Bob might have been some distance away from the wall, and in that frame he would have covered the distance to the wall and some more. If that’s the case, we will simply position Bob next to the wall and align his bounding box with the current position. We also set Bob’s speed to 0 on that axis. The following diagram is an attempt to show just what I have described. The green box is where Bob currently stands. The displaced blue box is where Bob should be after this frame. The purple area is how much Bob is into the wall. That is the distance we need to push Bob back so he stands next to the wall. We just set his position next to the wall to achieve this without too much computation. The code for collision detection is actually very simple. It all resides in BobController.java. There are a few other changes too, which I should mention prior to the controller.
The World.java has the following changes:

public class World {

    /** Our player controlled hero **/
    Bob bob;
    /** A world has a level through which Bob needs to go through **/
    Level level;

    /** The collision boxes **/
    Array<Rectangle> collisionRects = new Array<Rectangle>();

    // Getters -----------

    public Array<Rectangle> getCollisionRects() {
        return collisionRects;
    }
    public Bob getBob() {
        return bob;
    }
    public Level getLevel() {
        return level;
    }
    /** Return only the blocks that need to be drawn **/
    public List<Block> getDrawableBlocks(int width, int height) {
        int x = (int)bob.getPosition().x - width;
        int y = (int)bob.getPosition().y - height;
        if (x < 0) {
            x = 0;
        }
        if (y < 0) {
            y = 0;
        }
        int x2 = x + 2 * width;
        int y2 = y + 2 * height;
        if (x2 > level.getWidth()) {
            x2 = level.getWidth() - 1;
        }
        if (y2 > level.getHeight()) {
            y2 = level.getHeight() - 1;
        }

        List<Block> blocks = new ArrayList<Block>();
        Block block;
        for (int col = x; col <= x2; col++) {
            for (int row = y; row <= y2; row++) {
                block = level.getBlocks()[col][row];
                if (block != null) {
                    blocks.add(block);
                }
            }
        }
        return blocks;
    }

    // --------------------

    public World() {
        createDemoWorld();
    }

    private void createDemoWorld() {
        bob = new Bob(new Vector2(7, 2));
        level = new Level();
    }
}

#09 – collisionRects is just a simple array where I will put the rectangles Bob is colliding with in that particular frame. This is only for debug purposes and to show the boxes on the screen. It can and will be removed from the final game.
#13 – Just provides access to the collision boxes.
#23 – getDrawableBlocks(int width, int height) is the method that returns the list of Block objects that are in the camera’s window and will be rendered. This method is just to prepare the application to render huge worlds without performance loss. It’s a very simple algorithm: get the blocks surrounding Bob within a distance and return those to render. It’s an optimisation.
#61 – Creates the Level declared in line #06.
It’s good to move out the level from the world, as we want our game to have multiple levels. This is the obvious first step. The Level.java can be found here. As I mentioned before, the actual collision detection is in BobController.java.

public class BobController {
    // ... code omitted ... //
    private Array<Block> collidable = new Array<Block>();
    // ... code omitted ... //

    public void update(float delta) {
        processInput();
        if (grounded && bob.getState().equals(State.JUMPING)) {
            bob.setState(State.IDLE);
        }
        bob.getAcceleration().y = GRAVITY;
        bob.getAcceleration().mul(delta);
        bob.getVelocity().add(bob.getAcceleration().x, bob.getAcceleration().y);
        checkCollisionWithBlocks(delta);
        bob.getVelocity().x *= DAMP;
        if (bob.getVelocity().x > MAX_VEL) {
            bob.getVelocity().x = MAX_VEL;
        }
        if (bob.getVelocity().x < -MAX_VEL) {
            bob.getVelocity().x = -MAX_VEL;
        }
        bob.update(delta);
    }

    private void checkCollisionWithBlocks(float delta) {
        bob.getVelocity().mul(delta);
        Rectangle bobRect = rectPool.obtain();
        bobRect.set(bob.getBounds().x, bob.getBounds().y, bob.getBounds().width, bob.getBounds().height);
        int startX, endX;
        int startY = (int) bob.getBounds().y;
        int endY = (int) (bob.getBounds().y + bob.getBounds().height);
        if (bob.getVelocity().x < 0) {
            startX = endX = (int) Math.floor(bob.getBounds().x + bob.getVelocity().x);
        } else {
            startX = endX = (int) Math.floor(bob.getBounds().x + bob.getBounds().width + bob.getVelocity().x);
        }
        populateCollidableBlocks(startX, startY, endX, endY);
        bobRect.x += bob.getVelocity().x;
        world.getCollisionRects().clear();
        for (Block block : collidable) {
            if (block == null) continue;
            if (bobRect.overlaps(block.getBounds())) {
                bob.getVelocity().x = 0;
                world.getCollisionRects().add(block.getBounds());
                break;
            }
        }
        bobRect.x = bob.getPosition().x;
        startX = (int) bob.getBounds().x;
        endX = (int) (bob.getBounds().x + bob.getBounds().width);
        if (bob.getVelocity().y < 0) {
            startY = endY = (int) Math.floor(bob.getBounds().y + bob.getVelocity().y);
        } else {
            startY = endY = (int) Math.floor(bob.getBounds().y + bob.getBounds().height + bob.getVelocity().y);
        }
        populateCollidableBlocks(startX, startY, endX, endY);
        bobRect.y += bob.getVelocity().y;
        for (Block block : collidable) {
            if (block == null) continue;
            if (bobRect.overlaps(block.getBounds())) {
                if (bob.getVelocity().y < 0) {
                    grounded = true;
                }
                bob.getVelocity().y = 0;
                world.getCollisionRects().add(block.getBounds());
                break;
            }
        }
        bobRect.y = bob.getPosition().y;
        bob.getPosition().add(bob.getVelocity());
        bob.getBounds().x = bob.getPosition().x;
        bob.getBounds().y = bob.getPosition().y;
        bob.getVelocity().mul(1 / delta);
    }

    private void populateCollidableBlocks(int startX, int startY, int endX, int endY) {
        collidable.clear();
        for (int x = startX; x <= endX; x++) {
            for (int y = startY; y <= endY; y++) {
                if (x >= 0 && x < world.getLevel().getWidth() && y >= 0 && y < world.getLevel().getHeight()) {
                    collidable.add(world.getLevel().get(x, y));
                }
            }
        }
    }
    // ... code omitted ... //
}

The full source code is on github and I have tried to document it, but I will go through the important bits here.
#03 – the collidable array will hold, each frame, the blocks that are the candidates for collision with Bob. The update method is more concise now.
#07 – processing the input as usual; nothing changed there.
#08 – #09 – resets Bob’s state if he’s not in the air.
#12 – Bob’s acceleration is transformed to frame time. This is important as a frame can be very small (usually 1/60 second) and we want to do this conversion just once in a frame.
#13 – compute the velocity in frame time.
#14 – is highlighted because this is where the collision detection is happening. I’ll go through that method in a bit.
#15 – #22 – Applies the DAMP to Bob to stop him and makes sure that Bob is not exceeding his maximum velocity.
#25 – the checkCollisionWithBlocks(float delta) method, which sets Bob’s state, position and other parameters based on his collision (or not) with the blocks in the level.
#26 – transform velocity to frame time.
#27 – #28 – We use a Pool to obtain a Rectangle which is a copy of Bob’s current bounding box. This rectangle will be displaced to where Bob should be this frame and checked against the candidate blocks.
#29 – #36 – These lines identify the start and end coordinates in the level matrix that are to be checked for collision. The level matrix is just a 2 dimensional array and each cell represents one unit, so it can hold one block. Check Level.java.
#31 – The Y coordinates are set since we only check the horizontal axis for now.
#32 – checks if Bob is heading left and, if so, identifies the tile to his left. The math is straightforward, and I used this approach so that if I decide I need some other measurements for cells, this will still work.
#37 – populates the collidable array with the blocks within the range provided. In this case it is either the tile on the left or on the right, depending on Bob’s bearing. Also note that if there is no block in that cell, the result is null.
#38 – this is where we displace the copy of Bob’s bounding box. The new position of bobRect is where Bob should be in normal circumstances. But only on the X axis.
#39 – remember the collisionRects from the world for debugging? We clear that array now so we can populate it with the rectangles that Bob is colliding with.
#40 – #47 – This is where the actual collision detection on the X axis is happening. We iterate through all the candidate blocks (in our case it will be 1) and check if the block’s bounding box intersects Bob’s displaced bounding box. We use the bobRect.overlaps method, which is part of the Rectangle class in libgdx and returns true if the 2 rectangles overlap. If there is an overlap, we have a collision, so we set Bob’s velocity to 0 (line #43), add the rectangle to world.collisionRects and break out of the detection.
#48 – We reset the bounding box’s position because we are moving on to check collision on the Y axis, disregarding the X.
#49 – #68 – is exactly the same as before, but on the Y axis. There is one additional instruction (#61 – #63) that sets the grounded state to true if a collision was detected while Bob was falling.
#69 – Bob’s rectangle copy is reset.
#70 – Bob’s new velocity is set, which will be used to compute Bob’s new position.
#71 – #72 – Bob’s real bounds’ position is updated.
#73 – We transform the velocity back to the base measurement units. This is very important.
And that is all for the collision of Bob with the tiles. Of course we will evolve this as more entities are added, but for now it is as good as it gets. We cheated here a bit: in the diagram I stated that I would place Bob next to the Block when colliding, but in the code I completely ignore the repositioning. Because the distance is so tiny that we can’t even see it, it’s OK. It can be added; it won’t make much difference. If you decide to add it, make sure you set Bob’s position next to the Block, a tiny bit farther away, so the overlap function will return false. There is a small addition to the WorldRenderer.java too.

public class WorldRenderer {
    // ... code omitted ... //
    public void render() {
        spriteBatch.begin();
        drawBlocks();
        drawBob();
        spriteBatch.end();
        drawCollisionBlocks();
        if (debug)
            drawDebug();
    }

    private void drawCollisionBlocks() {
        debugRenderer.setProjectionMatrix(cam.combined);
        debugRenderer.begin(ShapeType.FilledRectangle);
        debugRenderer.setColor(new Color(1, 1, 1, 1));
        for (Rectangle rect : world.getCollisionRects()) {
            debugRenderer.filledRect(rect.x, rect.y, rect.width, rect.height);
        }
        debugRenderer.end();
    }
    // ... code omitted ... //
}

The addition is the drawCollisionBlocks() method, which draws a white box wherever a collision is happening. It’s all for your viewing pleasure. The result of the work we put in so far should be similar to this video: This article should wrap up basic collision detection.
Next we will look at extending the world, camera movement, creating enemies, using weapons and adding sound. Please share your ideas on what should come first, as all are important. The source code for this project can be found here: https://github.com/obviam/star-assault. You need to check out the branch part4. To check it out with git: git clone -b part4 git@github.com:obviam/star-assault.git. You can also download it as a zip file. There is also a nice platformer in the libgdx tests directory, SuperKoalio. It demonstrates a lot of things I have covered so far, it’s much shorter, and for those with some libgdx experience it is very helpful.   Reference: Android Game Development with libgdx – Collision Detection, Part 4 from our JCG partner Impaler at the Against the Grain blog. ...
apache-pig-logo

Herding Apache Pig – using Pig with Perl and Python

The past week or so we got some new data that we had to process quickly. There are quite a few technologies out there to quickly churn map/reduce jobs on Hadoop (Cascading, Hive, Crunch, Jaql to name a few of many); my personal favorite is Apache Pig. I find that the imperative nature of Pig makes it relatively easy to understand what’s going on and where the data is going, and that it produces efficient enough map/reduces. On the down side, Pig lacks control structures, so working with Pig also means you need to extend it with user defined functions (UDFs) or Hadoop streaming. Usually I use Java or Scala for writing UDFs, but it is always nice to try something new, so we decided to check out some other technologies – namely Perl and Python. This post highlights some of the pitfalls we met and how to work around them. Yuval, who was working with me on this mini-project, likes Perl (to each his own, I suppose) so we started with that. Searching for Pig and Perl examples, we found something like the following:

A = LOAD 'data';
B = STREAM A THROUGH `stream.pl`;

The first pitfall here is that the Perl script name is surrounded by backticks (the character on the tilde (~) key) and not single quotes (so in the script above 'data' is surrounded by single quotes and `stream.pl` is surrounded by backticks). The second pitfall was that the code above works nicely when you use Pig in local mode (pig -x local) but it failed when we tried to run it on the cluster. It took some head scratching and some trial and error, but eventually Yuval came up with the following:

DEFINE CMD `perl stream.pl` ship ('/PATH/stream.pl');
A = LOAD 'data';
B = STREAM A THROUGH CMD;

Basically we’re telling Pig to copy the Perl script to HDFS so that it would be accessible on all the nodes. So, Perl worked pretty well, but since we’re using Hadoop streaming and get the data via stdin, we lose all the context of the data that Pig knows.
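For what it’s worth, a streaming script in Python has the same shape as the Perl one: read tab-separated fields from stdin, write tab-separated fields to stdout. A minimal sketch (the field layout and the uppercasing transform are made-up examples, not from our actual job):

```python
import sys

def transform(line):
    # Hypothetical per-record transform: uppercase the second tab-separated field.
    fields = line.rstrip('\n').split('\t')
    if len(fields) > 1:
        fields[1] = fields[1].upper()
    return '\t'.join(fields)

if __name__ == '__main__':
    for line in sys.stdin:
        sys.stdout.write(transform(line) + '\n')
```

It would be shipped to the cluster with the same trick as the Perl script, e.g. DEFINE CMD `python stream.py` ship ('/PATH/stream.py');.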
We also need to emulate the textual representation of bags and tuples so the returned data will be available to pig for further work. This is all workable, but not fun to work with (in my opinion, anyway). I decided to write pig UDFs in python. Python can be used with Hadoop streaming, like perl above, but it also integrates more tightly with Pig via jython (i.e. the python UDF is compiled into Java and shipped to the cluster as part of the jar Pig generates for the map/reduce anyway). Pig UDFs are better than streaming as you get Pig's schema for the parameters and you can tell Pig the schema you return for your output. UDFs in python are especially nice as the code is almost 100% regular python and Pig does the mapping for you (for instance, a bag of tuples in pig is translated to a list of tuples in python, etc.). Actually, the only difference is that if you want Pig to know about the data types you return from the python code you need to annotate the method with @outputSchema, e.g. a simple UDF that gets the month as an int from a date string in the format YYYY-MM-DD HH:MM:SS:

    @outputSchema('num:int')
    def getMonth(strDate):
        try:
            dt, _, _ = strDate.partition('.')
            return datetime.strptime(dt, '%Y-%m-%d %H:%M:%S').month
        except AttributeError:
            return 0
        except IndexError:
            return 0
        except ValueError:
            return 0

Using the UDF is as simple as declaring the python file where the UDF is defined. Assuming our UDF is in a file called utils.py, it would be declared as follows:

    Register 'utils.py' using jython as utils;

And then using that UDF would go something like:

    A = LOAD 'data' using PigStorage('|') as (dateString:chararray);
    B = FOREACH A GENERATE utils.getMonth(dateString) as month;

Again, like in the perl case, there are a few pitfalls here. For one, the python script and the pig script need to be in the same directory (relative paths only work in local mode). The more annoying pitfall hit me when I wanted to import some python libs (e.g. datetime in the example, which is imported using "from datetime import datetime"). There was no way I could come up with to make this work. The solution I did come up with eventually was to take a jython standalone .jar (a jar with the common python libraries included) and replace Pig's jython jar (in the pig lib directory) with the standalone one. There's probably a nicer way to do this (and I'd be happy to hear about it), but this worked for me. It only has to be done on the machine where you run the pig script, as the python code gets compiled and shipped to the cluster as part of the jar file Pig generates anyway. Working with Pig and python has been really nice. I liked writing pig UDFs in python much more than writing them in Java or Scala, for that matter. The two main reasons are that a lot of the Java cruft for integrating with Pig is just not there, so I can focus on just solving the business problem, and that with both Pig and Python being "scripts" the feedback loop from making a change to seeing it work is much shorter. Anyway, Pig also supports Javascript and Ruby UDFs, but these would have to wait for next time.   Reference: Herding Apache Pig – using Pig with Perl and Python from our JCG partner Arnon Rotem-Gal-Oz at the Cirrus Minor blog. ...
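As a quick sanity check on the getMonth UDF described above: since its body is ordinary Python, the parsing logic can be exercised outside Pig with a plain interpreter (the @outputSchema annotation only has meaning once the file is registered through jython, so it is dropped here).

```python
from datetime import datetime

def getMonth(strDate):
    # Same body as the jython UDF; @outputSchema('num:int') is omitted
    # because it only matters when Pig registers the file via jython.
    try:
        dt, _, _ = strDate.partition('.')
        return datetime.strptime(dt, '%Y-%m-%d %H:%M:%S').month
    except (AttributeError, IndexError, ValueError):
        return 0

print(getMonth('2012-11-03 14:30:00'))  # prints 11
print(getMonth('garbage'))              # prints 0 (malformed input)
print(getMonth(None))                   # prints 0 (no string at all)
```

Note how every failure mode the UDF guards against (a non-string, a malformed date) quietly maps to 0, which is exactly the behavior Pig sees on bad rows.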

The Good, the Bad, and the Ugly Backlog

The product backlog is an important tool: It lists ideas, requirements, and new insights. But is it always the right tool to use? This post discusses the strengths of a traditional product backlog together with its limitations. It provides advice on when to use the backlog, and when other tools may be better suited. The Good A traditional product backlog lists the outstanding work necessary to create a product. This includes ideas and requirements, architectural refactoring work, and defects. I find its greatest strength to be its simplicity, which makes it incredibly flexible to use: Teams can work with the product backlog in the way that's best for their product. Items can be described as user stories or as use cases, for instance, and different prioritisation techniques can be applied. This flexibility makes it possible to use the backlog for a wide range of products, from mobile apps to mainframe systems. The second great benefit is the backlog's ability to support sprint and release planning. This is achieved by ordering its items from top to bottom, and by detailing the items according to their priority. Small, detailed, and prioritised items at the top are the right input for the sprint planning meeting. Having the remainder of the backlog ordered makes it possible to anticipate when the items are likely to be delivered (if a release burndown chart is also used). The Bad While simplicity is its greatest strength, I also find it a weakness: Personas to capture the users and customers with their needs don't fit into a list, nor do scenarios and storyboards. The same is true for the user interface design, and for operational qualities such as performance or interoperability. As a consequence, these artefacts are kept separately, for instance on a wiki or in a project management tool, or they are overlooked, in my experience. While the latter can be very problematic, the former isn't great either: information that belongs together is stored separately.
This makes it more difficult to keep the various artefacts in sync, and it can cause inconsistencies and errors. Similarly, working with a product backlog that consists of a list makes sense when release planning is feasible and desirable. For brand-new products and major product updates, however, the backlog items have to emerge: Some will be missing initially and are discovered through stakeholder feedback; others are too sketchy or are likely to change significantly. To make things worse, a team working on a new product may not be able to estimate all product backlog items at the outset, as the team members may first have to find out how to best implement the software. The Ugly I have seen quite a few ugly product backlogs in my work, including disguised requirements specifications with too much detail, long wish lists containing many hundreds of items, and "dessert backlogs" consisting only of a handful of loosely related stories. While that's not the fault of the product backlog, I believe that its simplicity does not always give teams the support they need, particularly when a new product is developed. Conclusion A traditional, linear product backlog works best when the personas, the user interaction, the user interface design, and the operational qualities are known, and do not have to be stated. This is usually the case for incremental product updates. For new products and major updates, however, I find that a traditional product backlog can be limiting, and I prefer to use my Product Canvas. (But the canvas would most likely be overkill for an incremental product update or a maintenance release!)   Reference: The Good, the Bad, and the Ugly Backlog from our JCG partner Roman Pichler at the Pichler's blog blog. ...
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.