What's New Here?


Most popular application servers

This is the second post in the series where we publish statistical data about Java installations. The dataset originates from the free Plumbr installations out there, totalling 1,024 different environments collected during the past six months. The first post in the series analyzed the foundation: what OS the JVM runs on, whether the infrastructure is 32- or 64-bit, and which JVM vendor and version are used. In this post we are going to focus on the application servers used. It proved to be a more challenging task than originally expected – the best shot we had towards the goal was to extract it from the bootstrap classpath, with queries similar to “grep -i tomcat classpath.log”. Which was easy. As opposed to discovering that:

- Out of the 1,024 samples, 92 did not contain a reference to a bootstrap classpath at all. Which was our first surprise. Whether they were really run without any entries on the bootstrap classpath or our statistics just do not record all the entries properly – we failed to trace the reason. But nevertheless, this left us with 932 data points.
- Out of the remaining 932, we were unable to link 256 reports to any application server known to mankind. Before jumping to the conclusion that approx. 27% of the JVMs out there are running client-side programs, we tried to dig further:
  - 57 seemed to be launched using Maven plugins, which hide the actual runtime from us. But I can bet the vast majority of those are definitely not Swing applications.
  - 11 environments were running on the Play Framework, which does not use Java EE containers.
  - 6 environments were launched with the Scala runtime attached, so I assume these were also actually web applications.
  - 54 had loaded either JGoodies or Swing libraries, which makes them good candidates for being desktop applications.
  - 6 were running on Android. Which we don’t even support. If you guys can shed some light on how you managed to launch Plumbr on Android, let us know.
And the remaining 122 we just failed to categorize – they seemed to range from MQ solutions to batch processes to whatnot. But 676 reports did contain a reference to the Java EE container used, and the results are visible in the following diagram: The winner should not be a surprise to anyone – Apache Tomcat is used in 43% of the installations. The other places on the podium are a bit more surprising – Jetty coming in second with 23% of the deployments and JBoss third with 16%. The expected result was exactly the reverse, but apparently the gears have shifted during the last years. The next group contains GlassFish, Geronimo and WebLogic with 7, 6 and 3% of the deployment base respectively. Which is also somewhat surprising – just 20 WebLogic installations and WebSphere nowhere in sight at all. The remaining five containers altogether represent less than 2% of the installations. I guess the pragmatic-lean-KISS approach is finally starting to pay off and we are moving towards tools developers actually enjoy.   Reference: Most popular application servers from our JCG partner Vladimir Sor at the Plumbr Blog. ...

Cryptography Using JCA – Services In Providers

The Java Cryptography Architecture (JCA) is an extensible framework that enables you to perform cryptographic operations. JCA also promotes implementation independence (a program should not care about who is providing the cryptographic service) and implementation interoperability (a program should not be tied to a specific provider of a particular cryptographic service). JCA allows numerous cryptographic services, e.g. ciphers, key generators and message digests, to be bundled up in a java.security.Provider class and registered declaratively in a special file (java.security) or programmatically via the java.security.Security class (method ‘addProvider’).   Although JCA is a standard, different JDKs implement JCA differently. Between the Sun/Oracle and IBM JDKs, the IBM JDK is somewhat more ‘orderly’ than Oracle’s. For instance, IBM’s uber provider (com.ibm.crypto.provider.IBMJCE) implements the following keystore formats: JCEKS, PKCS12KS (PKCS12) and JKS. The Oracle JDK ‘spreads’ the keystore format implementations across the following providers:

- sun.security.provider.Sun – JKS
- com.sun.crypto.provider.SunJCE – JCEKS
- com.sun.net.ssl.internal.ssl.Provider – PKCS12

Despite the popular recommendation to write applications that do not point to a specific Provider class, there are some use cases that require an application to know exactly what services a Provider class is offering. This requirement becomes more prevalent when supporting multiple application servers that may be tightly coupled with a particular JDK, e.g. WebSphere bundled with the IBM JDK. I usually use Tomcat + Oracle JDK for development (more lightweight, faster), but my testing/production setup is WebSphere + IBM JDK. To further complicate matters, my project needs a hardware security module (HSM) which uses the JCA API via the provider class com.ncipher.provider.km.nCipherKM.
So, when I am at home (without access to the HSM), I would want to continue writing code but at least get the code tested against a JDK provider. I can then switch to the nCipherKM provider for another round of unit testing before committing the code to source control. The usual assumption is that one Provider class is enough, e.g. IBMJCE for IBM JDKs, SunJCE for Oracle JDKs. So the usual solution is to implement a class that specifies one provider, using reflection to avoid compile errors due to ‘Class Not Found’:

// For nShield HSM
Class c = Class.forName("com.ncipher.provider.km.nCipherKM");
Provider provider = (Provider) c.newInstance();

// For Oracle JDK
Class c = Class.forName("com.sun.crypto.provider.SunJCE");
Provider provider = (Provider) c.newInstance();

// For IBM JDK
Class c = Class.forName("com.ibm.crypto.provider.IBMJCE");
Provider provider = (Provider) c.newInstance();

This design was OK, until I encountered a NoSuchAlgorithmException error running some unit test cases on the Oracle JDK. And the algorithm I was using was RSA, a common algorithm! How can this be, the documentation says that RSA is supported! The same test cases worked fine on the IBM JDK. Upon further investigation, I realised, much to my dismay, that the SunJCE provider does not have an implementation of the KeyPairGenerator service for RSA. An implementation is, however, found in the provider class sun.security.rsa.SunRsaSign. So the assumption of ‘one provider to provide them all’ is broken. But thanks to JCA’s open API, a Provider object can be passed in when requesting a Service instance, e.g. KeyGenerator kgen = KeyGenerator.getInstance("AES", provider); To help with my inspection of the various Provider objects, I’ve furnished a JUnit test to pretty-print the various services of each registered Provider instance in a JDK.
package org.gizmo.jca;

import java.security.Provider;
import java.security.Provider.Service;
import java.security.Security;
import java.util.Comparator;
import java.util.SortedSet;
import java.util.TreeSet;

import javax.crypto.KeyGenerator;

import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.junit.Test;

public class CryptoTests {

    @Test
    public void testBouncyCastleProvider() throws Exception {
        Provider p = new BouncyCastleProvider();
        String info = p.getInfo();
        System.out.println(p.getClass() + " - " + info);
        printServices(p);
    }

    @Test
    public void testProviders() throws Exception {
        Provider[] providers = Security.getProviders();
        for (Provider p : providers) {
            String info = p.getInfo();
            System.out.println(p.getClass() + " - " + info);
            printServices(p);
        }
    }

    private void printServices(Provider p) {
        SortedSet<Service> services = new TreeSet<Service>(new ProviderServiceComparator());
        services.addAll(p.getServices());

        for (Service service : services) {
            String algo = service.getAlgorithm();
            System.out.println("==> Service: " + service.getType() + " - " + algo);
        }
    }

    /**
     * This is to sort the various Services to make it easier on the eyes...
     */
    private class ProviderServiceComparator implements Comparator<Service> {

        @Override
        public int compare(Service object1, Service object2) {
            String s1 = object1.getType() + object1.getAlgorithm();
            String s2 = object2.getType() + object2.getAlgorithm();
            return s1.compareTo(s2);
        }
    }
}

Anyway, if the algorithms you use are common and strong enough for your needs, the BouncyCastle provider can be used. It works well across JDKs (tested against IBM & Oracle). BouncyCastle does not support the JKS or JCEKS keystore formats, but if you are not fussy, the BC keystore format works just fine. BouncyCastle is also open source and can be freely included in your applications. Tip: JKS keystores cannot store SecretKeys.
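The RSA surprise described above can also be checked directly: Security.getProviders accepts a "ServiceType.Algorithm" filter and returns only the installed providers that implement that service. A minimal, self-contained sketch (the class name is my own; on an Oracle/OpenJDK you should see SunRsaSign, not SunJCE, listed for the RSA KeyPairGenerator):

```java
import java.security.Provider;
import java.security.Security;

public class ProviderLookup {

    // Returns the names of installed providers implementing the given
    // "ServiceType.Algorithm" filter, e.g. "KeyPairGenerator.RSA".
    static String[] providersFor(String filter) {
        Provider[] matches = Security.getProviders(filter);
        if (matches == null) {
            return new String[0]; // no installed provider offers this service
        }
        String[] names = new String[matches.length];
        for (int i = 0; i < matches.length; i++) {
            names[i] = matches[i].getName();
        }
        return names;
    }

    public static void main(String[] args) {
        for (String name : providersFor("KeyPairGenerator.RSA")) {
            System.out.println(name);
        }
    }
}
```

This is handy in unit tests: assert up front that the current JDK actually provides the services your code will request, instead of discovering a NoSuchAlgorithmException at runtime.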
You can verify that tip as homework. I hope this post will enlighten you to explore JCA further, or at least make you aware of the pitfalls of ‘blissful ignorance’ when working with JCA.   Reference: Cryptography Using JCA – Services In Providers from our JCG partner Allen Julia at the YK’s Workshop blog. ...

What’s in a name: The reason behind the naming of a few great projects

This is in continuation of my previous post, where I listed the reason behind the naming of several great projects. I have found some more languages, products and organizations. Why Name AMAZON: Amazon [which needs no introduction] was founded by Jeff Bezos. Bezos wanted a name for his company that began with “A” so that it would appear early in alphabetic order. He began looking through the dictionary and settled on “Amazon” because it was a place that was “exotic and different”, and it was one of the biggest rivers in the world, as he hoped his company would be! (Source) Why Name Geronimo: Apache Geronimo is an open source server runtime that integrates the best open source projects to create a Java/OSGi server runtime. Geronimo was an Apache leader who fought against the US and Mexican armies. There is also a controversy that the U.S. operation to kill Osama bin Laden used the code name “Geronimo”. Why Name Selenium: The open source Selenium web testing tool was named as a jab at its ostensible commercial rival, Mercury QuickTest Pro (Mercury was later bought by HP). Selenium mineral supplements are used as an antidote to mercury poisoning, and so the test tool was meant as an antidote to QTP! (Source) Why Name Django: Django is a high-level Python web framework, named after the jazz guitarist Django Reinhardt. Why Name Perl: The programming language Perl [created by Larry Wall] was originally named “Pearl”. Wall wanted to give the language a short name with positive connotations; he claims that he considered (and rejected) every three- and four-letter word in the dictionary. He also considered naming it after his wife Gloria. Wall discovered the existing PEARL programming language before Perl’s official release and changed the spelling of the name.
(Ref) Why Name Ruby: Ruby was conceived on February 24, 1993 by Yukihiro Matsumoto, who wished to create a new language that was more powerful than Perl and more object-oriented than Python. The main factor in choosing the name “Ruby” was that it was the birthstone of one of his colleagues. (Ref) Why Name Mozilla: Mozilla was the mascot of the now disbanded Netscape Communications Corporation. The name “Mozilla” was already in use at Netscape as the codename for Netscape Navigator 1.0. The term came from a combination of “Mosaic killer” (as Netscape wanted to displace NCSA Mosaic as the world’s number one web browser) and Godzilla. Apparently, Firefox, the flagship product of Mozilla, went through several name changes: originally titled Phoenix, then changed to Firebird, and now Firefox. (Source 1 Source 2) Why Name Yahoo!: The word “Yahoo” was invented by Jonathan Swift for Gulliver’s Travels. The name Yahoo! purportedly stands for “Yet Another Hierarchical Officious Oracle,” but Jerry Yang and David Filo insist they selected the name because they considered themselves yahoos. The very first name of Yahoo was “Akebono” [the name of a legendary Hawaiian sumo wrestler]. The Yahoo name was already registered to someone else, so an exclamation mark was added to make it Yahoo!. (Source) Why Name Windows: The name Windows fits into that philosophy. At the time of its original release late in 1985, most operating systems were single-tasking, text-only, and ran from a command line – like DOS, if you remember that. Graphical user interfaces (GUIs) were still new. The Mac, less than two years old at that time, was the only GUI-based system enjoying commercial success. The word windows simply described one of the most obvious differences between a GUI and a command-line interface. (Source) Why Name Pramati: For those who don’t know, Pramati builds application servers, just like JBoss, Apache etc. Pramati is a Sanskrit word which means “Exceptional Minds”. I worked there as a Java developer.
Why Name Scala: The name Scala is a blend of “scalable” and “language”, signifying that it is designed to grow with the demands of its users. James Strachan, the creator of Groovy, described Scala as a possible successor to Java. (Source)   Reference: What’s in a name: The reason behind the naming of a few great projects from our JCG partner Abhishek Somani at the Java, J2EE, Server blog. ...

Multiple Methods for Monitoring and Managing GlassFish 3

GlassFish 3 supports multiple methods of monitoring and management. In this post, I look briefly at the approaches GlassFish provides for administration, monitoring, and management. GlassFish Admin Console GlassFish’s web-based Admin Console GUI is probably the best-known interface for GlassFish administration. By default, it is accessed via the URL http://localhost:4848/ once GlassFish is running. The two screen snapshots below provide a taste of this approach, but I don’t look any deeper at this option here, as the interface is fairly easy to learn and use once logged in. GlassFish Admin Command Line Interface The GlassFish Admin Console GUI offers the advantages of a GUI, such as ease of learning and use, but also comes with the drawbacks of a GUI (it can take longer to get through the ‘overhead’ of the GUI for things that are easily done from the command line, and it does not work as well in scripts and headless environments). In some cases, a command-line approach is preferred, and GlassFish supports command-line administration with the GlassFish Admin Command Line Interface. Running asadmin start-domain starts a Domain in GlassFish. The command asadmin help can be used to learn more about the available commands. A very small snippet from the top of this help output is shown next:

Utility Commands                                          asadmin(1m)

NAME
    asadmin - utility for performing administrative tasks for Oracle
    GlassFish Server

SYNOPSIS
    asadmin [--host host] [--port port] [--user admin-user]
    [--passwordfile filename] [--terse={true|false}]
    [--secure={false|true}] [--echo={true|false}]
    [--interactive={true|false}] [--help]
    [subcommand [options] [operands]]

DESCRIPTION
    Use the asadmin utility to perform administrative tasks for Oracle
    GlassFish Server. You can use this utility instead of the
    Administration Console interface.
As the beginning of the asadmin help indicates, the asadmin utility is an alternative to the GUI-based ‘Administration Console interface.’ There are numerous sub-commands available, and some of those are listed here:

- list-applications to list deployed applications
- deploy and other deployment subcommands
- version to see the version of GlassFish (shown in the screen snapshot below)
- list-commands to list the available commands [portion of output shown in the screen snapshot below]

Additional information regarding the GlassFish Admin Command Line Interface is available in Learning GlassFish v3 Command Line Administration Interface (CLI). GlassFish JMX/AMX The two approaches shown in this post so far for monitoring and managing GlassFish (the web-based Admin Console GUI and the GlassFish Admin Command Line Interface) are specific to GlassFish. GlassFish also supports monitoring and management via Java Management Extensions (JMX), including JSR 77 (‘J2EE Management’), as I have blogged about before in my post Simple Remote JMX with GlassFish. Because GlassFish supports a JMX interface, it can be easily monitored and managed with readily available tools such as JConsole and JVisualVM. Besides the MBeans that GlassFish exposes itself, the JVM has had built-in MBeans since J2SE 5 that can be monitored in relation to the hosted GlassFish instances as well. The next set of images demonstrates using JConsole to view MBeans exposed via GlassFish and the JVM. The first image shows the standard JVM Platform MBeans available, and the images following that one show GlassFish-specific MBeans, including the amx-support and jmxremote domains.
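Beyond JConsole and JVisualVM, the same MBeans can be reached programmatically through the JDK's JMX remote API. A hedged sketch (the class name is my own; localhost and 8686, GlassFish's default JMX connector port, are assumptions — the connection itself of course requires a running GlassFish with remote JMX enabled):

```java
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class GlassFishJmxClient {

    // Builds the standard RMI connector URL used by JConsole-style clients.
    static JMXServiceURL serviceUrl(String host, int port) {
        try {
            return new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi");
        } catch (java.net.MalformedURLException e) {
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        JMXServiceURL url = serviceUrl("localhost", 8686);
        System.out.println("Connecting to " + url);
        // Guarded so the sketch compiles and runs without a server; pass any
        // argument to actually attempt the connection.
        if (args.length > 0) {
            JMXConnector connector = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection mbsc = connector.getMBeanServerConnection();
                System.out.println("MBean count: " + mbsc.getMBeanCount());
            } finally {
                connector.close();
            }
        }
    }
}
```

From the MBeanServerConnection you can then query the amx-support domain and invoke bootAMX, just as described below for JConsole.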
When the bootAMX operation of the boot-amx MBean (amx-support domain) is invoked on that latter MBean, the full complement of AMX MBeans is available, as shown in the remainder of the images. GlassFish REST The Oracle GlassFish Server 3.1 Administration Guide includes a section called ‘Using REST Interfaces to Administer GlassFish Server’ which states that ‘GlassFish Server provides representational state transfer (REST) interfaces to enable you to access monitoring and configuration data for GlassFish Server.’ It goes on to suggest that client applications such as web browsers, cURL, and GNU Wget can be used to interact with GlassFish via the Jersey-based REST interfaces. Of course, as this page also points out, any tool written in any language that handles REST-based interfaces can be used in conjunction with GlassFish’s REST support. Not surprisingly, the GlassFish REST APIs are exposed via URLs over HTTP. The previously cited Admin Guide states that configuration/management operations are accessed via URLs of the form http://host:port/management/domain/path and monitoring operations are accessed via URLs of the form http://host:port/monitoring/domain/path. One of the easiest ways to use GlassFish’s REST interfaces is via a web browser using the URLs mentioned earlier (http://localhost:4848/management/domain/ and http://localhost:4848/monitoring/domain/ for example). The next three screen snapshots attempt to give a taste of this style of access. The middle image shows that monitoring needs to be enabled in GlassFish. Using a web browser to interact with GlassFish for management and monitoring is easy, but this can also be done with the Web Admin Console I covered at the beginning of this blog post. The real advantage of the REST-based interface is the ability to call it from other client tools, especially custom-built tools and scripts. For example, one can write scripts in Groovy, Python, Ruby, and other scripting languages to interact with GlassFish.
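The same idea works in plain Java: build the documented URL forms and issue an HTTP GET. A minimal sketch (class name is my own; host, port and the Accept header are assumptions, and the request itself needs a running, monitoring-enabled GlassFish):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class GlassFishRestClient {

    // Builds the documented URL forms: "management" for configuration
    // operations, "monitoring" for monitoring data.
    static String restUrl(String kind, String host, int port, String path) {
        return "http://" + host + ":" + port + "/" + kind + "/domain/" + path;
    }

    public static void main(String[] args) throws Exception {
        URL url = new URL(restUrl("monitoring", "localhost", 4848, ""));
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Ask for JSON rather than the browser-oriented HTML representation.
        conn.setRequestProperty("Accept", "application/json");
        BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        in.close();
    }
}
```

Swapping "monitoring" for "management" and appending a path such as applications/application reaches the configuration side of the interface in the same way.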
Like GlassFish’s JMX-exposed APIs, GlassFish’s REST-exposed APIs allow custom scripts and tools to be used, or even written, to manage and monitor GlassFish. Jason Lee has published several posts on using GlassFish’s REST management/monitoring APIs, such as RESTful GlassFish Monitoring, Deploying Applications to GlassFish Using curl, and GlassFish Administration: The REST of the Story. Ant Tasks GlassFish provides several Ant tasks that allow Ant to be used for starting and stopping the GlassFish server, for deploying applications, and for performing other management tasks. A StackOverflow thread covers this approach. The next two screen snapshots demonstrate using the GlassFish Web Admin Console’s Update Tool -> Available Add-Ons feature to select the Ant Tasks for installation, and the contents of the ant-tasks.jar that is made available upon this selection. With the ant-tasks.jar JAR available, it can be placed on an Ant build’s classpath to script certain GlassFish actions via an Ant build. Conclusion The ability to manage and monitor an application server is one of its most important features. This post has looked at several of the most common methods GlassFish supports for its management, monitoring, and general administration.   Reference: Multiple Methods for Monitoring and Managing GlassFish 3 from our JCG partner Dustin Marx at the Inspired by Actual Events blog. ...

OpenJPA: Memory Leak Case Study

This article provides the complete root cause analysis details and resolution of a Java heap memory leak (an Apache OpenJPA leak) affecting an Oracle Weblogic Server 10.0 production environment. This post also demonstrates the importance of following the Java Persistence API best practices when managing the javax.persistence.EntityManagerFactory lifecycle.

Environment specifications

- Java EE server: Oracle Weblogic Portal 10.0
- OS: Solaris 10
- JDK: Oracle/Sun HotSpot JVM 1.5 32-bit @ 2 GB capacity
- Java Persistence API: Apache OpenJPA 1.0.x (JPA 1.0 specifications)
- RDBMS: Oracle 10g
- Platform type: Web Portal

Troubleshooting tools

- Quest Foglight for Java (Java heap monitoring)
- MAT (Java heap dump analysis)

Problem description & observations The problem was initially reported by our Weblogic production support team following production outages. An initial root cause analysis exercise revealed the following facts and observations:

- Production outages were observed on a regular basis after ~2 weeks of traffic.
- The failures were due to Java heap (OldGen) depletion, e.g. an OutOfMemoryError: Java heap space error found in the Weblogic logs.
- A Java heap memory leak was confirmed after reviewing the Java heap OldGen space utilization over time from the Foglight monitoring tool, along with the Java verbose GC historical data.

Following the discovery of the above problems, the decision was taken to move to the next phase of the RCA and perform a JVM heap dump analysis of the affected Weblogic (JVM) instances. JVM heap dump analysis ** A video explaining the following JVM heap dump analysis is now available here. In order to generate a JVM heap dump, the support team used the HotSpot 1.5 jmap utility, which generated a heap dump file (heap.bin) of about ~1.5 GB. The heap dump file was then analyzed using the Eclipse Memory Analyzer Tool. Now let’s review the heap dump analysis so we can understand the source of the OldGen memory leak.
MAT provides an initial Leak Suspects report, which can be very useful to highlight your high memory contributors. For our problem case, MAT was able to identify a leak suspect contributing almost 600 MB, or 40% of the total OldGen space capacity. At this point we found one instance of java.util.LinkedList using almost 600 MB of memory, loaded by one of our application's parent class loaders (@ 0x7e12b708). The next step was to understand the leaking objects along with the source of retention. MAT allows you to inspect any class loader instance of your application, providing you with capabilities to inspect the loaded classes & instances. Simply search for the desired object by providing the address, e.g. 0x7e12b708, and then inspect the loaded classes & instances by selecting List Objects > with outgoing references. As you can see from the above snapshot, the analysis was quite revealing. What we found was one instance of org.apache.openjpa.enhance.PCRegistry at the source of the memory retention; more precisely, the culprit was the _listeners field, implemented as a LinkedList. For your reference, the Apache OpenJPA PCRegistry is used internally to track the registered persistence-capable classes. Find below a snippet of the PCRegistry source code from Apache OpenJPA version 1.0.4 exposing the _listeners field.

/**
 * Tracks registered persistence-capable classes.
 *
 * @since 0.4.0
 * @author Abe White
 */
public class PCRegistry {
    // DO NOT ADD ADDITIONAL DEPENDENCIES TO THIS CLASS

    private static final Localizer _loc = Localizer.forPackage(PCRegistry.class);

    // map of pc classes to meta structs; weak so the VM can GC classes
    private static final Map _metas = new ConcurrentReferenceHashMap(ReferenceMap.WEAK, ReferenceMap.HARD);

    // register class listeners
    private static final Collection _listeners = new LinkedList();

Now the question is: why is the memory footprint of this internal data structure so big, and potentially leaking over time?
The next step was to deep dive into the _listeners LinkedList instance in order to review the leaking objects. We finally found that the leaking objects were actually the JDBC & SQL mapping definitions (metadata) used by our application in order to execute various queries against our Oracle database. A review of the JPA specifications, the OpenJPA documentation and the source code confirmed that the root cause was associated with a wrong usage of javax.persistence.EntityManagerFactory, such as the lack of closure of a newly created EntityManagerFactory instance. If you look closely at the above code snapshot, you will realize that the close() method is indeed responsible for cleaning up any recently used metadata repository instance. It also raised another concern: why are we creating such Factory instances over and over… The next step of the investigation was to perform a code walkthrough of our application code, especially around the life cycle management of the JPA EntityManagerFactory and EntityManager objects. Root cause and solution A code walkthrough of the application code revealed that the application was creating a new instance of EntityManagerFactory on each single request and not closing it properly.

public class Application {

    @Resource
    private UserTransaction utx = null;

    // Initialized on each application request and not closed!
    @PersistenceUnit(unitName = "UnitName")
    private EntityManagerFactory emf = Persistence.createEntityManagerFactory("PersistenceUnit");

    public EntityManager getEntityManager() {
        return this.emf.createEntityManager();
    }

    public void businessMethod() {
        // Create a new EntityManager instance from the newly created EntityManagerFactory instance
        // Do something...
        // Close the EntityManager instance
    }
}

This code defect and improper use of the JPA EntityManagerFactory was causing a leak, or accumulation, of metadata repository instances within the OpenJPA _listeners data structure, as demonstrated by the earlier JVM heap dump analysis.
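The shape of the fix is a lazily created, per-class-loader singleton holding the factory. Because javax.persistence may not be on every classpath, this self-contained sketch stands a plain Object in for the EntityManagerFactory; the real JPA calls are indicated in comments, and the class name is my own:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Holds a single, lazily created instance per class loader via
// double-checked locking, instead of creating a factory per request.
public final class SingletonFactory {

    static final AtomicInteger creations = new AtomicInteger();

    private static volatile Object instance;

    private SingletonFactory() {}

    static Object instance() {
        if (instance == null) {
            synchronized (SingletonFactory.class) {
                if (instance == null) {
                    creations.incrementAndGet();
                    instance = createExpensiveResource();
                }
            }
        }
        return instance;
    }

    // In the real fix this would be
    // Persistence.createEntityManagerFactory("PersistenceUnit"),
    // and a matching shutdown hook would call emf.close().
    private static Object createExpensiveResource() {
        return new Object();
    }

    public static void main(String[] args) {
        Object a = instance();
        Object b = instance();
        System.out.println("same instance: " + (a == b));
        System.out.println("creations: " + creations.get());
    }
}
```

Per-request code then only creates and closes cheap EntityManager instances obtained from the one shared factory, so no new metadata repository is ever registered in PCRegistry.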
The solution of the problem was to centralize the management & life cycle of the thread-safe javax.persistence.EntityManagerFactory via the Singleton pattern. The final solution was implemented as per below:

- Create and maintain only one static instance of javax.persistence.EntityManagerFactory per application class loader, implemented via the Singleton pattern.
- Create and dispose of new instances of EntityManager for each application request.

Please review this discussion from Stack Overflow, as the solution we implemented is quite similar. Following the implementation of the solution in our production environment, no more Java heap OldGen memory leaks have been observed.   Reference: OpenJPA: Memory Leak Case Study from our JCG partner Pierre-Hugues Charbonneau at the Java EE Support Patterns & Java Tutorial blog. ...

Android Game Development with libgdx – Collision Detection, Part 4

This is the fourth part of the libgdx tutorial in which we create a 2D platformer prototype modeled after Star Guard. You can read up on the previous articles if you are interested in how we got here: Part 1a, Part 1b, Part 2, Part 3. Following the tutorial so far, we managed to have a tiny world consisting of some blocks and our hero, called Bob, who can move around in a nice way, but the problem is he doesn’t have any interaction with the world. If we switch the tile rendering back on, we would see Bob happily walking and jumping around without the blocks impeding him. All the blocks get ignored. This happens because we never check whether Bob actually collides with the blocks. Collision detection is nothing more than detecting when two or more objects collide. In our case we need to detect when Bob collides with the blocks. What exactly is being checked is whether Bob’s bounding box intersects with the bounding boxes of the respective blocks. In case it does, we have detected a collision. We take note of the objects (Bob and the block(s)) and act accordingly. In our case we need to stop Bob from advancing, falling or jumping, depending on which side of the block Bob collided with. The quick and dirty way The easy and quick way to do it is to iterate through all the blocks in the world and check if the blocks collide with Bob’s current bounding box. This works well in our tiny 10×7 world, but if we have a huge world with thousands of blocks, doing the detection every frame becomes impossible without affecting performance. A better way To optimise the above solution, we will selectively pick the tiles that are potential candidates for collision with Bob. By design, the game world consists of blocks whose bounding boxes are axis aligned and whose width and height are both 1 unit. In this case our world looks like the following image (all the blocks/tiles are unit blocks): The red squares represent the bounds where the blocks would have been placed, if any.
The yellow ones are placed blocks. Now we can pick a simple 2-dimensional array (matrix) for our world, and each cell will hold a Block, or null if there is none. This is the map container. We always know where Bob is, so it is easy to work out which cell we are in. The easy and lazy way to get the block candidates that Bob can collide with is to pick all the surrounding cells and check if Bob’s current bounding box overlaps with one of the tiles that has a block. Because we also control Bob’s movement, we have access to his direction and movement speed. This narrows our options down even further. For example, if Bob is heading left, we have the following scenario: The above image gives us 2 candidate cells (tiles) to check if the objects in those cells collide with Bob. Remember that gravity is constantly pulling Bob down, so we will always have to check for tiles on the Y axis. Based on the vertical velocity’s sign we know whether Bob is jumping or falling. If Bob is jumping, the candidate will be the tile (cell) above him. A negative vertical velocity means that Bob is falling, so we pick the tile underneath him as a candidate. If he is heading left (his velocity is < 0) then we pick the candidate on his left. If he’s heading right (velocity > 0) then we pick the tile to his right. If the horizontal velocity is 0, we don’t need to bother with horizontal candidates. We need to make this optimal because we will be doing it every frame, for every enemy, bullet and whatever collidable entities the game will have. What happens upon collision? This is very simple in our case: Bob’s movement on that axis stops. His velocity on that axis will be set to 0. This can be done only if the 2 axes are checked separately. We check for horizontal collision first, and if Bob collides, we stop his horizontal movement. We do the exact same thing on the vertical (Y) axis. It is as simple as that.
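The per-axis scheme above can be sketched without any libgdx classes. A minimal, self-contained version (tile units, velocity assumed to be already scaled by the frame delta; class and method names are my own, with the Aabb type standing in for libgdx's Rectangle):

```java
public class CollisionSketch {

    // Minimal axis-aligned bounding box standing in for libgdx's Rectangle.
    static class Aabb {
        float x, y, width, height;

        Aabb(float x, float y, float width, float height) {
            this.x = x; this.y = y; this.width = width; this.height = height;
        }

        boolean overlaps(Aabb o) {
            return x < o.x + o.width && x + width > o.x
                && y < o.y + o.height && y + height > o.y;
        }
    }

    // Column of the candidate tile on the X axis: moving left checks the tile
    // at the displaced left edge, moving right the displaced right edge.
    static int candidateColumnX(Aabb bounds, float velX) {
        if (velX < 0) {
            return (int) Math.floor(bounds.x + velX);
        }
        return (int) Math.floor(bounds.x + bounds.width + velX);
    }

    // Simulate first: displace the box by the frame's horizontal velocity and
    // report whether it would intersect the block's bounding box.
    static boolean hitsOnX(Aabb bob, float velX, Aabb block) {
        Aabb moved = new Aabb(bob.x + velX, bob.y, bob.width, bob.height);
        return moved.overlaps(block);
    }

    public static void main(String[] args) {
        Aabb bob = new Aabb(0f, 0f, 1f, 1f);
        Aabb block = new Aabb(1.5f, 0f, 1f, 1f);
        System.out.println(hitsOnX(bob, 0.6f, block)); // true: would move into the block
        System.out.println(hitsOnX(bob, 0.2f, block)); // false: stops short
    }
}
```

On a hit, the real controller zeroes the velocity on that axis (and, as described below, clamps Bob's position flush against the wall); the same check is then repeated independently on the Y axis.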
Simulate first and render after We need to be careful when we check for collisions. We humans tend to think before we act. If we are facing a wall, we don’t just walk into it; we see it, estimate the distance, and stop before we hit the wall. Imagine if you were blind. You would need a different sensor than your eyes. You would use your arm to reach out, and if you felt the wall, you’d stop before walking into it. We can translate this to Bob, but instead of his arm we will use his bounding box. First we displace his bounding box on the X axis by the distance it would have taken Bob to move according to his velocity, and check if the new position would hit the wall (if the bounding box intersects with the block’s bounding box). If yes, then a collision has been detected. Bob might have been some distance away from the wall, and in that frame he would have covered the distance to the wall and some more. If that’s the case, we simply position Bob next to the wall and align his bounding box with the current position. We also set Bob’s speed to 0 on that axis. The following diagram is an attempt to show just what I have described. The green box is where Bob currently stands. The displaced blue box is where Bob should be after this frame. The purple area is how much Bob is into the wall. That is the distance we need to push Bob back so he stands next to the wall. We just set his position next to the wall to achieve this without too much computation. The code for collision detection is actually very simple. It all resides in BobController.java. There are a few other changes too, which I should mention prior to the controller.
The World.java has the following changes:

public class World {

    /** Our player controlled hero **/
    Bob bob;
    /** A world has a level through which Bob needs to go through **/
    Level level;

    /** The collision boxes **/
    Array<Rectangle> collisionRects = new Array<Rectangle>();

    // Getters -----------

    public Array<Rectangle> getCollisionRects() {
        return collisionRects;
    }
    public Bob getBob() {
        return bob;
    }
    public Level getLevel() {
        return level;
    }
    /** Return only the blocks that need to be drawn **/
    public List<Block> getDrawableBlocks(int width, int height) {
        int x = (int) bob.getPosition().x - width;
        int y = (int) bob.getPosition().y - height;
        if (x < 0) {
            x = 0;
        }
        if (y < 0) {
            y = 0;
        }
        int x2 = x + 2 * width;
        int y2 = y + 2 * height;
        if (x2 > level.getWidth()) {
            x2 = level.getWidth() - 1;
        }
        if (y2 > level.getHeight()) {
            y2 = level.getHeight() - 1;
        }

        List<Block> blocks = new ArrayList<Block>();
        Block block;
        for (int col = x; col <= x2; col++) {
            for (int row = y; row <= y2; row++) {
                block = level.getBlocks()[col][row];
                if (block != null) {
                    blocks.add(block);
                }
            }
        }
        return blocks;
    }

    // --------------------

    public World() {
        createDemoWorld();
    }
    private void createDemoWorld() {
        bob = new Bob(new Vector2(7, 2));
        level = new Level();
    }
}

#09 – collisionRects is just a simple array where I will put the rectangles Bob is colliding with in that particular frame. This is only for debug purposes and to show the boxes on the screen. It can and will be removed from the final game.
#13 – Just provides access to the collision boxes.
#23 – getDrawableBlocks(int width, int height) is the method that returns the list of Block objects that are in the camera’s window and will be rendered. This method is just to prepare the application to render huge worlds without performance loss. It’s a very simple algorithm: get the blocks surrounding Bob within a distance and return those to render. It’s an optimisation.
#61 – Creates the Level declared in line #06.
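The clamping math in getDrawableBlocks can be exercised in isolation. The sketch below reproduces the same window computation with plain parameters (the name `drawableRange` is mine, not the tutorial's):

```java
// Sketch of the camera-window culling math from getDrawableBlocks:
// clamp a (2*width) x (2*height) tile window around Bob to the level bounds.
public class CullWindow {
    /** Returns {x, y, x2, y2}, the inclusive tile range that should be drawn. */
    static int[] drawableRange(float bobX, float bobY, int width, int height,
                               int levelWidth, int levelHeight) {
        int x = (int) bobX - width;
        int y = (int) bobY - height;
        if (x < 0) x = 0;                          // never look left of the level
        if (y < 0) y = 0;                          // never look below the level
        int x2 = x + 2 * width;
        int y2 = y + 2 * height;
        if (x2 > levelWidth)  x2 = levelWidth - 1; // clamp to the last column
        if (y2 > levelHeight) y2 = levelHeight - 1;// clamp to the last row
        return new int[] {x, y, x2, y2};
    }

    public static void main(String[] args) {
        // Bob at (7, 2) in a 10x7 level, with a 5x4 half-window.
        int[] r = drawableRange(7f, 2f, 5, 4, 10, 7);
        System.out.println(r[0] + "," + r[1] + "," + r[2] + "," + r[3]); // 2,0,9,6
    }
}
```

Only the blocks inside that range get passed to the renderer, which is what keeps huge worlds cheap to draw.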
It’s good to move the level out of the world, as we want our game to have multiple levels. This is the obvious first step. The Level.java can be found here. As I mentioned before, the actual collision detection is in BobController.java.

public class BobController {
    // ... code omitted ... //
    private Array<Block> collidable = new Array<Block>();
    // ... code omitted ... //

    public void update(float delta) {
        processInput();
        if (grounded && bob.getState().equals(State.JUMPING)) {
            bob.setState(State.IDLE);
        }
        bob.getAcceleration().y = GRAVITY;
        bob.getAcceleration().mul(delta);
        bob.getVelocity().add(bob.getAcceleration().x, bob.getAcceleration().y);
        checkCollisionWithBlocks(delta);
        bob.getVelocity().x *= DAMP;
        if (bob.getVelocity().x > MAX_VEL) {
            bob.getVelocity().x = MAX_VEL;
        }
        if (bob.getVelocity().x < -MAX_VEL) {
            bob.getVelocity().x = -MAX_VEL;
        }
        bob.update(delta);
    }

    private void checkCollisionWithBlocks(float delta) {
        bob.getVelocity().mul(delta);
        Rectangle bobRect = rectPool.obtain();
        bobRect.set(bob.getBounds().x, bob.getBounds().y, bob.getBounds().width, bob.getBounds().height);
        int startX, endX;
        int startY = (int) bob.getBounds().y;
        int endY = (int) (bob.getBounds().y + bob.getBounds().height);
        if (bob.getVelocity().x < 0) {
            startX = endX = (int) Math.floor(bob.getBounds().x + bob.getVelocity().x);
        } else {
            startX = endX = (int) Math.floor(bob.getBounds().x + bob.getBounds().width + bob.getVelocity().x);
        }
        populateCollidableBlocks(startX, startY, endX, endY);
        bobRect.x += bob.getVelocity().x;
        world.getCollisionRects().clear();
        for (Block block : collidable) {
            if (block == null) continue;
            if (bobRect.overlaps(block.getBounds())) {
                bob.getVelocity().x = 0;
                world.getCollisionRects().add(block.getBounds());
                break;
            }
        }
        bobRect.x = bob.getPosition().x;
        startX = (int) bob.getBounds().x;
        endX = (int) (bob.getBounds().x + bob.getBounds().width);
        if (bob.getVelocity().y < 0) {
            startY = endY = (int) Math.floor(bob.getBounds().y + bob.getVelocity().y);
        } else {
            startY = endY = (int) Math.floor(bob.getBounds().y + bob.getBounds().height + bob.getVelocity().y);
        }
        populateCollidableBlocks(startX, startY, endX, endY);
        bobRect.y += bob.getVelocity().y;
        for (Block block : collidable) {
            if (block == null) continue;
            if (bobRect.overlaps(block.getBounds())) {
                if (bob.getVelocity().y < 0) {
                    grounded = true;
                }
                bob.getVelocity().y = 0;
                world.getCollisionRects().add(block.getBounds());
                break;
            }
        }
        bobRect.y = bob.getPosition().y;
        bob.getPosition().add(bob.getVelocity());
        bob.getBounds().x = bob.getPosition().x;
        bob.getBounds().y = bob.getPosition().y;
        bob.getVelocity().mul(1 / delta);
    }

    private void populateCollidableBlocks(int startX, int startY, int endX, int endY) {
        collidable.clear();
        for (int x = startX; x <= endX; x++) {
            for (int y = startY; y <= endY; y++) {
                if (x >= 0 && x < world.getLevel().getWidth() && y >= 0 && y < world.getLevel().getHeight()) {
                    collidable.add(world.getLevel().get(x, y));
                }
            }
        }
    }
    // ... code omitted ... //
}

The full source code is on github and I have tried to document it, but I will go through the important bits here.
#03 – the collidable array will hold, each frame, the blocks that are candidates for collision with Bob.
The update method is more concise now.
#07 – processing the input as usual; nothing changed there.
#08 – #09 – resets Bob’s state if he’s not in the air.
#12 – Bob’s acceleration is transformed to frame time. This is important, as a frame can be very small (usually 1/60 second) and we want to do this conversion just once per frame.
#13 – compute the velocity in frame time.
#14 – is highlighted because this is where the collision detection is happening. I’ll go through that method in a bit.
#15 – #22 – Applies the DAMP to Bob to slow him down and makes sure that Bob does not exceed his maximum velocity.
#25 – the checkCollisionWithBlocks(float delta) method, which sets Bob’s state, position and other parameters based on whether or not he collides with the blocks in the level.
#26 – transform velocity to frame time.
#27 – #28 – We use a Pool to obtain a Rectangle which is a copy of Bob’s current bounding box. This rectangle will be displaced to where Bob should be this frame and checked against the candidate blocks.
#29 – #36 – These lines identify the start and end coordinates in the level matrix that are to be checked for collision. The level matrix is just a two-dimensional array and each cell represents one unit, so it can hold one block. Check Level.java.
#31 – The Y coordinate is set, since we only look at the horizontal axis for now.
#32 – checks if Bob is heading left and if so, identifies the tile to his left. The math is straightforward, and I used this approach so that if I decide I need some other measurements for cells, this will still work.
#37 – populates the collidable array with the blocks within the range provided. In this case it is either the tile on the left or on the right, depending on Bob’s bearing. Also note that if there is no block in that cell, the result is null.
#38 – this is where we displace the copy of Bob’s bounding box. The new position of bobRect is where Bob should be in normal circumstances, but only on the X axis.
#39 – remember the collisionRects from the world, for debugging? We clear that array now so we can populate it with the rectangles that Bob is colliding with.
#40 – #47 – This is where the actual collision detection on the X axis is happening. We iterate through all the candidate blocks (in our case there will be 1) and check if the block’s bounding box intersects Bob’s displaced bounding box. We use the bobRect.overlaps method, which is part of the Rectangle class in libgdx and returns true if the two rectangles overlap. If there is an overlap, we have a collision, so we set Bob’s velocity to 0 (line #43), add the rectangle to the world’s collisionRects, and break out of the detection loop.
#48 – We reset the bounding box’s position because we are moving on to check collision on the Y axis, disregarding the X.
#49 – #68 – is exactly the same as before but on the Y axis. There is one additional instruction at #61 – #63, which sets the grounded state to true if a collision was detected while Bob was falling.
#69 – Bob’s rectangle copy is reset.
#70 – Bob’s new position is computed by adding the frame velocity to the current position.
#71 – #72 – Bob’s real bounds’ position is updated.
#73 – We transform the velocity back to the base measurement units. This is very important.

And that is all for the collision of Bob with the tiles. Of course we will evolve this as more entities are added, but for now it is as good as it gets. We cheated here a bit: in the diagram I stated that I would place Bob next to the Block when colliding, but in the code I completely ignore the repositioning. Because the distance is so tiny that we can’t even see it, it’s OK. It can be added; it won’t make much difference. If you decide to add it, make sure you set Bob’s position next to the Block, a tiny bit farther away, so the overlap function will return false. There is a small addition to the WorldRenderer.java too.

public class WorldRenderer {
    // ... code omitted ... //
    public void render() {
        spriteBatch.begin();
        drawBlocks();
        drawBob();
        spriteBatch.end();
        drawCollisionBlocks();
        if (debug)
            drawDebug();
    }

    private void drawCollisionBlocks() {
        debugRenderer.setProjectionMatrix(cam.combined);
        debugRenderer.begin(ShapeType.FilledRectangle);
        debugRenderer.setColor(new Color(1, 1, 1, 1));
        for (Rectangle rect : world.getCollisionRects()) {
            debugRenderer.filledRect(rect.x, rect.y, rect.width, rect.height);
        }
        debugRenderer.end();
    }
    // ... code omitted ... //
}

The addition is the drawCollisionBlocks() method, which draws a white box wherever a collision is happening. It’s all for your viewing pleasure. The result of the work we have put in so far should be similar to this video. This article wraps up basic collision detection.
Next we will look at extending the world, camera movement, creating enemies, using weapons, and adding sound. Please share your ideas on what should come first, as all are important. The source code for this project can be found here: https://github.com/obviam/star-assault. You need to check out the branch part4. To check it out with git: git clone -b part4 git@github.com:obviam/star-assault.git. You can also download it as a zip file. There is also a nice platformer in the libgdx tests directory, SuperKoalio. It demonstrates a lot of things I have covered so far, it’s much shorter, and for those with some libgdx experience it is very helpful.   Reference: Android Game Development with libgdx – Collision Detection, Part 4 from our JCG partner Impaler at the Against the Grain blog.

Herding Apache Pig – using Pig with Perl and Python

The past week or so we got some new data that we had to process quickly. There are quite a few technologies out there to quickly churn map/reduce jobs on Hadoop (Cascading, Hive, Crunch, Jaql to name a few of many); my personal favorite is Apache Pig. I find that the imperative nature of Pig makes it relatively easy to understand what’s going on and where the data is going, and that it produces efficient enough map/reduces. On the down side, Pig lacks control structures, so working with Pig also means you need to extend it with user defined functions (UDFs) or Hadoop streaming. Usually I use Java or Scala for writing UDFs, but it is always nice to try something new, so we decided to check out some other technologies – namely Perl and Python. This post highlights some of the pitfalls we met and how to work around them. Yuval, who was working with me on this mini-project, likes Perl (to each his own, I suppose) so we started with that. Searching for Pig and Perl examples, we found something like the following:

A = LOAD 'data';
B = STREAM A THROUGH `stream.pl`;

The first pitfall here is that the Perl script name is surrounded by backticks (the character on the tilde (~) key) and not single quotes (so in the script above 'data' is surrounded by single quotes and `stream.pl` is surrounded by backticks). The second pitfall was that the code above works nicely when you use Pig in local mode (pig -x local) but it failed when we tried to run it on the cluster. It took some head scratching and some trial and error, but eventually Yuval came up with the following:

DEFINE CMD `perl stream.pl` ship ('/PATH/stream.pl');
A = LOAD 'data';
B = STREAM A THROUGH CMD;

Basically we’re telling Pig to ship the Perl script to HDFS so that it would be accessible on all the nodes. So, Perl worked pretty well, but since we’re using Hadoop streaming and get the data via stdin we lose all the context of the data that Pig knows.
We also need to emulate the textual representation of bags and tuples so the returned data will be available to Pig for further work. This is all workable but not fun to work with (in my opinion anyway). I decided to write Pig UDFs in Python. Python can be used with Hadoop streaming, like Perl above, but it also integrates more tightly with Pig via Jython (i.e. the Python UDF is compiled into Java and ships to the cluster as part of the jar Pig generates for the map/reduce anyway). Pig UDFs are better than streaming, as you get Pig’s schema for the parameters and you can tell Pig the schema you return for your output. UDFs in Python are especially nice as the code is almost 100% regular Python and Pig does the mapping for you (for instance a bag of tuples in Pig is translated to a list of tuples in Python etc.). Actually the only difference is that if you want Pig to know about the data types you return from the Python code, you need to annotate the method with @outputSchema, e.g. a simple UDF that gets the month as an int from a date string in the format YYYY-MM-DD HH:MM:SS:

@outputSchema('num:int')
def getMonth(strDate):
    try:
        dt, _, _ = strDate.partition('.')
        return datetime.strptime(dt, '%Y-%m-%d %H:%M:%S').month
    except AttributeError:
        return 0
    except IndexError:
        return 0
    except ValueError:
        return 0

Using the UDF is as simple as declaring the Python file where the UDF is defined. Assuming our UDF is in a file called utils.py, it would be declared as follows:

Register 'utils.py' using jython as utils;

And then using that UDF would go something like:

A = LOAD 'data' using PigStorage('|') as (dateString:chararray);
B = FOREACH A GENERATE utils.getMonth(dateString) as month;

Again, like in the Perl case, there are a few pitfalls here. For one, the Python script and the Pig script need to be in the same directory (relative paths only work in local mode). The more annoying pitfall hit me when I wanted to import some Python libs (e.g. datetime in the example, which is imported using “from datetime import datetime”). There was no way I could come up with to make this work. The solution I did come up with eventually was to take a Jython standalone .jar (a jar with the common Python libraries included) and replace Pig’s Jython jar (in the Pig lib directory) with the standalone one. There’s probably a nicer way to do this (and I’d be happy to hear about it) but this worked for me. It only has to be done on the machine where you run the Pig script, as the Python code gets compiled and shipped to the cluster as part of the jar file Pig generates anyway. Working with Pig and Python has been really nice. I liked writing Pig UDFs in Python much more than writing them in Java or Scala for that matter. The two main reasons are that a lot of the Java cruft for integrating with Pig is just not there, so I can focus on solving the business problem, and that with both Pig and Python being “scripts” the feedback loop from making a change to seeing it work is much shorter. Anyway, Pig also supports JavaScript and Ruby UDFs, but these will have to wait for next time.   Reference: Herding Apache Pig – using Pig with Perl and Python from our JCG partner Arnon Rotem-Gal-Oz at the Cirrus Minor blog.
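For comparison, here is the same month-extraction logic in plain Java, roughly what the body of a Java UDF's exec method would contain. The Pig plumbing (extending EvalFunc, unwrapping the input Tuple) is deliberately omitted; that plumbing is exactly the cruft the post refers to, and the class and method names below are illustrative:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

// The same month-extraction logic as the Python UDF, in plain Java.
// A real Pig UDF would wrap this in a class extending EvalFunc<Integer>.
public class GetMonth {
    private static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    static int getMonth(String strDate) {
        if (strDate == null) return 0;
        // Strip a trailing fractional-seconds part, like the Python partition('.')
        int dot = strDate.indexOf('.');
        String dt = dot >= 0 ? strDate.substring(0, dot) : strDate;
        try {
            return LocalDateTime.parse(dt, FMT).getMonthValue();
        } catch (DateTimeParseException e) {
            return 0; // mirror the Python UDF: bad input yields 0
        }
    }

    public static void main(String[] args) {
        System.out.println(getMonth("2013-03-03 16:58:03"));     // 3
        System.out.println(getMonth("2013-12-01 00:00:00.123")); // 12
        System.out.println(getMonth("garbage"));                 // 0
    }
}
```

Even stripped of the Pig glue, the Java version needs a formatter constant, explicit null handling, and a checked-exception-style catch block, which illustrates why the Python version feels lighter.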

The Good, the Bad, and the Ugly Backlog

The product backlog is an important tool: It lists ideas, requirements, and new insights. But is it always the right tool to use? This post discusses the strengths of a traditional product backlog together with its limitations. It provides advice on when to use the backlog, and when other tools may be better suited.

The Good
A traditional product backlog lists the outstanding work necessary to create a product. This includes ideas and requirements, architectural refactoring work, and defects. I find its greatest strength to be its simplicity, which makes it incredibly flexible to use: Teams can work with the product backlog in the way that’s best for their product. Items can be described as user stories or as use cases, for instance, and different prioritisation techniques can be applied. This flexibility makes it possible to use the backlog for a wide range of products, from mobile apps to mainframe systems. The second great benefit is the backlog’s ability to support sprint and release planning. This is achieved by ordering its items from top to bottom, and by detailing the items according to their priority. Small, detailed, and prioritised items at the top are the right input for the sprint planning meeting. Having the remainder of the backlog ordered makes it possible to anticipate when the items are likely to be delivered (if a release burndown chart is also used).

The Bad
While simplicity is its greatest strength, I also find it a weakness: Personas to capture the users and customers with their needs don’t fit into a list, nor do scenarios and storyboards. The same is true for the user interface design, and operational qualities such as performance or interoperability. As a consequence, these artefacts are kept separately, for instance on a wiki or in a project management tool, or they are overlooked, in my experience. While the latter can be very problematic, the former isn’t great either: information that belongs together is stored separately.
This makes it more difficult to keep the various artefacts in sync, and it can cause inconsistencies and errors. Similarly, working with a product backlog that consists of a list makes sense when release planning is feasible and desirable. For brand-new products and major product updates, however, the backlog items have to emerge: Some will be missing initially and are discovered through stakeholder feedback; others are too sketchy or are likely to change significantly. To make things worse, a team working on a new product may not be able to estimate all product backlog items at the outset, as the team members may have to find out how best to implement the software.

The Ugly
I have seen quite a few ugly product backlogs in my work, including disguised requirements specifications with too much detail, long wish lists containing many hundreds of items, and “dessert backlogs” consisting only of a handful of loosely related stories. While that’s not the fault of the product backlog, I believe that its simplicity does not always give teams the support they need, particularly when a new product is developed.

Conclusion
A traditional, linear product backlog works best when the personas, the user interaction, the user interface design, and the operational qualities are known, and don’t have to be stated. This is usually the case for incremental product updates. For new products and major updates, however, I find that a traditional product backlog can be limiting, and I prefer to use my Product Canvas. (But the canvas would most likely be overkill for an incremental product update or a maintenance release!)   Reference: The Good, the Bad, and the Ugly Backlog from our JCG partner Roman Pichler at the Pichler’s blog blog.

Starting with WSO2 ESB by running the samples

I recently joined a new assignment where we have to implement an ESB solution based on the WSO2 tool stack. Although I am familiar with most of the concepts of an ESB and some other implementations (like Mule ESB), it is the first time I will have to work with the WSO2 ESB. Luckily there is a lot of documentation to be found, and the tool comes with a large number of samples showing how to use the ESB. In this post I show the steps I took to make the first examples work. For a more thorough explanation see the information here. The first step is to download and install the ESB, which is really easy. You can download the zip here. After downloading, simply extract it to the location where you want to install the tool. I installed it in the directory ‘/Users/pascal/develop/wso2esb-4.5.1’. After unzipping you will see the file ‘INSTALL.txt’ in the unzipped folder. This file describes how to start and stop the ESB. To work with the samples you will have to start the ESB with the command:

./wso2esb-samples.sh -sn ${sample-number}

where ‘${sample-number}’ is the sample number you want to run. So to run the ESB with sample 1, just start the ESB with the command ‘./wso2esb-samples.sh -sn 1’ and you will see logging like this: Please note that I set the log level to DEBUG for the ‘org.apache.synapse’ package by modifying the ‘log4j.properties’ that can be found in the directory ‘$CARBON_HOME/repository/conf/’. The first examples are mostly based on proxying or redirecting to existing services. These services are based on Axis2 and are also supplied with the installation of the WSO2 ESB. To start the Axis2 server, open a new terminal window, browse to the directory ‘$CARBON_HOME/samples/axis2Server’ and execute the command ‘./axis2server.sh’.
This starts the SimpleAxisServer, as can be seen in the logging in the console. If you open a browser and navigate to 'http://localhost:9000/services' you will see that we currently have no service running. To deploy a service, for example the SimpleStockQuoteService, open a new terminal and navigate to the following directory: $CARBON_HOME/samples/axis2Server/src/SimpleStockQuoteService. In here, simply issue the ‘ant’ command and the service is compiled, built and deployed to the Axis2 server we started earlier. Now if we look at 'http://localhost:9000/services' we see the SimpleStockQuoteService being available for requests. If you want, you can test the service with SoapUI. So now we have the server side ready for the first examples. Let’s run the client. Each sample has documentation which states how to run the Axis2 client. For sample 1, open up a new terminal, browse to the directory ‘/Users/pascal/develop/wso2esb-4.5.1/samples/axis2Client’ and supply the stated command to run the client. As you see, we received a quote for the stock IBM. In the terminal running the WSO2 ESB we see the following logging caused by the incoming request:

[2013-03-03 16:58:03,113] DEBUG - SynapseMessageReceiver Synapse received a new message for message mediation...
[2013-03-03 16:58:03,114] DEBUG - SynapseMessageReceiver Received To: /services/StockQuote
[2013-03-03 16:58:03,114] DEBUG - SynapseMessageReceiver SOAPAction: urn:getQuote
[2013-03-03 16:58:03,114] DEBUG - SynapseMessageReceiver WSA-Action: urn:getQuote
[2013-03-03 16:58:03,114] DEBUG - Axis2SynapseEnvironment Injecting MessageContext
[2013-03-03 16:58:03,115] DEBUG - Axis2SynapseEnvironment Using Main Sequence for injected message
[2013-03-03 16:58:03,115] DEBUG - SequenceMediator Start : Sequence <main>
[2013-03-03 16:58:03,115] DEBUG - SequenceMediator Sequence <SequenceMediator> :: mediate()
[2013-03-03 16:58:03,115] DEBUG - InMediator Start : In mediator
[2013-03-03 16:58:03,115] DEBUG - InMediator Current message is incoming - executing child mediators
[2013-03-03 16:58:03,115] DEBUG - InMediator Sequence <InMediator> :: mediate()
[2013-03-03 16:58:03,115] DEBUG - FilterMediator Start : Filter mediator
[2013-03-03 16:58:03,116] DEBUG - FilterMediator Source : get-property('To') against : .*/StockQuote.* matches - executing child mediators
[2013-03-03 16:58:03,116] DEBUG - FilterMediator Sequence <FilterMediator> :: mediate()
[2013-03-03 16:58:03,116] DEBUG - SendMediator Start : Send mediator
[2013-03-03 16:58:03,116] DEBUG - EndpointContext Checking if endpoint : endpoint_d06dafda969c8e11992a8e8f5366fd33203399eaae89984c currently at state ACTIVE can be used now?
[2013-03-03 16:58:03,116] DEBUG - AddressEndpoint Sending message through endpoint : endpoint_d06dafda969c8e11992a8e8f5366fd33203399eaae89984c resolving to address = http://localhost:9000/services/SimpleStockQuoteService
[2013-03-03 16:58:03,116] DEBUG - AddressEndpoint SOAPAction: urn:getQuote
[2013-03-03 16:58:03,116] DEBUG - AddressEndpoint WSA-Action: urn:getQuote
[2013-03-03 16:58:03,117] DEBUG - Axis2FlexibleMEPClient Sending [add = false] [sec = false] [rm = false] [mtom = false] [swa = false] [format = null] [force soap11=false] [force soap12=false] [pox=false] [get=false] [encoding=null] [to=http://localhost:9000/services/SimpleStockQuoteService]
[2013-03-03 16:58:03,117] DEBUG - Axis2FlexibleMEPClient Message [Original Request Message ID : urn:uuid:381071f8-c6ca-4e14-bfe4-0e53de7f7cc2] [New Cloned Request Message ID : urn:uuid:503aabca-2a04-413f-ba44-b6c623bf9db6]
[2013-03-03 16:58:03,117] DEBUG - SynapseCallbackReceiver Callback added. Total callbacks waiting for : 1
[2013-03-03 16:58:03,119] DEBUG - SendMediator End : Send mediator
[2013-03-03 16:58:03,120] DEBUG - DropMediator Start : Drop mediator
[2013-03-03 16:58:03,120] DEBUG - DropMediator End : Drop mediator
[2013-03-03 16:58:03,120] DEBUG - FilterMediator End : Filter mediator
[2013-03-03 16:58:03,120] DEBUG - InMediator End : In mediator
[2013-03-03 16:58:03,120] DEBUG - SequenceMediator End : Sequence <main>
[2013-03-03 16:58:03,130] DEBUG - SynapseCallbackReceiver Callback removed for request message id : urn:uuid:503aabca-2a04-413f-ba44-b6c623bf9db6. Pending callbacks count : 0
[2013-03-03 16:58:03,132] DEBUG - SynapseCallbackReceiver Synapse received an asynchronous response message
[2013-03-03 16:58:03,133] DEBUG - SynapseCallbackReceiver Received To: null
[2013-03-03 16:58:03,133] DEBUG - SynapseCallbackReceiver SOAPAction:
[2013-03-03 16:58:03,133] DEBUG - SynapseCallbackReceiver WSA-Action:
[2013-03-03 16:58:03,134] DEBUG - SynapseCallbackReceiver Body : <?xml version='1.0' encoding='utf-8'?><soapenv:Envelope xmlns:soapenv='http://schemas.xmlsoap.org/soap/envelope/'><soapenv:Body><ns:getQuoteResponse xmlns:ns='http://services.samples'><ns:return xmlns:ax21='http://services.samples/xsd' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xsi:type='ax21:GetQuoteResponse'><ax21:change>4.155006565416989</ax21:change><ax21:earnings>-8.118428276798063</ax21:earnings><ax21:high>84.31871512134151</ax21:high><ax21:last>80.49639852693167</ax21:last><ax21:lastTradeTimestamp>Sun Mar 03 16:58:03 CET 2013</ax21:lastTradeTimestamp><ax21:low>-79.94179251359746</ax21:low><ax21:marketCap>-6855110.427635921</ax21:marketCap><ax21:name>IBM Company</ax21:name><ax21:open>83.68193468970854</ax21:open><ax21:peRatio>23.68015151185118</ax21:peRatio><ax21:percentageChange>-5.440098733913549</ax21:percentageChange><ax21:prevClose>-76.37741093768571</ax21:prevClose><ax21:symbol>IBM</ax21:symbol><ax21:volume>16034</ax21:volume></ns:return></ns:getQuoteResponse></soapenv:Body></soapenv:Envelope>
[2013-03-03 16:58:03,134] DEBUG - Axis2SynapseEnvironment Injecting MessageContext
[2013-03-03 16:58:03,135] DEBUG - Axis2SynapseEnvironment Using Main Sequence for injected message
[2013-03-03 16:58:03,135] DEBUG - SequenceMediator Start : Sequence <main>
[2013-03-03 16:58:03,135] DEBUG - SequenceMediator Sequence <SequenceMediator> :: mediate()
[2013-03-03 16:58:03,135] DEBUG - InMediator Start : In mediator
[2013-03-03 16:58:03,135] DEBUG - InMediator Current message is a response - skipping child mediators
[2013-03-03 16:58:03,135] DEBUG - InMediator End : In mediator
[2013-03-03 16:58:03,135] DEBUG - OutMediator Start : Out mediator
[2013-03-03 16:58:03,136] DEBUG - OutMediator Current message is outgoing - executing child mediators
[2013-03-03 16:58:03,136] DEBUG - OutMediator Sequence <OutMediator> :: mediate()
[2013-03-03 16:58:03,136] DEBUG - SendMediator Start : Send mediator
[2013-03-03 16:58:03,136] DEBUG - SendMediator Sending response message using implicit message properties.. Sending To: http://www.w3.org/2005/08/addressing/anonymous SOAPAction:
[2013-03-03 16:58:03,138] DEBUG - SendMediator End : Send mediator
[2013-03-03 16:58:03,138] DEBUG - OutMediator End : Out mediator
[2013-03-03 16:58:03,138] DEBUG - SequenceMediator End : Sequence <main>

And finally, in the Axis2 server terminal, we see the following line added to the logging because of the request which is passed to port 9000 by the sample flow:

Sun Mar 03 16:58:03 CET 2013 samples.services.SimpleStockQuoteService :: Generating quote for : IBM

This shows that the sample is running correctly and all expected actions take place. The next step will be going through all the other examples to get a better understanding of the possibilities of the WSO2 ESB.   Reference: Starting with WSO2 ESB by running the samples from our JCG partner Pascal Alma at the The Pragmatic Integrator blog.
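The mediation visible in the log can be read back into a Synapse configuration. The sketch below is reconstructed from the mediators the log actually shows (main sequence, in/filter/send/drop on the request path, send on the response path); consult the synapse_sample_1.xml shipped in the distribution for the authoritative version:

```xml
<!-- Reconstructed sketch of what sample 1 does, based on the mediators in the
     log above. The shipped synapse_sample_1.xml is the authoritative source. -->
<definitions xmlns="http://ws.apache.org/ns/synapse">
  <sequence name="main">
    <in>
      <!-- Requests whose To address matches the StockQuote pattern are
           forwarded to the sample Axis2 service on port 9000. -->
      <filter source="get-property('To')" regex=".*/StockQuote.*">
        <send>
          <endpoint>
            <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
          </endpoint>
        </send>
        <drop/>
      </filter>
    </in>
    <out>
      <!-- Responses flow through the out path and are sent back to the client. -->
      <send/>
    </out>
  </sequence>
</definitions>
```

Each DEBUG line in the log maps onto one element here: the FilterMediator entry corresponds to the filter's source/regex pair, and the AddressEndpoint entry to the address URI.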

11 Agile Myths and 2 Truths

I deliver a lot of Agile training courses and I give a lot of talks about Agile (BCS Bristol tonight). There are some questions that come up again and again which are the result of myths people have come to believe about Agile. Consequently I spend my time debunking these myths again and again. I’ve been keeping a little list, and there are 11 recurring myths. There are also two truths which are a bit more difficult for teams and companies to accept.

Agile Myths

1. Agile is new: No, the Agile manifesto was published in 2001, the Scrum pattern language was workshopped at PLoP 1998, the Episodes pattern language (the forerunner of XP) was workshopped at PLoP 1995, Tom Gilb’s Evo method dates back to 1976, and there are some who trace things back further.
2. Agile means No Documentation: You can have as much documentation as you like in Agile. Documentation is just another deliverable; if it brings you value then schedule it and produce it like anything else. Please be aware: documentation is often unread, often fails to communicate, is used as a defensive tool, and is typically the second most expensive thing on a large software project (after rework).
3. Agile means No Design: No, Agile probably means MORE design. Design is inherent all the way through development, at every planning meeting and more. Agile does mean the end of big up-front design which is invalidated five minutes after someone starts coding.
4. Agile means No Planning: No, again, Agile probably has more planning. Again, planning is spread out through the whole development exercise rather than concentrated at the front, and it is the work of everybody rather than one or two anointed individuals.
5. Developers get to do what they like: No, if this is true for you then you are doing it wrong, please call me. Agile needs more discipline from the team, and what gets done should be led from a specific role usually designated the Customer or Product Owner and usually played by a Product Manager or Business Analyst. If developers are doing what they like then there is a failure in this role.
6. There is a right size for a User Story: There is no right size for a user story. Every team is different, get over it.
7. Work must fit in a Sprint: If you are doing Hard Core Scrum then Yes. If you are playing Agile the way I do (which I now call Xanpan) then No. In fact I advise letting stories span sprints in order to improve flow. You can have stories spanning sprints, but we won’t let them continue for ever and we will try to break them down into smaller pieces of work.
8. Scrum and Kanban are sworn enemies: No, but the marketing efforts behind each can get a lot of eyeballs by playing it that way. Xanpan is a Kanban/XP hybrid, and XP isn’t that different to Scrum, so there you go.
9. Agile doesn’t work for fixed deadline projects: No, Agile works best in fixed deadline project environments.
10. Agile doesn’t work on Brownfield projects: No, Agile works best in brownfield environments. Granted, retrofitting automated unit tests is harder, but it is far from insurmountable.
11. Agile doesn’t work on Greenfield projects: No, but your first objective is to get yourself to a steady state where you can think like a brownfield project.

To my mind the ideal project to start an Agile initiative with is a brownfield system with a fixed deadline in about 3 to 6 months, where development has started but requirements are still unclear.

Now two truths about Agile:

1. Agile will not work for us because… (complete this sentence for yourself)
2. Agile is a good idea but… we should wait until we have finished X, got Y to buy in, bought Z and the (new) Pope has given it his blessing.

You can always talk yourself out of it or find a good reason for not doing it today.   Reference: 11 Agile Myths and 2 Truths from our JCG partner Allan Kelly at the allan’s blog – Agile, Lean, Patterns blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.