
Testing Custom Exceptions with JUnit’s ExpectedException and @Rule

Exception Testing

Why test exception flows? Just like with all of your code, test coverage writes a contract between your code and the business functionality that the code is supposed to produce, leaving you with living documentation of the code along with the added ability to stress the functionality early and often. I won't go into the many benefits of testing; instead I will focus on just exception testing. There are many ways to test an exception flow thrown from a piece of code. Let's say that you have a guarded method that requires an argument to be not null. How would you test that condition? How do you keep JUnit from reporting a failure when the exception is thrown? This blog covers a few different methods, culminating with JUnit's ExpectedException implemented with JUnit's @Rule functionality.

The 'old' way

In a not so distant past, the process to test an exception required a dense amount of boilerplate code in which you would start a try/catch block, report a failure if your code did not produce the expected behavior, and then catch the exception looking for the specific type. Here is an example:

public class MyObjTest {

    @Test
    public void getNameWithNullValue() {
        try {
            MyObj obj = new MyObj();
            obj.setName(null);
            fail("This should have thrown an exception");
        } catch (IllegalArgumentException e) {
            assertEquals("Name must not be null", e.getMessage());
        }
    }
}

As you can see from this old example, many of the lines in the test case are just there to compensate for the lack of functionality to specifically test exception handling. One good point to make for the try/catch method is the ability to test the specific message and any custom fields on the expected exception. We will explore this a bit further down with JUnit's ExpectedException and @Rule annotation.

JUnit adds expected exceptions

JUnit responded to the users' need for exception handling by adding a @Test annotation field, 'expected'. The intention is that the entire test case will pass if the type of exception thrown matches the exception class present in the annotation.

public class MyObjTest {

    @Test(expected = IllegalArgumentException.class)
    public void getNameWithNullValue() {
        MyObj obj = new MyObj();
        obj.setName(null);
    }
}

As you can see from the newer example, there is quite a bit less boilerplate code and the test is very concise. However, there are a few flaws. The main flaw is that the test condition is too broad. Suppose you have two variables in a signature and both cannot be null: how do you know which variable the IllegalArgumentException was thrown for? What happens when you have extended a Throwable and need to check for the presence of a field? Keep these in mind as you read further; solutions will follow.

JUnit @Rule and ExpectedException

If you look at the previous example you might see that you are expecting an IllegalArgumentException to be thrown, but what if you have a custom exception? What if you want to make sure that the message contains a specific error code or message? This is where JUnit really excelled, by providing a JUnit @Rule object specifically tailored to exception testing. If you are unfamiliar with JUnit @Rule, read the docs here.

ExpectedException

JUnit provides a JUnit class ExpectedException intended to be used as a @Rule. The ExpectedException allows your test to declare that an exception is expected and gives you some basic built-in functionality to clearly express the expected behavior.
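The MyObj class under test is never shown in the original post; a minimal sketch that would satisfy the tests above and below might look like this (the class name and message come from the tests, the rest is an assumption):

public class MyObj {

    private String name;

    // Guarded setter: rejects null, as the tests expect
    public void setName(String name) {
        if (name == null) {
            throw new IllegalArgumentException("Name must not be null");
        }
        this.name = name;
    }
}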
Unlike the @Test(expected) annotation feature, the ExpectedException class allows you to test for specific error messages and custom fields via the Hamcrest matchers library. An example of JUnit's ExpectedException:

import org.junit.rules.ExpectedException;

public class MyObjTest {

    @Rule
    public ExpectedException thrown = ExpectedException.none();

    @Test
    public void getNameWithNullValue() {
        thrown.expect(IllegalArgumentException.class);
        thrown.expectMessage("Name must not be null");

        MyObj obj = new MyObj();
        obj.setName(null);
    }
}

As I alluded to above, the framework allows you to test for specific messages, ensuring that the exception being thrown is the case that the test is specifically looking for. This is very helpful when the nullability of multiple arguments is in question.

Custom Fields

Arguably the most useful feature of the ExpectedException framework is the ability to use Hamcrest matchers to test your custom/extended exceptions. For example, you have a custom/extended exception that is thrown in a method, and the exception carries an 'errorCode'. How do you test that functionality without introducing the boilerplate code from the try/catch block listed above? How about a custom Matcher! This code is available at: https://github.com/mike-ensor/custom-exception-testing

Solution: First the test case

import org.junit.rules.ExpectedException;

public class MyObjTest {

    @Rule
    public ExpectedException thrown = ExpectedException.none();

    @Test
    public void someMethodThatThrowsCustomException() {
        thrown.expect(CustomException.class);
        thrown.expect(CustomMatcher.hasCode("110501"));

        MyObj obj = new MyObj();
        obj.methodThatThrowsCustomException();
    }
}

Solution: Custom matcher

import com.thepixlounge.exceptions.CustomException;
import org.hamcrest.Description;
import org.hamcrest.TypeSafeMatcher;

public class CustomMatcher extends TypeSafeMatcher<CustomException> {

    public static CustomMatcher hasCode(String item) {
        return new CustomMatcher(item);
    }

    private String foundErrorCode;
    private final String expectedErrorCode;

    private CustomMatcher(String expectedErrorCode) {
        this.expectedErrorCode = expectedErrorCode;
    }

    @Override
    protected boolean matchesSafely(final CustomException exception) {
        foundErrorCode = exception.getErrorCode();
        return foundErrorCode.equalsIgnoreCase(expectedErrorCode);
    }

    @Override
    public void describeTo(Description description) {
        description.appendValue(foundErrorCode)
                .appendText(" was not found instead of ")
                .appendValue(expectedErrorCode);
    }
}

NOTE: Please visit https://github.com/mike-ensor/custom-exception-testing to get a copy of a working Hamcrest Matcher, JUnit @Rule and ExpectedException.

And there you have it: a quick overview of different ways to test exceptions thrown by your code, along with the ability to test for specific messages and fields from within custom exception classes. Please be specific with your test cases and try to target the exact case you have set up for your test. Remember, tests can save you from introducing side-effect bugs! Happy coding and don't forget to share!

Reference: Testing Custom Exceptions w/ JUnit's ExpectedException and @Rule from our JCG partner Mike at the Mike's site blog.

Log4j Thread Deadlock – A Case Study

This case study describes the complete root cause analysis and resolution of an Apache Log4j thread race problem affecting a Weblogic Portal 10.0 production environment. It will also demonstrate the importance of proper Java classloader knowledge when developing and supporting Java EE applications. This article is also another opportunity for you to improve your thread dump analysis skills and understand thread race conditions.

Environment specifications

Java EE server: Oracle Weblogic Portal 10.0
OS: Solaris 10
JDK: Oracle/Sun HotSpot JVM 1.5
Logging API: Apache Log4j 1.2.15
RDBMS: Oracle 10g
Platform type: Web Portal

Troubleshooting tools

Quest Foglight for Java (monitoring and alerting)
Java VM Thread Dump (thread race analysis)

Problem overview

Major performance degradation was observed from one of our Weblogic Portal production environments. Alerts were also sent from the Foglight agents indicating a significant surge in Weblogic threads utilization, up to the default upper limit of 400.

Gathering and validation of facts

As usual, a Java EE problem investigation requires gathering of technical and non-technical facts so we can either derive other facts and/or conclude on the root cause. Before applying a corrective measure, the facts below were verified in order to conclude on the root cause:

What is the client impact? HIGH
Recent change of the affected platform? Yes, a recent deployment was performed involving minor content changes and some Java library changes & refactoring
Any recent traffic increase to the affected platform? No
How long has this problem been observed? It is a new problem, observed following the deployment
Did a restart of the Weblogic server resolve the problem? No, any restart attempt resulted in an immediate surge of threads
Did a rollback of the deployment changes resolve the problem? Yes

Conclusion #1: The problem appears to be related to the recent changes. However, the team was initially unable to pinpoint the root cause. This is what we will discuss for the rest of the article.

Weblogic hogging thread report

The initial thread surge problem was reported by Foglight. As you can see below, the threads utilization was significant (up to 400), leading to a high volume of pending client requests and ultimately major performance degradation. [Foglight threads utilization chart omitted]

As usual, thread problems require proper thread dump analysis in order to pinpoint the source of thread contention. Lack of this critical analysis skill will prevent you from going any further in the root cause analysis. For our case study, a few thread dump snapshots were generated from our Weblogic servers using the simple Solaris OS command kill -3 <Java PID>. Thread dump data was then extracted from the Weblogic standard output log files.

Thread Dump analysis

The first step of the analysis was to perform a fast scan of all stuck threads and pinpoint a problem "pattern".
We found 250 threads stuck in the following execution path:

"[ACTIVE] ExecuteThread: '20' for queue: 'weblogic.kernel.Default (self-tuning)'" daemon prio=10 tid=0x03c4fc38 nid=0xe6 waiting for monitor entry [0x3f99e000..0x3f99f970]
    at org.apache.log4j.Category.callAppenders(Category.java:186)
    - waiting to lock <0x8b3c4c68> (a org.apache.log4j.spi.RootCategory)
    at org.apache.log4j.Category.forcedLog(Category.java:372)
    at org.apache.log4j.Category.log(Category.java:864)
    at org.apache.commons.logging.impl.Log4JLogger.debug(Log4JLogger.java:110)
    at org.apache.beehive.netui.util.logging.Logger.debug(Logger.java:119)
    at org.apache.beehive.netui.pageflow.DefaultPageFlowEventReporter.beginPageRequest(DefaultPageFlowEventReporter.java:164)
    at com.bea.wlw.netui.pageflow.internal.WeblogicPageFlowEventReporter.beginPageRequest(WeblogicPageFlowEventReporter.java:248)
    at org.apache.beehive.netui.pageflow.PageFlowPageFilter.doFilter(PageFlowPageFilter.java:154)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
    at com.bea.p13n.servlets.PortalServletFilter.doFilter(PortalServletFilter.java:336)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
    at weblogic.servlet.internal.RequestDispatcherImpl.invokeServlet(RequestDispatcherImpl.java:526)
    at weblogic.servlet.internal.RequestDispatcherImpl.forward(RequestDispatcherImpl.java:261)
    at <App>.AppRedirectFilter.doFilter(RedirectFilter.java:83)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
    at <App>.AppServletFilter.doFilter(PortalServletFilter.java:336)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3393)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(Unknown Source)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2140)
    at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2046)
    at weblogic.servlet.internal.ServletRequestImpl.run(Unknown Source)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:200)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:172)

As you can see, it appears that all the threads are waiting to acquire a lock on an Apache Log4j object monitor (org.apache.log4j.spi.RootCategory) when attempting to log debug information to the configured appender and log file. How did we figure that out from this thread stack trace? Let's dissect this thread stack trace in order for you to better understand this thread race condition, e.g. 250 threads attempting to acquire the same object monitor concurrently.

At this point the main question is: why are we seeing this problem suddenly? An increase of the logging level or load was also ruled out at this point after proper verification. The fact that the rollback of the previous changes did fix the problem naturally led us to perform a deeper review of the promoted changes. Before we go to the final root cause section, we will perform a code review of the affected Log4j code, e.g. the code exposed to thread race conditions.

Apache Log4j 1.2.15 code review

## org.apache.log4j.Category

/**
 * Call the appenders in the hierarchy starting at <code>this</code>. If no
 * appenders could be found, emit a warning.
 *
 * <p>
 * This method calls all the appenders inherited from the hierarchy
 * circumventing any evaluation of whether to log or not to log the
 * particular log request.
 *
 * @param event
 *            the event to log.
 */
public void callAppenders(LoggingEvent event) {
    int writes = 0;

    for (Category c = this; c != null; c = c.parent) {
        // Protected against simultaneous call to addAppender,
        // removeAppender,...
        synchronized (c) {
            if (c.aai != null) {
                writes += c.aai.appendLoopOnAppenders(event);
            }
            if (!c.additive) {
                break;
            }
        }
    }

    if (writes == 0) {
        repository.emitNoAppenderWarning(this);
    }
}

As you can see, Category.callAppenders() uses a synchronized block at the Category level, which can lead to a severe thread race condition under heavy concurrent load. In this scenario, the usage of a re-entrant read/write lock would have been more appropriate (such a lock strategy allows concurrent 'readers' but only a single 'writer'). You can find a reference to this known Apache Log4j limitation below, along with some possible solutions: https://issues.apache.org/bugzilla/show_bug.cgi?id=41214
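To illustrate the suggested alternative, here is a rough sketch of that locking strategy — not the actual Log4j code, just a simplified stand-in class — showing how a re-entrant read/write lock would let concurrent logging threads traverse the appender list while still serializing structural changes such as addAppender():

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwLockedCategory {

    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final List<Object> appenders = new ArrayList<Object>();

    // Many threads may log concurrently: they only need the shared read lock.
    public void callAppenders(Object event) {
        lock.readLock().lock();
        try {
            for (Object appender : appenders) {
                // append the event to each appender
            }
        } finally {
            lock.readLock().unlock();
        }
    }

    // Structural changes (addAppender/removeAppender) remain exclusive.
    public void addAppender(Object appender) {
        lock.writeLock().lock();
        try {
            appenders.add(appender);
        } finally {
            lock.writeLock().unlock();
        }
    }
}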
Is the above Log4j behaviour the actual root cause of our problem? Not so fast... Let's remember that this problem got exposed only following a recent deployment. The real question is: what application change triggered this problem and side effect from the Apache Log4j logging API?

Root cause: a perfect storm!

Deep-dive analysis of the recently deployed changes revealed that some Log4j libraries at the child classloader level were removed, along with the associated 'child first' policy. This refactoring exercise ended up moving the delegation of both Commons Logging and Log4j to the parent classloader level. What is the problem? Before this change, the logging events were split between Weblogic Beehive Log4j calls at the parent classloader and web application logging events at the child classloader. Since each classloader had its own copy of the Log4j objects, the thread race condition problem was split in half and not exposed (masked) under the current load conditions. Following the refactoring, all Log4j calls were moved to the parent classloader (Java EE app), adding a significant concurrency level to the Log4j components such as Category. This increased concurrency level, along with the known Category.java thread race / deadlock behaviour, was a perfect storm for our production environment.

In order to mitigate this problem, 2 immediate solutions were applied to the environment:

Rollback the refactoring and split Log4j calls back between the parent and child classloaders.
Reduce the logging level for some appenders from DEBUG to WARNING.

This problem case again reinforces the importance of performing proper testing and impact assessment when applying changes such as library and classloader related changes. Such changes can appear simple at the 'surface' but can trigger some deep execution pattern changes, exposing your application(s) to known thread race conditions. A future upgrade to Apache Log4j 2 (or other logging APIs) will also be explored, as it is expected to bring some performance enhancements which may address some of these thread race & scalability concerns.

Please provide any comments or share your experience on thread race related problems with logging APIs. Happy coding and don't forget to share!

Reference: Log4j Thread Deadlock – A Case Study from our JCG partner Pierre-Hugues Charbonneau at the Java EE Support Patterns & Java Tutorial blog.

JUnit Pass Test Case on Failures

Why create a mechanism to expect a test failure?

There comes a time when one would want and expect a JUnit @Test case to fail. Though this is pretty rare, it happens. I had the need to detect when a JUnit test fails and then, if expected, to pass instead of fail. The specific case was that I was testing a piece of code that could throw an Assert error inside of a call of the object. The code was written to be an enhancement to the popular new Fest Assertions framework, so in order to test the functionality, one would expect test cases to fail on purpose.

A Solution

One possible solution is to utilize the functionality provided by a JUnit @Rule in conjunction with a custom marker in the form of an annotation. Why use a @Rule? @Rule objects provide an AOP-like interface to a test class and each test case. Rules are reset prior to each test case being run, and they expose the workings of the test case in the style of an @Around AspectJ advice.

Required code elements

@Rule object to check the status of each @Test case
@ExpectedFailure custom marker annotation
Test cases proving the code works!
Optional specific exception to be thrown if an annotated test case does not fail

NOTE: working code is available on my github page and has been added to Maven Central. Feel free to fork the project and submit a pull request.

Maven Usage

<dependency>
    <groupId>com.clickconcepts.junit</groupId>
    <artifactId>expected-failure</artifactId>
    <version>0.0.9</version>
</dependency>

Example Usage

In this example, the 'exception' object is a Fest assertion enhanced ExpectedException (look for my next post to expose this functionality). The expected exception will make assertions, and in order to test those, the test case must be marked as @ExpectedFailure.

public class ExceptionAssertTest {

    @Rule
    public ExpectedException exception = ExpectedException.none();

    @Rule
    public ExpectedTestFailureWatcher watcher = ExpectedTestFailureWatcher.instance();

    @Test
    @ExpectedFailure(reason = "The matcher should fail because exception is not a SimpleException")
    public void assertSimpleExceptionAssert_exceptionIsOfType() {
        // expected exception will be of type "SimpleException"
        exception.instanceOf(SimpleException.class);
        // throw something other than SimpleException...expect failure
        throw new RuntimeException("this is an exception");
    }
}
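The SimpleException type referenced above is not shown in the post; presumably it is just a trivial custom exception, along the lines of this sketch:

public class SimpleException extends RuntimeException {
    public SimpleException(String message) {
        super(message);
    }
}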
Implementation of Solution

Reminder: the latest code is available on my github page.

@Rule code (ExpectedTestFailureWatcher.java)

import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;
// YEAH Guava!!
import static com.google.common.base.Strings.isNullOrEmpty;

public class ExpectedTestFailureWatcher implements TestRule {

    /**
     * Static factory to an instance of this watcher
     *
     * @return New instance of this watcher
     */
    public static ExpectedTestFailureWatcher instance() {
        return new ExpectedTestFailureWatcher();
    }

    @Override
    public Statement apply(final Statement base, final Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                boolean expectedToFail = description.getAnnotation(ExpectedFailure.class) != null;
                boolean failed = false;
                try {
                    // allow test case to execute
                    base.evaluate();
                } catch (Throwable exception) {
                    failed = true;
                    if (!expectedToFail) {
                        throw exception; // did not expect to fail and failed...fail
                    }
                }
                // placed outside of catch
                if (expectedToFail && !failed) {
                    throw new ExpectedTestFailureException(getUnFulfilledFailedMessage(description));
                }
            }

            /**
             * Extracts detailed message about why test failed
             * @param description
             * @return
             */
            private String getUnFulfilledFailedMessage(Description description) {
                String reason = null;
                if (description.getAnnotation(ExpectedFailure.class) != null) {
                    reason = description.getAnnotation(ExpectedFailure.class).reason();
                }
                if (isNullOrEmpty(reason)) {
                    reason = "Should have failed but didn't";
                }
                return reason;
            }
        };
    }
}

@ExpectedFailure custom annotation (ExpectedFailure.java)

import java.lang.annotation.*;

/**
 * Initially this is just a marker annotation to be used by a JUnit4 test case in conjunction
 * with the ExpectedTestFailure @Rule to indicate that a test is supposed to be failing
 */
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target(value = ElementType.METHOD)
public @interface ExpectedFailure {
    // TODO: enhance by adding specific information about what type of failure expected
    //Class assertType() default Throwable.class;

    /**
     * Text based reason for marking test as ExpectedFailure
     * @return String
     */
    String reason() default "";
}

Custom Exception (optional, you can easily just throw RuntimeException or an existing custom exception)

public class ExpectedTestFailureException extends Throwable {
    public ExpectedTestFailureException(String message) {
        super(message);
    }
}

Can't one exploit the ability to mark a failure as expected? With great power comes great responsibility. It is advised that you do not mark a test as @ExpectedFailure if you do not understand exactly why the test is failing. It is recommended that this testing method be implemented with care. DO NOT use the @ExpectedFailure annotation as an alternative to @Ignore. Possible future enhancements could include ways to specify the specific assertion or the specific message asserted during the test case execution.

Known issues

In its current state, the @ExpectedFailure annotation can cover up additional assertions, and until the future enhancements have been put into place, it is advised to use this methodology wisely.

Reference: Allowing JUnit Tests to Pass Test Case on Failures from our JCG partner Mike at the Mike's site blog.

Does Immutability Really Mean Thread Safety?

I have often read articles claiming 'If an object is immutable, it is thread safe'. Actually, I have never found an article that convinces me that immutable means thread safe. Even the book by Brian Goetz, Java Concurrency in Practice, with its chapter on immutability, did not fully satisfy me. In this book we can read, word for word, in a frame: Immutable objects are always thread-safe. I think this sentence deserves more explanation. So I am going to try to define immutability and its relation to thread safety.

Definitions

Immutability

My definition is: 'An immutable object is an object whose state does not change after its construction'. I am deliberately vague, since no one really agrees on the exact definitions.

Thread safety

You can find a lot of different definitions of 'thread safe' on the internet. It's actually very tricky to define. I would say that thread safe code is code which has an expected behaviour in a multi-threaded environment. I let you define 'expected behaviour'...

The String example

Let's have a look at the code of String (actually just a part of the code...):

public class String {

    private final char value[];

    /** Cache the hash code for the string */
    private int hash; // Default to 0

    public String(char[] value) {
        this.value = Arrays.copyOf(value, value.length);
    }

    public int hashCode() {
        int h = hash;
        if (h == 0 && value.length > 0) {
            char val[] = value;

            for (int i = 0; i < value.length; i++) {
                h = 31 * h + val[i];
            }
            hash = h;
        }
        return h;
    }
}

String is considered immutable. Looking at its implementation, we can deduce one thing: an immutable object can change its internal state (in this case, the hashcode, which is lazily computed) as long as the change is not externally visible. Now I am going to rewrite the hashCode method in a non thread-safe way:

public int hashCode() {
    if (hash == 0 && value.length > 0) {
        char val[] = value;

        for (int i = 0; i < value.length; i++) {
            hash = 31 * hash + val[i];
        }
    }
    return hash;
}

As you can see, I have removed the local variable h and assigned the field hash directly instead. This implementation is NOT thread safe! If several threads call hashCode at the same time, the returned value could be different for each thread. The question is: is this class immutable? Since two different threads can see a different hashcode, from an external point of view we have a change of state, and so it is not immutable. We can thus conclude that String is immutable because it is thread safe, and not the opposite. So... what's the point of saying 'Make immutable objects, they are thread-safe! But take care, you have to make your immutable objects thread-safe!'?

The ImmutableSimpleDateFormat example

Below, I have written a class similar to SimpleDateFormat:

public class VerySimpleDateFormat {

    private final DateFormat formatter = SimpleDateFormat.getDateInstance(SimpleDateFormat.SHORT);

    public String format(Date d) {
        return formatter.format(d);
    }
}

This code is not thread safe because SimpleDateFormat.format is not. Is this object immutable? Good question! We have done our best to make all fields non-modifiable; we don't use any setter or any method that suggests the state of the object will change. Actually, SimpleDateFormat changes its internal state, and that's what makes it not thread safe. Since something changes in the object graph, I would say that it's not immutable, even if it looks like it... The problem is not even that SimpleDateFormat changes its internal state; the problem is that it does so in a non thread-safe way.
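As a side note (not part of the original article, just a sketch of one well-known approach): a common way to keep this convenient API while fixing the thread safety is to give each thread its own DateFormat instance via ThreadLocal:

import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;

public class ThreadSafeVerySimpleDateFormat {

    // Each thread gets its own DateFormat, so no shared mutable state is touched.
    private final ThreadLocal<DateFormat> formatter = new ThreadLocal<DateFormat>() {
        @Override
        protected DateFormat initialValue() {
            return SimpleDateFormat.getDateInstance(SimpleDateFormat.SHORT);
        }
    };

    public String format(Date d) {
        return formatter.get().format(d);
    }
}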
The conclusion of this example: it is not that easy to make an immutable class. The final keyword is not enough; you have to make sure that the object fields of your object don't change their state, which is sometimes impossible.

Immutable objects can have non thread-safe methods (no magic!)

Let's have a look at the following code:

public class HelloAppender {

    private final String greeting;

    public HelloAppender(String name) {
        this.greeting = "hello " + name + "!\n";
    }

    public void appendTo(Appendable app) throws IOException {
        app.append(greeting);
    }
}

The class HelloAppender is definitely immutable. The method appendTo accepts an Appendable. Since an Appendable has no guarantee of being thread-safe (e.g. StringBuilder), appending to this Appendable will cause problems in a multi-threaded environment.

Conclusion

Making immutable objects is definitely a good practice in some cases, and it helps a lot in writing thread-safe code. But it bothers me when I read everywhere Immutable objects are thread safe, presented as an axiom. I get the point, but I think it is always good to think a bit about it in order to understand what causes non thread-safe code. Thanks to the comment of Jose, I end this article with a different conclusion. It's all about the definition of immutable. It needs clarification! An object is immutable if:

All its fields are initialized before being used (which means you can do lazy initialization).
The state of the fields does not change after their initialization ('does not change' means that the object graph doesn't change, not even the internal state of the children).

An immutable object will always be thread-safe, unless it has to manipulate non thread-safe objects.

Reference: Do Immutability really means Thread Safety? from our JCG partner Tibo Delor at the InvalidCodeException blog.

MapReduce: Working Through Data-Intensive Text Processing

It has been a while since I last posted, as I've been busy with some of the classes offered by Coursera. There are some very interesting offerings, and they are worth a look. Some time ago, I purchased Data-Intensive Processing with MapReduce by Jimmy Lin and Chris Dyer. The book presents several key MapReduce algorithms, but in pseudo-code format. My goal is to take the algorithms presented in chapters 3-6 and implement them in Hadoop, using Hadoop: The Definitive Guide by Tom White as a reference. I'm going to assume familiarity with Hadoop and MapReduce and not cover any introductory material. So let's jump into chapter 3 – MapReduce Algorithm Design, starting with local aggregation.

Local Aggregation

At a very high level, when mappers emit data, the intermediate results are written to disk and then sent across the network to reducers for final processing. The latency of writing to disk and then transferring data across the network is an expensive operation in the processing of a MapReduce job. So it stands to reason that, whenever possible, reducing the amount of data sent from mappers will increase the speed of the MapReduce job. Local aggregation is a technique used to reduce the amount of data and improve the efficiency of our MapReduce job. Local aggregation cannot take the place of reducers, as we need a way to gather results with the same key from different mappers. We are going to consider 3 ways of achieving local aggregation:

Using Hadoop Combiner functions.
Two approaches of 'in-mapper' combining presented in the Text Processing with MapReduce book.

Of course, any optimization is going to have tradeoffs, and we'll discuss those as well. To demonstrate local aggregation, we will run the ubiquitous word count job on a plain text version of A Christmas Carol by Charles Dickens (downloaded from Project Gutenberg) on a pseudo-distributed cluster installed on my MacBook Pro, using the hadoop-0.20.2-cdh3u3 distribution from Cloudera. I plan in a future post to run the same experiment on an EC2 cluster with more realistically sized data.

Combiners

A combiner function is an object that extends the Reducer class. In fact, for our examples here, we are going to re-use the same reducer used in the word count job. A combiner function is specified when setting up the MapReduce job like so:

job.setCombinerClass(TokenCountReducer.class);

Here is the reducer code:

public class TokenCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        int count = 0;
        for (IntWritable value : values) {
            count += value.get();
        }
        context.write(key, new IntWritable(count));
    }
}

The job of a combiner is to do just what the name implies: aggregate data, with the net result of less data being shuffled across the network, which gives us gains in efficiency. As stated before, keep in mind that reducers are still required to put together results with the same keys coming from different mappers. Since combiner functions are an optimization, the Hadoop framework offers no guarantees on how many times a combiner will be called, if at all.
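The post does not show the rest of the job setup; a minimal driver wiring the pieces together might look roughly like this (the driver class name and argument handling are assumptions, not from the original):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // swap in one of the in-mapper combining variants below to compare runs
        job.setMapperClass(PerDocumentMapper.class);
        // remove this line to run without the combiner optimization
        job.setCombinerClass(TokenCountReducer.class);
        job.setReducerClass(TokenCountReducer.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}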
In Mapper Combining Option 1

The first alternative to using combiners (figure 3.2, page 41) is very straightforward and makes a slight modification to our original word count mapper:

public class PerDocumentMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        IntWritable writableCount = new IntWritable();
        Text text = new Text();
        Map<String, Integer> tokenMap = new HashMap<String, Integer>();
        StringTokenizer tokenizer = new StringTokenizer(value.toString());

        while (tokenizer.hasMoreElements()) {
            String token = tokenizer.nextToken();
            Integer count = tokenMap.get(token);
            if (count == null) count = new Integer(0);
            count += 1;
            tokenMap.put(token, count);
        }

        Set<String> keys = tokenMap.keySet();
        for (String s : keys) {
            text.set(s);
            writableCount.set(tokenMap.get(s));
            context.write(text, writableCount);
        }
    }
}

As we can see here, instead of emitting a word with the count of 1 for each word encountered, we use a map to keep track of each word already processed. Then, when all of the tokens are processed, we loop through the map and emit the total count for each word encountered in that line.

In Mapper Combining Option 2

The second option of in-mapper combining (figure 3.3, page 41) is very similar to the above example, with two distinctions: when the hash map is created, and when we emit the results contained in the map. In the above example, a map is created and has its contents dumped over the wire for each invocation of the map method. In this example we are going to make the map an instance variable and shift the instantiation of the map to the setUp method in our mapper. Likewise, the contents of the map will not be sent out to the reducers until all of the calls to the mapper have completed and the cleanUp method is called.

public class AllDocumentMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private Map<String, Integer> tokenMap;

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        tokenMap = new HashMap<String, Integer>();
    }

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        StringTokenizer tokenizer = new StringTokenizer(value.toString());
        while (tokenizer.hasMoreElements()) {
            String token = tokenizer.nextToken();
            Integer count = tokenMap.get(token);
            if (count == null) count = new Integer(0);
            count += 1;
            tokenMap.put(token, count);
        }
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        IntWritable writableCount = new IntWritable();
        Text text = new Text();
        Set<String> keys = tokenMap.keySet();
        for (String s : keys) {
            text.set(s);
            writableCount.set(tokenMap.get(s));
            context.write(text, writableCount);
        }
    }
}

As we can see from the above code example, the mapper is keeping track of unique word counts across all calls to the map method. By keeping track of unique tokens and their counts, there should be a substantial reduction in the number of records sent to the reducers, which in turn should improve the running time of the MapReduce job. This accomplishes the same effect as using the combiner function option provided by the MapReduce framework, but in this case you are guaranteed that the combining code will be called. But there are some caveats with this approach also.
Keeping state across map calls could prove problematic, and it is definitely a violation of the functional spirit of a 'map' function. Also, by keeping state across all mappers, depending on the data used in the job, memory could be another issue to contend with. Ultimately, one would have to weigh all of the trade-offs to determine the best approach.

Results

Now let's take a look at some results for the different mappers. Since the job was run in pseudo-distributed mode, actual running times are irrelevant, but we can still infer how using local aggregation could impact the efficiency of a MapReduce job running on a real cluster.

Per Token Mapper:

12/09/13 21:25:32 INFO mapred.JobClient: Reduce shuffle bytes=366010
12/09/13 21:25:32 INFO mapred.JobClient: Reduce output records=7657
12/09/13 21:25:32 INFO mapred.JobClient: Spilled Records=63118
12/09/13 21:25:32 INFO mapred.JobClient: Map output bytes=302886

In Mapper Reducing Option 1:

12/09/13 21:28:15 INFO mapred.JobClient: Reduce shuffle bytes=354112
12/09/13 21:28:15 INFO mapred.JobClient: Reduce output records=7657
12/09/13 21:28:15 INFO mapred.JobClient: Spilled Records=60704
12/09/13 21:28:15 INFO mapred.JobClient: Map output bytes=293402

In Mapper Reducing Option 2:

12/09/13 21:30:49 INFO mapred.JobClient: Reduce shuffle bytes=105885
12/09/13 21:30:49 INFO mapred.JobClient: Reduce output records=7657
12/09/13 21:30:49 INFO mapred.JobClient: Spilled Records=15314
12/09/13 21:30:49 INFO mapred.JobClient: Map output bytes=90565

Combiner Option:

12/09/13 21:22:18 INFO mapred.JobClient: Reduce shuffle bytes=105885
12/09/13 21:22:18 INFO mapred.JobClient: Reduce output records=7657
12/09/13 21:22:18 INFO mapred.JobClient: Spilled Records=15314
12/09/13 21:22:18 INFO mapred.JobClient: Map output bytes=302886
12/09/13 21:22:18 INFO mapred.JobClient: Combine input records=31559
12/09/13 21:22:18 INFO mapred.JobClient: Combine output records=7657

As expected, the mapper that did no combining had the worst results, followed closely by the first in-mapper combining option (although these results could have been made better had the data been cleaned up before running the word count). The second in-mapper combining option and the combiner function had virtually identical results. The significant fact is that both produced 2/3 fewer reduce shuffle bytes than the first two options. Reducing the amount of bytes sent over the network to the reducers by that amount would surely have a positive impact on the efficiency of a MapReduce job. There is one point to keep in mind here: combiners/in-mapper combining cannot be used in all MapReduce jobs. In this case the word count lends itself very nicely to such an enhancement, but that might not always be true.

Conclusion

As you can see, the benefits of using either in-mapper combining or the Hadoop combiner function require serious consideration when looking to improve the performance of your MapReduce jobs. As for which approach to use, it is up to you to weigh the trade-offs for each.

Related links

Data-Intensive Processing with MapReduce by Jimmy Lin and Chris Dyer
Hadoop: The Definitive Guide by Tom White
Source Code from blog
MRUnit for unit testing Apache Hadoop map reduce jobs
Project Gutenberg, a great source of books in plain text format, great for testing Hadoop jobs locally

Happy coding and don't forget to share!

Reference: Working Through Data-Intensive Text Processing with MapReduce from our JCG partner Bill Bejeck at the Random Thoughts On Coding blog.

Test Driven Traps, part 1

Have you ever been in a situation where a simple change of code broke a few hundred tests? Have you ever had the idea that tests slow you down, inhibit your creativity, make you afraid to change the code? If you have, it means you've entered the Dungeon-of-very-bad-tests, the world of things that should not be. I've been there. I've built one myself. And it collapsed, killing me in the process. I've learned my lesson. So here is the story of a dead man. Learn from my faults or be doomed to repeat them.

The story

Test Driven Development, like all good games in the world, is simple to learn, hard to master. I started in 2005, when a brilliant guy named Piotr Szarwas gave me the book 'Test Driven Development: By Example' (Kent Beck), and one task: creating a framework. These were the old times, when the technology we were using had no frameworks at all, and we wanted a cool one, like Spring, with Inversion-of-Control, Object-Relational Mapping, Model-View-Controller and all the good things we knew about. And so we created a framework. Then we built a Content Management System on top of it. Then we created a bunch of dedicated applications for different clients, Internet shops and what-not, on top of those two. We were doing good. We had 3000+ tests for the framework, 3000+ tests for the CMS, and another few thousand for every dedicated application. We were looking at our work, and we were happy, safe, secure. These were good times.

And then, as our code base grew, we came to the point where the simple anemic model we had was not good enough anymore. I had not read the other important book of that time, 'Domain Driven Design', you see. I didn't know yet that you can only get so far with an anemic model. But we were safe. We had tons of tests. We could change anything. Or so I thought.

I spent a week trying to introduce some changes in the architecture. Simple things really: moving methods around, switching collaborators, such things. Only to be overwhelmed by the number of tests I had to fix. That was TDD: I started my change with writing a test, and when I was finally done with the code under the test, I'd find another few hundred tests completely broken by my change. And when I got them fixed, introducing some more changes in the process, I'd find another few thousand broken. That was a butterfly effect, a chain reaction caused by a very small change. It took me a week to figure out that I was not even half done in here. The refactoring had no visible end. And at no point was my code base stable, deployment-ready. I had my branch in the repository, one I renamed 'Lasciate ogne speranza, voi ch'intrate'. We had tons and tons of tests. Of very bad tests. Tests that would pour concrete over our code, so that we could do nothing. The only real options were: either to leave it be, or delete all tests and write everything from scratch again. I didn't want to work with the code if we were to go for the first option, and the management would not find financial rationale for the second. So I quit.

That was the Dungeon I built, only to find myself defeated by its monsters. I went back to the book, and found everything I did wrong in there. Outlined. Marked out. How could I skip that? How could I not notice? Turns out, sometimes you need to be of age and experience to truly understand the things you learn. Even the best of tools, when used poorly, can turn against you.
And the easier the tool, the easier it seems to use, the easier it is to fall into the trap of I-know-how-it-works thinking. And then BAM! You're gone.

The truth

Test Driven Development and tests are two completely different things. Tests are only a byproduct of TDD, nothing more. What is the point of TDD? What does TDD bring? Why do we do TDD? Because of three, and only those three, reasons.

1. To find the best design, by putting ourselves into the user's shoes.

By starting with 'how do I want to use it' thinking, we discover the most useful and friendly design. Always good, quite often the best design out there. Otherwise, what we get is this: [image omitted]. And you don't want that.

2. To manage our fear.

It takes balls to make a ground change in a large code base without tests and say 'it's done' without introducing bugs in the process, doesn't it? Well, the truth is, if you say 'it's done', most of the time you are either ignorant, reckless, or just plain stupid. It's like with concurrency: everybody knows it, nobody can do it well. Smart people are scared of such changes. Unless they have good tests, with high code coverage. TDD allows us to manage our fears by giving us proof that things work as they should. TDD gives us safety.

3. To have fast feedback.

How long can you code without running the app? How long can you code without knowing whether your code works as you think it should? Feedback in tests is important. Less so for frontend programming, where you can just run the shit up and see for yourselves. More so for coding in the backend. Even more if your technology stack requires compilation, deployment, and starting up. Time is money, and I'd rather earn it than wait for the deployment and click through my changes each time I make them.

And that's it. There are no more reasons for TDD whatsoever. We want Good Design, Safety, and Feedback. Good tests are those which give us that. Bad tests? All the other tests are bad.

The bad practice

So how does a typical bad test look? The one I see over and over, in close to every project, created by somebody who has yet to learn how NOT to build an ugly dungeon, how not to pour concrete over the code base. The one I'd have written myself in 2005. This will be a Spock sample, written in Groovy, testing a Grails controller. But don't worry if you don't know those technologies. I bet you'll understand what's going on in there without problems. Yes, it's that simple. I'll explain all the not-so-obvious parts.

def 'should show outlet'() {
    given:
        def outlet = OutletFactory.createAndSaveOutlet(merchant: merchant)
        injectParamsToController(id: outlet.id)
    when:
        controller.show()
    then:
        response.redirectUrl == null
}

So we have a controller. It's an outlet controller. And we have a test. What's wrong with this test? The name of the test is 'should show outlet'. What should a test with such a name check? Whether we show the outlet, right? And what does it check? Whether we are redirected. Brilliant? Useless. It's simple, but I see it all around. People forget that we need to: VERIFY THE RIGHT THING. I bet that test was written after the code. Not in test-first fashion.

But verifying the right thing is not enough. Let's have another example. Same controller, different expectation. The name is: 'should create outlet insert command with valid params with new account'. Quite complex, isn't it? If you need an explanation, the name is wrong.
But you don't know the domain, so let me shed some light on it: when we give the controller good parameters, we want it to create a new OutletInsertCommand, and the account of that one should be new. The name doesn't say what 'new' is, but we should be able to see it in the code. Have a look at the test:

def 'should create outlet insert command with valid params with new account'() {
    given:
        def defaultParams = OutletFactory.validOutletParams
        defaultParams.remove('mobileMoneyAccountNumber')
        defaultParams.remove('accountType')
        defaultParams.put('merchant.id', merchant.id)
        controller.params.putAll(defaultParams)
    when:
        controller.save()
    then:
        1 * securityServiceMock.getCurrentlyLoggedUser() >> user
        1 * commandNotificationServiceMock.notifyAccepters(_)
        0 * _._
        Outlet.count() == 0
        OutletInsertCommand.count() == 1
        def savedCommand = OutletInsertCommand.get(1)
        savedCommand.mobileMoneyAccountNumber == '1000000000000'
        savedCommand.accountType == CyclosAccountType.NOT_AGENT
        controller.flash.message != null
        response.redirectedUrl == '/outlet/list'
}

If you are new to Spock: n * mock.whatever() means that the method 'whatever' of the mock object should be called exactly n times. No more, no less. The underscore '_' means 'everything' or 'anything'. And the >> sign instructs the test framework to return the right-side argument when the method is called.

So what's wrong with this test? Pretty much everything. Let's go from the start of the 'then' part, mercifully skipping the over-verbose setup in the 'given'.

1 * securityServiceMock.getCurrentlyLoggedUser() >> user

The first line verifies whether some security service was asked for a logged user, and returns the user. And it was asked EXACTLY one time. No more, no less. Wait, what? How come we have a security service in here? The name of the test doesn't say anything about security or users, so why do we check it? Well, it's the first mistake. This part is not what we want to verify. This is probably required by the controller, but it only means it should be in the 'given'. And it should not verify that it's called 'exactly once'. It's a stub, for God's sake. The user is either logged in or not. There is no sense in making him 'logged in, but you can ask only once'.

Then there is the second line:

1 * commandNotificationServiceMock.notifyAccepters(_)

It verifies that some notification service is called exactly once. And it may be OK; the business logic may require that. But then... why is it not stated clearly in the name of the test? Ah, I know, the name would be too long. Well, that's also a suggestion. You need to make another test, something like 'should notify about newly created outlet insert command'.

And then there is the third line:

0 * _._

My favorite one. If the code is Han Solo, this line is Jabba the Hutt. It wants Han Solo frozen in solid concrete. Or dead. Or both. This line, if you haven't deduced yet, means 'You shall not make any other interactions with any mock, or stubs, or anything, Amen!'. That's the most stupid thing I've seen in a while. Why would a sane programmer ever put it here? That's beyond my imagination. No it isn't. Been there, done that. The reason why a programmer would use such a thing is to make sure that he covered all the interactions. That he didn't forget about anything. Tests are good, so what's wrong with having more good? He forgot about sanity. That line is stupid, and it will have its vengeance. It will bite you in the ass some day.
And while each bite may be small, because there are hundreds of lines like this, some day you are going to get bitten pretty badly. You may as well not survive. And then, another line:

Outlet.count() == 0

This verifies that we don't have any outlets in the database. Do you know why? You don't. I do. I do, because I know the business logic of this domain. You don't, because this test sucks at informing you what it should.

Then there is the part that actually makes sense:

OutletInsertCommand.count() == 1
def savedCommand = OutletInsertCommand.get(1)
savedCommand.mobileMoneyAccountNumber == '1000000000000'
savedCommand.accountType == CyclosAccountType.NOT_AGENT

We expect the object we've created in the database, and then we verify whether its account is 'new'. And we know that 'new' means a specific account number and type. Though it screams for being extracted into another method. And then...

controller.flash.message != null
response.redirectedUrl == '/outlet/list'

Then we have some flash message set. And a redirection. And I ask God why the hell are we testing this? Not because the name of the test says so, that's for sure. The truth is that, looking at the test, I can recreate the method under test, line by line. Isn't it brilliant? This test represents every single line of a not-so-simple method. But try to change the method, try to change a single line, and you have a big chance to blow this thing up. And when those kinds of tests are in the hundreds, you have concrete all over your code. You'll be able to refactor nothing.

So here's another lesson. It's not enough to verify the right thing. You need to VERIFY ONLY THE RIGHT THING. Never ever verify the algorithm of the method step by step. Verify the outcomes of the algorithm. You should be free to change the method, as long as the outcome, the real thing you expect, is not changed. Imagine a sorting problem. Would you verify its internal algorithm? What for? It's got to work and it's got to work well. Remember, you want good design and safety. Apart from this, it should be free to change. Your tests should not stand in the way.

Now for another horrible example:

@Unroll('test merchant constraints field #field for #error')
def 'test merchant all constraints'() {
    when:
        def obj = new Merchant((field): val)
    then:
        validateConstraints(obj, field, error)
    where:
        field                     | val                                    | error
        'name'                    | null                                   | 'nullable'
        'name'                    | ''                                     | 'blank'
        'name'                    | 'ABC'                                  | 'valid'
        'contactInfo'             | null                                   | 'nullable'
        'contactInfo'             | new ContactInfo()                      | 'validator'
        'contactInfo'             | ContactInfoFactory.createContactInfo() | 'valid'
        'businessSegment'         | null                                   | 'nullable'
        'businessSegment'         | new MerchantBusinessSegment()          | 'valid'
        'finacleAccountNumber'    | null                                   | 'nullable'
        'finacleAccountNumber'    | ''                                     | 'blank'
        'finacleAccountNumber'    | 'ABC'                                  | 'valid'
        'principalContactPerson'  | null                                   | 'nullable'
        'principalContactPerson'  | ''                                     | 'blank'
        'principalContactPerson'  | 'ABC'                                  | 'valid'
        'principalContactInfo'    | null                                   | 'nullable'
        'principalContactInfo'    | new ContactInfo()                      | 'validator'
        'principalContactInfo'    | ContactInfoFactory.createContactInfo() | 'valid'
        'feeCalculator'           | null                                   | 'nullable'
        'feeCalculator'           | new FixedFeeCalculator(value: 0)       | 'valid'
        'chain'                   | null                                   | 'nullable'
        'chain'                   | new Chain()                            | 'valid'
        'customerWhiteListEnable' | null                                   | 'nullable'
        'customerWhiteListEnable' | true                                   | 'valid'
        'enabled'                 | null                                   | 'nullable'
        'enabled'                 | true                                   | 'valid'
}

Do you understand what's going on? If you haven't seen it before, you may very well not.
The 'where' part is a beautiful Spock solution for parametrized tests. The headers of those columns are the names of variables used BEFORE, in the first lines. It's sort of a declaration after the usage. The test is going to be fired many times, once for each line in the 'where' part. And it's all possible thanks to Groovy's Abstract Syntax Tree transformations. We are talking about interpreting and changing the code during the compilation. Cool stuff.

So what is this test doing? Nothing. Let me show you the code under test:

static constraints = {
    name(blank: false)
    contactInfo(nullable: false, validator: { it?.validate() })
    businessSegment(nullable: false)
    finacleAccountNumber(blank: false)
    principalContactPerson(blank: false)
    principalContactInfo(nullable: false, validator: { it?.validate() })
    feeCalculator(nullable: false)
    customerWhiteListEnable(nullable: false)
}

This static closure tells Grails what kind of validation we expect at the object and database level. In Java, these would most probably be annotations. And you do not test annotations. You also do not test static fields. Or closures without any sensible code, without any behavior. And you don't test whether the framework below (Grails/GORM here) works the way it works. Oh, you may test that the first time you are using it, just because you want to know how and if it works. You want to be safe, after all. But then you should probably delete this test, and for sure not repeat it for every single domain class out there. This test doesn't even verify that, by the way. Because it's a unit test, working on a mock of a database. It's not testing the real GORM (Grails Object-Relational Mapping, an adapter on top of Hibernate). It's testing the mock of the real GORM. Yeah, it's that stupid.

So if TDD gives us safety, design and feedback, what does this test provide? Absolutely nothing. So why did the programmer put it here? Because his brain says: tests are good. More tests are better. Well, I've got news for you. Every single test which does not provide us safety and good design is bad. Period. Those which provide only feedback should be thrown away the moment you stop refactoring your code under the test.

So here's my lesson number three: PROVIDE SAFETY AND GOOD DESIGN, OR BE GONE.

That was an example of things gone wrong. What should we do about it? The answer: delete it. But I have yet to see a programmer who removes his tests. Even ones as shitty as this. We feel very personal about our code, I guess. So in case you are hesitating, let me remind you what Kent Beck wrote in his book about TDD:

The first criterion for your tests is confidence. Never delete a test if it reduces your confidence in the behavior of the system. The second criterion is communication. If you have two tests that exercise the same path through the code, but they speak to different scenarios for a reader, leave them alone. [Kent Beck, Test Driven Development: By Example]

Now you know it's safe to delete it. So much for today. I have some good examples to show, some more stories to tell, so stay tuned for part 2.

Reference: Test Driven Traps, part 1 from our JCG partner Jakub Nabrdalik at the Solid Craft blog.

Test Driven Traps, part 2

The Story of a Unit in Unit Tests

In the previous part of this article, you could see some bad, though popular, test samples. But I'm not a professional critic (also known as a troll, or a hater), to grumble about things without having anything constructive to say. Years of TDD have taught me more than just how bad things can go. There are many simple but effective tricks that can make your test life much easier.

Imagine this: you have a booking system for a small conference room in a small company. For some strange reason, it has to deal with off-line booking. People post their booking requests to some frontend, and once a week you get a text file with the working hours of the company and all the bookings (for what day, for how long, by whom, submitted at what point in time) in random order. Your system should produce a calendar for the room, according to some business rules (first come, first served, only in office business hours, that sort of thing). As part of the analysis, we have clearly defined input data and expected outcomes, with examples. A beautiful case for TDD, really. Something that sadly never happens in real life.

Our sample test data looks like this:

class TestData {
    static final String INPUT_FIRST_LINE = "0900 1730\n";
    static final String FIRST_BOOKING  = "2011-03-17 10:17:06 EMP001\n" +
                                         "2011-03-21 09:00 2\n";
    static final String SECOND_BOOKING = "2011-03-16 12:34:56 EMP002\n" +
                                         "2011-03-21 09:00 2\n";
    static final String THIRD_BOOKING  = "2011-03-16 09:28:23 EMP003\n" +
                                         "2011-03-22 14:00 2\n";
    static final String FOURTH_BOOKING = "2011-03-17 10:17:06 EMP004\n" +
                                         "2011-03-22 16:00 1\n";
    static final String FIFTH_BOOKING  = "2011-03-15 17:29:12 EMP005\n" +
                                         "2011-03-21 16:00 3";

    static final String INPUT_BOOKING_LINES =
            FIRST_BOOKING + SECOND_BOOKING + THIRD_BOOKING + FOURTH_BOOKING + FIFTH_BOOKING;

    static final String CORRECT_INPUT = INPUT_FIRST_LINE + INPUT_BOOKING_LINES;

    static final String CORRECT_OUTPUT = "2011-03-21\n" +
                                         "09:00 11:00 EMP002\n" +
                                         "2011-03-22\n" +
                                         "14:00 16:00 EMP003\n" +
                                         "16:00 17:00 EMP004\n" +
                                         "";
}

So now we start with a positive test:

BookingCalendarGenerator bookingCalendarGenerator = new BookingCalendarGenerator();

@Test
public void shouldPrepareBookingCalendar() {
    //when
    String calendar = bookingCalendarGenerator.generate(TestData.CORRECT_INPUT);

    //then
    assertEquals(TestData.CORRECT_OUTPUT, calendar);
}

It looks like we have designed a BookingCalendarGenerator with a 'generate' method. Fair enough.
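The post never shows BookingCalendarGenerator itself (deliberately, since the point is the tests); the skeleton implied by the positive test would be something like this sketch:

public class BookingCalendarGenerator {

    // Parses the working-hours header and booking lines, applies the business
    // rules, and renders the calendar as text (implementation left out here).
    public String generate(String input) {
        return "";
    }
}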
We get something like this:

@Test
public void noPartOfMeetingMayFallOutsideOfficeHours() {
   //given
   String tooEarlyBooking = "2011-03-16 12:34:56 EMP002\n" + "2011-03-21 06:00 2\n";
   String tooLateBooking  = "2011-03-16 12:34:56 EMP002\n" + "2011-03-21 20:00 2\n";

   //when
   String calendar = bookingCalendarGenerator.generate(
      TestData.INPUT_FIRST_LINE + tooEarlyBooking + tooLateBooking);

   //then
   assertTrue(calendar.isEmpty());
}

@Test
public void meetingsMayNotOverlap() {
   //given
   String firstMeeting  = "2011-03-10 12:34:56 EMP002\n" + "2011-03-21 16:00 1\n";
   String secondMeeting = "2011-03-16 12:34:56 EMP002\n" + "2011-03-21 15:00 2\n";

   //when
   String calendar = bookingCalendarGenerator.generate(
      TestData.INPUT_FIRST_LINE + firstMeeting + secondMeeting);

   //then
   assertEquals("2011-03-21\n" + "16:00 17:00 EMP002\n", calendar);
}

@Test
public void bookingsMustBeProcessedInSubmitOrder() {
   //given
   String firstMeeting  = "2011-03-17 12:34:56 EMP002\n" + "2011-03-21 16:00 1\n";
   String secondMeeting = "2011-03-16 12:34:56 EMP002\n" + "2011-03-21 15:00 2\n";

   //when
   String calendar = bookingCalendarGenerator.generate(
      TestData.INPUT_FIRST_LINE + firstMeeting + secondMeeting);

   //then
   assertEquals("2011-03-21\n15:00 17:00 EMP002\n", calendar);
}

@Test
public void orderingOfBookingSubmissionShouldNotAffectOutcome() {
   //given
   List<String> shuffledBookings = newArrayList(TestData.FIRST_BOOKING, TestData.SECOND_BOOKING,
      TestData.THIRD_BOOKING, TestData.FOURTH_BOOKING, TestData.FIFTH_BOOKING);
   shuffle(shuffledBookings);
   String inputBookingLines = Joiner.on("\n").join(shuffledBookings);

   //when
   String calendar = bookingCalendarGenerator.generate(TestData.INPUT_FIRST_LINE + inputBookingLines);

   //then
   assertEquals(TestData.CORRECT_OUTPUT, calendar);
}

That’s pretty much all. But what if we get some rubbish as the input? Or an empty string? Let’s design for that:

@Test(expected = IllegalArgumentException.class)
public void rubbishInputDataShouldEndWithException() {
   //when
   String calendar = bookingCalendarGenerator.generate("rubbish");

   //then exception is thrown
}

@Test(expected = IllegalArgumentException.class)
public void emptyInputDataShouldEndWithException() {
   //when
   String calendar = bookingCalendarGenerator.generate("");

   //then exception is thrown
}

IllegalArgumentException is fair enough. We don’t need to handle it in any fancier way. We are done for now. Let’s finally write the class under test: BookingCalendarGenerator. And so we do. And it turns out that the whole thing is a little big for a single method. So we use the power of the Extract Method pattern. We group code fragments into different methods. We group methods and the data they operate on into classes. We use the power of Object-Oriented programming, we use the Single Responsibility Principle, we use composition (or decomposition, to be precise), and we end up with a package holding one public class and several package-scope classes. Those package-scope classes clearly belong to the public one. And those aren’t stupid data objects. Those are full-fledged classes. With behaviour, responsibility, encapsulation. And here’s a thing that may come to our Test-Driven minds: we have no tests for those classes. We have tests only for the public class. That’s bad, right? Having no tests must be bad. Very bad. Right? Wrong. We do have tests. We fire up our code coverage tool and we see: 100% methods and classes, 95% lines. Not bad (I’ll get to that 5% of uncertainty in the next post). But we have only a single unit test class. Is that good?
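Before answering, here is a hypothetical sketch of what such a decomposition can look like (the class names and bodies below are my illustration, not the original project’s code):

// BookingCalendarGenerator.java: one public entry point, package-scope helpers.
public class BookingCalendarGenerator {

   public String generate(String input) {
      // Matches the exception tests above: rubbish or empty input is rejected.
      if (input == null || !input.contains("\n")) {
         throw new IllegalArgumentException("Input must contain at least an office-hours line");
      }
      // Delegate the real work to package-scope collaborators.
      OfficeHours hours = new OfficeHours(input.substring(0, input.indexOf('\n')));
      return new CalendarRenderer(hours).render();
   }
}

// Package-scope: invisible, and untestable in isolation, from outside this package.
class OfficeHours {
   private final String line;
   OfficeHours(String line) { this.line = line; }
   String asLine() { return line; }
}

class CalendarRenderer {
   private final OfficeHours hours;
   CalendarRenderer(OfficeHours hours) { this.hours = hours; }
   // The real scheduling and rendering logic is omitted in this sketch.
   String render() { return ""; }
}

Only BookingCalendarGenerator is public; the compiler itself keeps anyone outside the package away from the helpers.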
Well, let me put some emphasis on it, to point the answer out: it’s a UNIT test. It’s called a UNIT test for a reason! The unit does not have to be a single class. The unit does not have to be a single package. The unit is up to you to decide. It’s a general name, because your sanity, your common sense, should tell you where to stop. So we have six classes as a unit; what’s the big deal? What if somebody wants to use one of those classes apart from the rest? He would have no tests for it, right? Wrong. Those classes are package-scope, apart from the one that’s actually called in the test. This package-scope thing tells you: “Back off. Don’t touch me, I belong to this package. Don’t try to use me separately, I was designed to be here!”. So yeah, if a programmer takes one of those out, or makes it public, he would probably know that all the guarantees are voided. Write your own tests, man. What if somebody wants to add some behaviour to one of those classes, I’ve been asked. How would he know he’s not breaking something? Well, he would start with a test, right? It’s TDD, right? If you have a change of requirements, you code this change as a test, and then, and only then, you start messing with the code. So you are safe and secure. I see people writing test-per-class blindly, without giving any thought to it, and it makes me cry. I do a lot of pair programming lately, and you know what I’ve found? Java programmers in general do not use package-scope. Java programmers in general do not know that protected means: for me, all my descendants, and EVERYONE in the same package. That’s right: protected is more than package-scope, not one bit less (a short code sketch follows below). So if Java programmers do not know what package-scope really is, and that, contrary to Groovy, is the default, how could they understand what a unit is?

How high can I get?

Now here’s an interesting thought: if we can have a single test for a package, we could have a single test for a whole package tree. We all know that packages in Java are not really tree-like, that the only thing they share with the directory structure is a very old convention, and we know that the directory structure is there only to solve the collision-of-names problem. Nevertheless, we tend to use packages as if the name.after.the.dot had some meaning. As if we could hide one package inside another. Or build layers of lasagne with them. So is it O.K. to have a single test class for a tree of packages? Yes, it is. But if so, where does it end? Can we go all the way up the package tree, to the entry point of our application? Those... those would be integration tests, or functional tests, perhaps. Could we do that? Would that be good? The answer is: it would. In a perfect world, it would be just fine. In our shitty, hanging-on-the-edge-of-a-knife world, it would be insane. Why? Because functional, end-to-end tests are slow. So slow. So horribly slow that it makes you wanna throw them away and go some place where you would not always be waiting for something. A place of total creativity, constant feedback, and lightning-fast safety. And you’re back to unit testing. There are even more reasons. One being that it’s hard to test all flows of the application end-to-end. You should probably do that for all the major flows, but what about errors, bad connections, all those tricky logic parts that may throw up at one point or another?
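Since the protected-versus-package-scope rule surprises so many people, here it is in code form, a minimal sketch of my own, before we get back to end-to-end tests:

package visibility;

public class Parent {
   void packageScoped() {}        // default access: visible in this package only
   protected void forFamily() {}  // this package AND every subclass, even in other packages
}

class Neighbour {                 // same package, same source file
   void poke(Parent p) {
      p.packageScoped();          // compiles: same package
      p.forFamily();              // compiles too: protected includes the whole package
   }
}

// In some other package:
// class Child extends visibility.Parent {
//    void callIt()    { forFamily(); }     // compiles: Child is a descendant
//    void callOther() { packageScoped(); } // does NOT compile: package access stops here
// }

Now, back to those tricky error flows: can we really cover them all end-to-end?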
No, sometimes it would just be too hard to set up the environment for an integration test like that, so you end up testing it with unit tests anyway. The second reason is that though functional tests do not pour concrete over your code, and do not inhibit your creativity by repeating your algorithm in the test case, they also give no safety for refactoring. When you had a package with a single public class, it was quite obvious what someone could safely do and what he could not. When you have something enclosed in a library, or a plugin, it’s still obvious. But if you have thousands of public classes, and you are implementing a new feature, you are probably going to use some of them, and you would like to know that they are fine. So, no, in our world it doesn’t make sense to go with functional tests only. Sorry. But it also doesn’t make sense to create a test per class. It’s called a UNIT test, for a reason. Use that. Happy coding and don’t forget to share! Reference: Test Driven Traps, part 2 from our JCG partner Jakub Nabrdalik at the Solid Craft blog....

Disrupt Tech Recruiting II – So You Want Ari Gold?

After publishing How to Disrupt Technical Recruiting – Hire an Agent and reading the subsequent feedback from readers at Hacker News and elsewhere, it is clear that at least some subset of engineers believes two things:

1. The technical recruiting industry is at times remarkably flawed, and the financial incentives inherent to the system will not always lead a recruiter to represent the job seeker’s best interests.
2. There is some demand for a talent/agent model for tech professionals, and it is a service several would be willing to pay for.

And it is also worth noting that the only agent engineers have known is Ari Gold (or at least that is the agent they want). Not even ONE Jerry Maguire reference? It seems engineers want to hug it out and care less about being shown the money.

During my dissection of the industry, I somehow overlooked perhaps the biggest flaw, one at the absolute core of the issues with the recruiting industry. I’m a bit embarrassed that I missed it, and it wasn’t mentioned in the threads, so here goes: secrecy and privacy of information is THE cornerstone of traditional recruiting.

Why is that? Well, think about it. If hiring companies knew every local candidate that possessed the skills they were seeking, many would not use an agency recruiter to handle the process and would simply have HR/managers contact them directly. Likewise, many candidates who distrust recruiters would apply directly to jobs if every available job were listed in one single place to search. There would certainly be some companies and candidates that recognize the value the recruiter brings beyond the initial introduction, but if all information were available to both sides we would find much less demand in the industry.

Why do you think recruiters don’t generally list their clients’ names publicly? Why are there so many complaints about recruiters being unwilling to share a deep amount of detail about projects or their open jobs? There are two main reasons:

1. To prevent other recruiters from learning who they represent (NOTE: this is a big one, particularly for recruiters that work exclusively with start-ups that fly under the radar).
2. To prevent candidates from applying directly to those companies, thus cutting out the chance for the recruiter’s fee.

So in order to preserve their chances of being paid a fee, recruiters need to keep candidates and companies from knowing about each other until a precise moment when they make the introduction to both sides, and then hope that the company doesn’t respond with some evidence of prior contact and that the candidate doesn’t say that he/she applied to that job yesterday. If either of those two scenarios happens, the recruiter has some incentive for that deal NOT to work out, even though it is his/her client! Let’s assume the company has one vacancy, and my candidate applied to it a day before I discussed it with him/her. My incentive instantly goes from wanting my candidate to get the job (so I can get a fee) to a financial incentive that the candidate isn’t successful (so the job will still be available for another candidate I can find). That is a very dark and unfortunate set of incentives inherent to the system. It’s very easy to see how the incentives are built in this model. Recruiters have the incentive to help their client companies fill jobs, but only if the hire comes from their agency. The recruiter has the incentive to help a candidate find a job, but only if he/she takes a job at one of the recruiter’s clients.
This is the unfortunate symptom of contingency work, and thankfully in my current business model (which isn’t contingency) I don’t have these same incentives to the same degree. Who are contingency recruiters really providing a service to, anyway?

Why do you think recruiters see LinkedIn as both a blessing and a curse? We use it to find new candidates and keep in touch with past candidates and clients, yet we realize that everyone else now has access to the same information and companies can much more easily find candidates. As a recruiter, you are probably likely to list your contacts as ‘private’ for this reason. The agent model would not have this privacy incentive built into the system. I could see an argument made where an agent might not want other agents to know who he/she is representing, but I think that is a much smaller problem than what we have today. When one of an agent’s talents is seeking work, there would be a campaign of open information to try and get the talent noticed. Having a business model based on privacy and secrecy is much less attractive than an agent model based on openness.

Enough about what’s broken; let’s talk about solutions. Based on the discussions, here are the services that the talent knows they want:

- Negotiation in job changes – Engineers seem to agree that an agent would probably be better at negotiating compensation packages. Another added benefit of having an agent act as a proxy in negotiations is the preserved comfort factor after you potentially start the job. Picture a very trying or heated negotiation between an engineer and a start-up CEO that comes to an eventual agreement, and then the two must work in adjacent cubicles the following Monday. A buffer would have helped prevent any potential awkwardness.
- Job identification and coordination – Many in the community made it pretty clear that they are not interested or skilled in cold calling and introducing their services to potential clients. Having an agent do the legwork to identify potential employers and then to contact them and schedule meetings on the talent’s behalf will save lots of time for other things. For consultants paid hourly, remember that every hour spent on a job search is an hour you can’t bill to a client.
- Competitive salary/rate information – There seems to be a general distrust of websites that provide market information, and engineers may be somewhat insulated when it comes to what others with similar skills are earning. An agent would have some solid evidence regarding the street value of any talent and would do the necessary research.
- Marketing/PR/Selling – It has become clear to me over the years that there is a sizable percentage of talented engineers that are uncomfortable talking up their own skills, and that was mentioned a couple of times as another asset of an agent. Marketing and PR would probably mean different things at different career levels, but it could include work to increase the talent’s blog traffic, booking a presenter’s invite to a users group/meetup, or strategy on how to build the talent’s brand.

There were some services and advantages that were somewhat surprisingly not mentioned:

- Handling incoming job solicitations – All of the complaints about recruiter spam and cold calls are gone for the agent’s talent. Forward calls, emails and LinkedIn spam to your agent, and he/she will investigate. Put your agent’s name on your website and LinkedIn profile. If the opportunity is legitimate and meets the criteria the talent sets for sharing, you will get the details.
If not, you never have to waste your time hearing about it.
- Interview coaching – Not just the standard-fare interview coaching you can find on web sites, but also any inside tips on the specific interview. This could include past experiences others have had with the company, some research on the background of the people you are meeting, and anything else that will be helpful to the talent. Remember that the agent is representing you at this point, and not the company. As one reader noted, a good agent would not want to do companies a disservice by providing specific interview questions (as that would harm the agent model, the hiring company, and the agent himself/herself), but it is fair to offer the talent some guidance on what format to expect and who they will be meeting.
- Internal company information – As your representative, the agent should be providing talent with details on both their current employer and any that they may be considering for future employment. The information learned from discussions with others in the business will be helpful to the agent’s entire stable of talent. Consider it the type of info you may find on Glassdoor, but from verified sources.
- Career advice in non-search situations – Are you thinking about moving from code into management, or considering taking a leadership position on a project doomed for failure? Considering a jump into contracting, or thinking about abandoning your current independent consulting project to join a start-up? Want to talk about it? The value of this advice could be quite large yet difficult to quantify.
- Negotiation of promotions or raises – Changing jobs is not the only scenario where negotiation may take place. An agent could help you get the best deal from your current employer, either through direct negotiation or by coaching you on how to maximize your chances of getting a good number.
- Resume creation/curating – This one isn’t a huge service, but it’s a few hours out of your work year.
- Transparency! – The agent model is completely transparent to everyone involved. The agent represents the talent, and the talent pays for that representation. There is no question about the agent’s loyalty as there is in contingency recruiting models (is the recruiter representing me or the company?).

Some potential issues were identified in the comments:

- Would there be any conflict of interest when representing multiple candidates? Doubtful. I guess there could be a rare situation where two talents vie for the same position, and in that scenario the agent would simply represent both fairly and let the best person for the job win. The agent would probably have no financial incentive favoring either candidate. Transparency would be important (revealing the situation to both talents).
- Would there be backlash by hiring entities against candidates who use an agent? It’s hard to tell, but it’s conceivable that some organizations might react hesitantly at first. However, the key here is that an agent would be providing a service to a company for free (or if not completely free, definitely much less expensive than recruiters). It would be hard to think that firms would not be very pleased with a cold call from an agent stating that there is a qualified candidate interested in their company, with all the normal recruiter services for the process and no recruiter fee at the back end.
- How much would the agent need to know about technology?
This one comes up quite a bit in criticisms of technical recruiters, and then in the discussion of an agent. The agent needs to know enough about tech that he/she won’t misrepresent the talent, and will not waste time marketing talent to positions that are not a technical fit. An agent should also have some sense of tech trends, what skills are in demand, and the overall market for different types of talent. The biggest complaint seems to be wasting time with discussions of jobs that are not a fit, or being sent on interviews that are not appropriate. An agent would not have the same incentive to send you everywhere imaginable, as that incentive is not built into the agent model.
- Would the service be just for contractors? No, not the way I see it. The pricing model for contractors is very easy to consider, as paying a percentage of the hourly rate to the agent is already a common practice for staffing firms (what is called ‘margin’). The value of an agent’s service to engineers in salaried positions seems very obvious as well, in helping to manage the overall career and not just the career moves.

How do we compensate the agent? This is a bit tricky, and I believe that the pricing model might be different for contractors and talent in direct salaried positions. For contractors, the existing model (a ‘per hour’ cut) is well-known and accepted. I think the main difference is that an agent would have some fixed transparent rate ($x/hr or y%/hr). Many recruiting firms do not reveal the full bill rate to the contractor, which tends to cause distrust when the contractor finds out the actual rate. Having a fixed percentage or dollar-per-hour figure would make the relationship much more comfortable for both sides.

For talent in permanent salaried positions, there needs to be more discussion. My vision would be some flat annual representation fee for service, with an additional charge/bonus for times of active job search (when the level of services and time invested would increase). The job search fee could have several models built in – perhaps an hourly rate, a flat fee per week/month, a flat fee per job search, or some percentage of salary. The problem with the percentage-of-salary model is that it gives the agent some incentive to suggest you take the highest-paying job even if that is not best for your career. Another model would be a negotiation bonus above some level set and agreed upon by the agent and talent: “For anything I get you over 120K, I get a one-time bonus of 50% of the difference.” (So if the agent negotiates the offer up to 130K, the bonus is 50% of the 10K difference, a one-time 5K.) That leads to some incentive to take the highest-paying job, but it also gives more motivation to negotiate. The reason I include an annual representation fee in the discussion is that it helps to eliminate an agent’s incentive for candidates to change jobs, and it allows the agent to justify investing time with the talent to discuss career-related issues that come up. In the traditional recruiting model, the recruiter hopes everyone is looking for a job at all times. An agent would have much less of a financial incentive for you to change employers, so the agent wouldn’t be as likely to suggest you leave a job that is right for you.

CONCLUSION

Traditional contingency recruiting incentives are:
- Privacy and secrecy of client company and candidate information.
- Only help a client company when it results in a fee.
- Only help candidates when they take jobs at my clients.
- A desire for people to leave jobs as often as possible.

Agent model incentives:
- Keep the talent happy so they remain your talent.
- ???

Let’s keep the dialogue going on this. Based on lots of direct feedback, it seems the community could be on to something. Agree, disagree, suggest? Reference: Disrupt Tech Recruiting II – So You Want Ari Gold? from our JCG partner Dave Fecak at the Job Tips For Geeks blog....

Pure Java JavaFX 2.0 Menus

In recent posts on JavaFX, I have focused on using JavaFX 2.0’s new Java APIs without use of JavaFX 1.x’s JavaFX Script and without use of JavaFX 2.0’s new FXML. All of these examples have been compiled with the standard Java compiler and executed with the standard Java launcher. In this post, I continue the theme of using pure Java APIs supported by JavaFX 2.0 while demonstrating development of JavaFX 2.0 menus. I provide the entire code listing for this example later in this post, but first I show snippets of the code to make it easier to focus on each piece. A good starting point for using JavaFX 2.0 menus is to instantiate an instance of MenuBar. This is straightforward, as shown next.

Instantiating a javafx.scene.control.MenuBar

final MenuBar menuBar = new MenuBar();

A MenuBar can contain Menu instances as its children, and each Menu instance can have instances of MenuItem as its children. The next code listing demonstrates instantiation of a Menu, adding of MenuItem instances (or an instance of SeparatorMenuItem) to that Menu instance, and then adding the Menu instance to the instance of MenuBar.

Adding Newly Instantiated Menu and MenuItem Instances to MenuBar

// Prepare left-most 'File' drop-down menu
final Menu fileMenu = new Menu("File");
fileMenu.getItems().add(new MenuItem("New"));
fileMenu.getItems().add(new MenuItem("Open"));
fileMenu.getItems().add(new MenuItem("Save"));
fileMenu.getItems().add(new MenuItem("Save As"));
fileMenu.getItems().add(new SeparatorMenuItem());
fileMenu.getItems().add(new MenuItem("Exit"));
menuBar.getMenus().add(fileMenu);

The example above is too simple for realistic use. There are no event handlers or actions associated with clicking on any of the menu items, and there is no way to select the menu items via keystroke rather than via mouse click. The next code listing demonstrates instantiation of MenuItem instances that include more than just a text string. In this code listing, there is an example of using MenuItemBuilder to build a much more complex MenuItem that includes an association to a key combination and an association to an action handler.

More Sophisticated MenuItem Instantiation with Keystroke and Event Associations

// Prepare 'Help' drop-down menu
final Menu helpMenu = new Menu("Help");
final MenuItem searchMenuItem = new MenuItem("Search");
searchMenuItem.setDisable(true);
helpMenu.getItems().add(searchMenuItem);
final MenuItem onlineManualMenuItem = new MenuItem("Online Manual");
onlineManualMenuItem.setVisible(false);
helpMenu.getItems().add(onlineManualMenuItem);
helpMenu.getItems().add(new SeparatorMenuItem());
final MenuItem aboutMenuItem = MenuItemBuilder.create()
   .text("About")
   .onAction(
      new EventHandler<ActionEvent>() {
         @Override
         public void handle(ActionEvent e) {
            out.println("You clicked on About!");
         }
      })
   .accelerator(
      new KeyCodeCombination(
         KeyCode.A, KeyCombination.CONTROL_DOWN))
   .build();
helpMenu.getItems().add(aboutMenuItem);
menuBar.getMenus().add(helpMenu);

Besides demonstrating MenuItemBuilder, associating a key combination (CTRL-A in this case) with a menu item, and associating an action with a menu item, this code example also demonstrates making a menu item disabled (grayed out) with setDisable(boolean) and making it not appear at all with setVisible(boolean). Although I could have specified disabling the menu item or making the menu item invisible with the MenuItemBuilder, I intentionally used ‘set’ methods on the MenuItems in this example to contrast that approach with using the MenuItemBuilder.
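To make that contrast concrete, here is roughly the same 'About' menu item wired up with a constructor and 'set' methods instead of the builder (my sketch, relying on the same imports as the full listing below):

// Same 'About' item, built with plain setters rather than MenuItemBuilder.
final MenuItem aboutMenuItem = new MenuItem("About");
aboutMenuItem.setOnAction(
   new EventHandler<ActionEvent>() {
      @Override
      public void handle(ActionEvent e) {
         out.println("You clicked on About!");  // same action as the builder version
      }
   });
// setAccelerator accepts any KeyCombination; CTRL-A again here.
aboutMenuItem.setAccelerator(
   new KeyCodeCombination(KeyCode.A, KeyCombination.CONTROL_DOWN));

Both approaches should yield an identical menu item; the builder version simply reads as one fluent expression.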
For completeness, here is the entire code listing of my example.

JavaFxMenus.java (The Complete Listing)

package dustin.examples;

import static java.lang.System.out;

import javafx.application.Application;
import javafx.beans.property.ReadOnlyDoubleProperty;
import javafx.event.ActionEvent;
import javafx.event.EventHandler;
import javafx.scene.Group;
import javafx.scene.Scene;
import javafx.scene.control.*;
import javafx.scene.input.KeyCode;
import javafx.scene.input.KeyCodeCombination;
import javafx.scene.input.KeyCombination;
import javafx.scene.paint.Color;
import javafx.stage.Stage;

/**
 * Example of creating menus in JavaFX.
 *
 * @author Dustin
 */
public class JavaFxMenus extends Application
{
   /**
    * Build menu bar with included menus for this demonstration.
    *
    * @param menuWidthProperty Width to be bound to menu bar width.
    * @return Menu Bar with menus included.
    */
   private MenuBar buildMenuBarWithMenus(final ReadOnlyDoubleProperty menuWidthProperty)
   {
      final MenuBar menuBar = new MenuBar();

      // Prepare left-most 'File' drop-down menu
      final Menu fileMenu = new Menu("File");
      fileMenu.getItems().add(new MenuItem("New"));
      fileMenu.getItems().add(new MenuItem("Open"));
      fileMenu.getItems().add(new MenuItem("Save"));
      fileMenu.getItems().add(new MenuItem("Save As"));
      fileMenu.getItems().add(new SeparatorMenuItem());
      fileMenu.getItems().add(new MenuItem("Exit"));
      menuBar.getMenus().add(fileMenu);

      // Prepare 'Examples' drop-down menu
      final Menu examplesMenu = new Menu("JavaFX 2.0 Examples");
      examplesMenu.getItems().add(new MenuItem("Text Example"));
      examplesMenu.getItems().add(new MenuItem("Objects Example"));
      examplesMenu.getItems().add(new MenuItem("Animation Example"));
      menuBar.getMenus().add(examplesMenu);

      // Prepare 'Help' drop-down menu
      final Menu helpMenu = new Menu("Help");
      final MenuItem searchMenuItem = new MenuItem("Search");
      searchMenuItem.setDisable(true);
      helpMenu.getItems().add(searchMenuItem);
      final MenuItem onlineManualMenuItem = new MenuItem("Online Manual");
      onlineManualMenuItem.setVisible(false);
      helpMenu.getItems().add(onlineManualMenuItem);
      helpMenu.getItems().add(new SeparatorMenuItem());
      final MenuItem aboutMenuItem = MenuItemBuilder.create()
         .text("About")
         .onAction(
            new EventHandler<ActionEvent>() {
               @Override
               public void handle(ActionEvent e) {
                  out.println("You clicked on About!");
               }
            })
         .accelerator(
            new KeyCodeCombination(
               KeyCode.A, KeyCombination.CONTROL_DOWN))
         .build();
      helpMenu.getItems().add(aboutMenuItem);
      menuBar.getMenus().add(helpMenu);

      // bind width of menu bar to width of associated stage
      menuBar.prefWidthProperty().bind(menuWidthProperty);

      return menuBar;
   }

   /**
    * Start of JavaFX application demonstrating menu support.
    *
    * @param stage Primary stage.
    */
   @Override
   public void start(final Stage stage)
   {
      stage.setTitle("Creating Menus with JavaFX 2.0");
      final Group rootGroup = new Group();
      final Scene scene = new Scene(rootGroup, 800, 400, Color.WHEAT);
      final MenuBar menuBar = buildMenuBarWithMenus(stage.widthProperty());
      rootGroup.getChildren().add(menuBar);
      stage.setScene(scene);
      stage.show();
   }

   /**
    * Main executable function for running examples.
    *
    * @param arguments Command-line arguments: none expected.
    */
   public static void main(final String[] arguments)
   {
      Application.launch(arguments);
   }
}

The next series of screen snapshots attempts to demonstrate what this application looks like when executed using the java launcher.
The images show the initial appearance of the application, the drop-down menu presented when the ‘File’ menu is clicked, the drop-down menu presented when the ‘Help’ menu is clicked, and finally the message written to standard output when the ‘About’ menu item under the ‘Help’ menu is clicked. The code in the example featured in this post has numerous syntax features that should look familiar to Swing developers. In fact, many of the JavaFX classes used above have the same names as AWT classes, so care must be taken to import the correct class when using the IDE’s automatic import suggestions. The example above also demonstrates JavaFX binding. In particular, the width of the menu bar is bound to the width of the stage. This is useful because it looks better to have the menu bar span the entire top of the window rather than being just wide enough to hold the menu labels. Building menus is fairly straightforward in JavaFX 2.0 and can be implemented using basic Java tools and the JavaFX 2.0 JAR. Happy coding and don’t forget to share! Reference: (Pure Java) JavaFX 2.0 Menus from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

Android books giveaway for celebrating Packt’s 1000th title

Fellow geeks, we are pleased to announce that we have once again teamed up with Packt Publishing and we are organizing another giveaway for you! The occasion is the celebration of 1000 IT titles by Packt! So, 4 lucky winners will have the chance to win a copy of one of Packt’s best-selling books on Android (ebook format). Read along to find more details about the contest.

Packt Publishing is about to publish its 1000th title. Packt books are renowned among developers for being uniquely practical and focused, covering highly specific tools and technologies. For this occasion, Packt is giving everyone a surprise gift! However, the surprise won’t be revealed until September 28, 2012, which is when the 1000th title is expected to be unveiled. To receive this gift, simply register for an account on the Packt website and return between September 28 and 30.

Another thing to note is that Packt supports many of the Open Source projects covered by its books through a project royalty donation, which has contributed over £300,000 to Open Source projects up to now. As part of the celebration, Packt is allocating $30,000 to share between projects and authors, soon to be disclosed on their website.

And of course, Packt was kind enough to provide some cool Android books just for the readers of Java Code Geeks. Here are the books!

- AndEngine for Android Game Development Cookbook: RAW – An overview of the uber-cool AndEngine game engine for Android game development. Includes step-by-step detailed instructions and information on a number of AndEngine functions, including illustrations and diagrams for added support and results. This book is currently available as a RAW (Read As we Write) book.
- Android 3.0 Application Development Cookbook – Quickly develop applications that take advantage of the very latest mobile technologies, including web apps, sensors, and touch screens. Excellent for developing a full Android application.
- Appcelerator Titanium Smartphone App Development Cookbook – Leverage your JavaScript skills to write mobile applications using Titanium Studio tools, with the native advantage!
- Android User Interface Development: Beginner’s Guide – Leverage the Android platform’s flexibility and power to design impactful user interfaces.

How to Enter? Just send an email here using as subject “Packt Android Giveaway”. An empty email will do, it’s that simple! Optionally, in your email you may mention which book you would prefer the most! We are waiting for your emails! (Note: By entering the contest you will be automatically included in the forthcoming Java Code Geeks Newsletter.)

Again, don’t forget to sign up for a free account at Packt before the 30th of September, 2012. Visit their website: www.PacktPub.com. A pleasant surprise gift is in order for each one of you! We would love it if you shared this! Spread the word, people! ;-)...