
Maven Cargo plugin for Integration Testing

A very common need in the lifecycle of a project is setting up integration testing. Luckily, Maven has built-in support for this exact scenario, with the following phases of the default build lifecycle (from the Maven documentation):

– pre-integration-test: Perform actions required before integration tests are executed. This may involve things such as setting up the required environment.
– integration-test: Process and deploy the package if necessary into an environment where integration tests can be run.
– post-integration-test: Perform actions required after integration tests have been executed. This may involve things such as cleaning up the environment.

First, the maven-surefire-plugin is configured so that integration tests are excluded from the standard build lifecycle:

<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-surefire-plugin</artifactId>
   <version>2.10</version>
   <configuration>
      <excludes>
         <exclude>**/*IntegrationTest.java</exclude>
      </excludes>
   </configuration>
</plugin>

Exclusions are done via ant-style path expressions, so all integration tests must follow this pattern and end with “IntegrationTest.java“.

Next, the cargo-maven2-plugin is used, as Cargo comes with top-notch out-of-the-box support for embedded web servers. Of course, if the server environment requires specific configuration, Cargo also knows how to construct the server out of an archived package, as well as deploy to an external server.

<plugin>
   <groupId>org.codehaus.cargo</groupId>
   <artifactId>cargo-maven2-plugin</artifactId>
   <version>1.1.3</version>
   <configuration>
      <wait>true</wait>
      <container>
         <containerId>jetty7x</containerId>
         <type>embedded</type>
      </container>
      <configuration>
         <properties>
            <cargo.servlet.port>8080</cargo.servlet.port>
         </properties>
      </configuration>
   </configuration>
</plugin>

An embedded Jetty 7 web server is defined, listening on port 8080.
Notice the wait flag being set to true – this is because for newer versions of Cargo (1.1.0 upwards), the default value of the flag has changed from true to false, due to this bug. We want to be able to start the project by simply running mvn cargo:start, especially during the development phase, so the flag should be active. However, when running the integration tests we want the server to start, allow the tests to run and then stop, which is why the flag will be overridden later on.

In order for the package Maven phase to generate a deployable war file, the packaging of the project must be: <packaging>war</packaging>.

Next, a new integration Maven profile is created, to enable running the integration tests only when this profile is active, and not as part of the standard build lifecycle.

<profiles>
   <profile>
      <id>integration</id>
      <build>
         <plugins>
            ...
         </plugins>
      </build>
   </profile>
</profiles>

It is this profile that will contain all the remaining configuration. Now, the Jetty server is configured to start in the pre-integration-test phase and stop in the post-integration-test phase.

<plugin>
   <groupId>org.codehaus.cargo</groupId>
   <artifactId>cargo-maven2-plugin</artifactId>
   <configuration>
      <wait>false</wait>
   </configuration>
   <executions>
      <execution>
         <id>start-server</id>
         <phase>pre-integration-test</phase>
         <goals>
            <goal>start</goal>
         </goals>
      </execution>
      <execution>
         <id>stop-server</id>
         <phase>post-integration-test</phase>
         <goals>
            <goal>stop</goal>
         </goals>
      </execution>
   </executions>
</plugin>

This ensures that the cargo:start and cargo:stop goals will execute before and after the integration-test phase. Note that because there are two individual execution definitions, the id element must be present (and different) in both, so that Maven can accept the configuration.
Next, the maven-surefire-plugin configuration needs to be overridden inside the integration profile, so that the integration tests which were excluded in the default lifecycle will now be included and run:

<plugins>
   <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <executions>
         <execution>
            <phase>integration-test</phase>
            <goals>
               <goal>test</goal>
            </goals>
            <configuration>
               <excludes>
                  <exclude>none</exclude>
               </excludes>
               <includes>
                  <include>**/*IntegrationTest.java</include>
               </includes>
            </configuration>
         </execution>
      </executions>
   </plugin>
</plugins>

There are a few things worth noting:

1. The test goal of the maven-surefire-plugin is executed in the integration-test phase; at this point, Jetty is already started with the project deployed, so the integration tests should run with no problems.

2. The integration tests are now included in the execution. In order to achieve this, the exclusions are also overridden – this is because of the way Maven handles overriding plugin configurations inside profiles. The base configuration is not completely overridden, but rather augmented with new configuration elements inside the profile. Because of this, the original <excludes> configuration, which excluded the integration tests in the first place, is still present in the profile, and needs to be overridden, or it would conflict with the <includes> configuration and the tests would still not run.

3. Since there is only a single <execution> element, there is no need for an id to be defined.

Now, the entire process can run:

mvn clean install -Pintegration

Conclusion

The step-by-step configuration of Maven covers the entire process of setting up integration testing as part of the project lifecycle. Usually this is set up to run in a Continuous Integration environment, preferably after each commit.
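For reference, the fragments discussed above assemble into a single profile section of the pom.xml roughly as follows. This is a sketch only: the jetty7x container and goal bindings are carried over from the snippets, while everything else about the project's pom.xml (coordinates, other plugins, versions) is assumed.

```xml
<profiles>
   <profile>
      <id>integration</id>
      <build>
         <plugins>
            <plugin>
               <groupId>org.codehaus.cargo</groupId>
               <artifactId>cargo-maven2-plugin</artifactId>
               <configuration>
                  <wait>false</wait>
               </configuration>
               <executions>
                  <execution>
                     <id>start-server</id>
                     <phase>pre-integration-test</phase>
                     <goals><goal>start</goal></goals>
                  </execution>
                  <execution>
                     <id>stop-server</id>
                     <phase>post-integration-test</phase>
                     <goals><goal>stop</goal></goals>
                  </execution>
               </executions>
            </plugin>
            <plugin>
               <groupId>org.apache.maven.plugins</groupId>
               <artifactId>maven-surefire-plugin</artifactId>
               <executions>
                  <execution>
                     <phase>integration-test</phase>
                     <goals><goal>test</goal></goals>
                     <configuration>
                        <excludes>
                           <exclude>none</exclude>
                        </excludes>
                        <includes>
                           <include>**/*IntegrationTest.java</include>
                        </includes>
                     </configuration>
                  </execution>
               </executions>
            </plugin>
         </plugins>
      </build>
   </profile>
</profiles>
```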
If the CI server already has a server running and consuming ports, then the cargo configuration will have to deal with that scenario, which I will cover in a future post. Reference: How to set up Integration Testing with the Maven Cargo plugin from our JCG partner Eugen Paraschiv at the baeldung blog....

Computer Science Education in High Demand

The growing need for qualified computer programmers and the availability of free, online education programs are what inspired today’s post by Olivia Leonardi. She is looking to add to a discussion on Java Code Geeks that laid out 27 things every programmer needs to know by discussing ways people can become web professionals, something necessary before they can hold those tenets dear. Leonardi is a writer and researcher for a website offering a wealth of information about entering the computer science field, including jobs and careers for those who have completed computer science programs. Despite the growing importance of computer programming skills both in the working world and in everyday life, most schools fail to even cover the basics or broach the subject. But because there has been a longstanding will, there are now many ways. For many eager learners, this gap in classroom material and formal instruction has been filled by online programs that are rapidly gaining esteem in the professional world. This is necessary, as the job outlook for recent graduates is bleak. In a recent paper, Northwestern University economist Robert J. Gordon asserts that the US should prepare for “an extended period of slowing growth, with economic expansion getting ever more sluggish and the bottom 99% getting the short end of the stick”. However, despite stubbornly high unemployment and low rates of growth, the long-term job prospects for computer science related fields remain remarkably strong. The Bureau of Labor Statistics predicts a 28% growth rate from 2010 to 2020, well above average for most other industries, excluding some healthcare fields. In a 2011 Forbes poll to determine which master’s degree programs would provide the best long-term opportunities, computer science tied physician assistant studies for the advanced degree with the best job prospects. 
While many other fields see half of their graduates facing unemployment, computer science programs can prepare graduates for competitive offers from growing companies. Computer science is also a very lucrative field of study, as mid-career median pay is, on average, $109,000 – almost twice the national average salary of $41,000 annually. “We’re in the midst of a technology wave, and computer scientists are so highly valued,” says Al Lee, director of quantitative analysis at Payscale. “As long as people and businesses use technology, computer science degree-holders will be in demand,” Lee adds. Yet, despite the growing need in businesses around the globe, most students still never receive comprehensive computer literacy training. In the UK, where the number of people studying programming has fallen by a third in the past four years, Prime Minister David Cameron recently admitted that the government is not doing enough to teach the next generation about computer science. Google executive chairman Eric Schmidt also recently pointed out that while students learn how to use software, they are not taught how software is made. The need for programmers has led many hopeful programmers to seek out a number of rapidly growing independent resources. Dream in Code, for example, is a site that allows users to learn the fundamental elements of programming by browsing its content, and those serious about learning can sign up to become permanent members. The expansive content of Dream in Code covers almost every programming language and is backed by a broad base of expert users. W3Schools also offers an exhaustive scope of information on web technologies, including tutorials for simple HTML through complex AJAX and Server Side Scripting. Computer programmers at every level can learn or brush up on skills at Google Code University as well. 
Offering courses on AJAX programming, algorithms, distributed systems, web security and languages, as well as novice guides on Linux, databases and SQL, GCU includes relevant material for any programmer. Each course consists of simple tutorials that cover basic steps, as well as video lectures from university professors and professionals that are licensed under Creative Commons, meaning anyone can use the material or feature it in their own classes. Some traditional schools have already noted the trend toward free or cheap online community resources for computer literacy and begun to offer university materials through similar platforms. MIT and UC Berkeley, among others, have pioneered EdX, which hosts most of their content online and free of charge. The shift towards online resources for computer education is allowing many who would otherwise never have the opportunity to acquire first-rate skills in a field that will be among the most marketable for years to come. In the coming years, as computers and technology are only expected to become further ingrained into our lives, these resources will allow ambitious and focused students to lead the way. Reference: Computer Science Education in High Demand from our W4G partner Olivia Leonardi....

Pragmatic Thinking and Learning – how to think consciously about thinking and learning

Firstly, I think every programmer should read this book; even more, anyone whose career requires constantly learning new things and skills of effective thinking and problem solving should read this book as well. Why? Because in this publication the author really carefully gathered the available scientific knowledge about how our brain works, how it processes information and how it stores new knowledge. More importantly, he described how we can change our behavior to make the process of learning and problem solving the most effective. And I don’t think I need to convince anyone that effective learning and thinking is very useful (if not necessary) in our work. The book itself covers various areas of how our brain works and how we learn and solve problems. A list of the most interesting and useful topics included in this title follows.

Dreyfus skill model

Skill level, according to Dreyfus, is divided into five levels:
– Novice: you need step by step instructions (that new Hello World application using a new language/web framework you have done recently?) to get things done. You have problems with troubleshooting or doing something that is not described in the “recipe”.
– Advanced Beginner: you can do something on your own, add a component which is not described in the tutorial, etc., but it is still difficult for you to solve problems.
– Competent: you can troubleshoot, and you can do a lot on your own without tutorials and detailed instructions. Your actions are based on past experience.
– Proficient: you are able to understand the big picture, the concept around which a framework/library was built. You can also apply your knowledge to a similar context.
– Expert: you simply, intuitively know the answer and (what can be most surprising) sometimes can’t explain why you chose one way and not the other.

The majority of people in most areas are in one of the first three levels. Proficiency and being an expert require a lot of learning, trying, failing and deriving knowledge from others’ experience. 
Only about 1 to 5 percent of people are experts in something.

L-Mode and R-Mode

Our mind works in two modes: Linear Mode and Rich Mode. Most of the time we use the first, but sometimes, when we are stuck with a problem, it is good to give Rich Mode some space to start working. We can do it by just going for a walk, taking a shower, mowing the lawn, etc. Any tedious task not requiring full concentration will do the job. When our mind isn’t busy with constant thinking, it can switch to Rich Mode and surprisingly deliver the answer while we weren’t (consciously) thinking about the problem. I guess that is why I liked one of my previous jobs, where my desk was quite far from the toilets, so I needed to have a small walk a few times a day.

Write Drunk, Revise Sober

Don’t strive for perfection; try to unleash your creativity. Not aiming to be 100% accurate and perfect matters most when doing first drafts, sketching a prototype and researching new areas. Just relax and allow yourself to be creative. Don’t care about some inaccuracies and minor errors. They will be taken into account later; now it’s time to create something new.

Target SMART Objectives

It is very important to pick proper objectives in your life. We have all heard about those New Year’s resolutions which are abandoned by the end of January. Their problem is that they are not well-thought-out objectives. To make our life better and our objectives easier to achieve, we should follow these five rules. An objective should be:
– Specific: the more detailed it is, the better. Instead of “I will lose weight” you should say “I will lose 10kg”.
– Measurable: connected with the first one. If an objective is specific, it will also be easy to measure. 10 kilos are easy to check. So you should try to define your goals so they are easy to measure.
– Achievable. 
It’s not the best idea to say “I will learn Scala in one week”, because it is not specific, hard to measure and, most importantly, very hard or even impossible to achieve in such a short amount of time. Instead you should say “I will learn how to create a console calculator in one week using Scala”. And that is definitely doable.
– Relevant: if you don’t like Microsoft and all its products, picking C# as a new language to learn won’t work. You should care about your objective. The warmer the feelings you have about your objective, the better. You will find it impossible to motivate yourself if you hate/don’t like what you are trying to achieve. In our example, you should pick a language you like or have positive associations with – maybe it is Scala (the next Java), maybe Kotlin (because it is also the name of a ketchup producer in Poland).
– Time-Boxed: you should always include information about the deadline in your objective. “I will pass this certificate” seems OK, but when? In three months or in five years? And five years makes this objective almost useless.

Read Deliberately – SQ3R

When reading a book about a subject you want to learn, try to follow the SQ3R rule:
– Survey: scan the table of contents and chapter summaries to get a rough idea of what the book is about and what knowledge it contains
– Question: write down questions that come to your mind after scanning
– Read: read the book entirely
– Recite: summarize, take some notes using your own words and your understanding of the subject
– Review: reread, update your notes, join or start a discussion with someone about what you’ve learnt/read

A similar reading technique called PQ RAR is described here.

Manage your knowledge

You should have a place to gather your knowledge and things you think might be useful in the future. It can be a personal wiki, or notes written in Evernote or Springpad. Another important thing: every time you have an idea, you should be able to write it down, using either a classic paper notepad or an application on your mobile phone. 
You should choose something you can easily take with you everywhere, or almost everywhere.

Some valuable quotes

And to finish, some quotes from the book I found really intriguing:
– You got to be careful if you don’t know where you’re going, because you might not get there.
– Time can’t be created or destroyed, only allocated.
– Give yourself permission to fail; it’s the path to success.
– Inaction is the enemy, not error.
– Remember, the danger doesn’t lie in doing something wrong; it lies in doing nothing at all. Don’t be afraid to make mistakes.

Summary

As I wrote in the beginning, this book is really, really worth reading if you want to squeeze more out of your brain. It will help you optimize your learning and thinking processes so you can be more effective at your day-to-day work without spending more on hardware, software or more comfortable furniture. And what about you? Do you have your own special tricks to learn faster or solve problems more easily? If yes, please share them in the comments. Reference: Pragmatic Thinking and Learning – how to think consciously about thinking and learning from our JCG partner Tomasz Dziurko at the Code Hard Go Pro blog....

A Generic and Concurrent Object Pool

In this post we will take a look at how we can create an object pool in Java. In recent years, the performance of the JVM has improved so much that object pooling for better performance has been made almost redundant for most types of objects. In essence, the creation of objects is no longer considered as expensive as it once was. However, there are some kinds of objects that are certainly costly to create. Objects such as threads, database connection objects, etc. are not lightweight and are somewhat more expensive to create. In any application we require the use of multiple objects of the above kind. So it would be great if there was a very easy way to create and maintain an object pool of that type, so that objects can be dynamically used and reused, without the client code being bothered about the life cycle of the objects. Before actually writing the code for an object pool, let us first identify the main requirements that any object pool must answer:

– The pool must let clients use an object if any is available.
– It must reuse the objects once they are returned to the pool by a client.
– If required, it must be able to create more objects to satisfy growing demands of the clients.
– It must provide a proper shutdown mechanism, such that on shutdown no memory leaks occur.

Needless to say, the above points will form the basis of the interface that we will expose to our clients. So our interface declaration will be as follows:

package com.test.pool;

/**
 * Represents a cached pool of objects.
 *
 * @author Swaranga
 *
 * @param <T> the type of object to pool.
 */
public interface Pool<T> {
    /**
     * Returns an instance from the pool.
     * The call may be a blocking one or a non-blocking one
     * and that is determined by the internal implementation.
     *
     * If the call is a blocking call,
     * the call returns immediately with a valid object
     * if available, else the thread is made to wait
     * until an object becomes available.
     * In case of a blocking call,
     * it is advised that clients react
     * to {@link InterruptedException} which might be thrown
     * when the thread waits for an object to become available.
     *
     * If the call is a non-blocking one,
     * the call returns immediately irrespective of
     * whether an object is available or not.
     * If any object is available the call returns it
     * else the call returns <code>null</code>.
     *
     * The validity of the objects are determined using the
     * {@link Validator} interface, such that
     * an object <code>o</code> is valid if
     * <code>Validator.isValid(o) == true</code>.
     *
     * @return T one of the pooled objects.
     */
    T get();

    /**
     * Releases the object and puts it back to the pool.
     *
     * The mechanism of putting the object back to the pool is
     * generally asynchronous,
     * however future implementations might differ.
     *
     * @param t the object to return to the pool
     */
    void release(T t);

    /**
     * Shuts down the pool. In essence this call will not
     * accept any more requests
     * and will release all resources.
     * Releasing resources is done
     * via the <code>invalidate()</code>
     * method of the {@link Validator} interface.
     */
    void shutdown();
}

The above interface is intentionally made very simple and generic to support any type of objects. It provides methods to get/return an object from/to the pool. It also provides a shutdown mechanism to dispose of the objects. Now we try to create an implementation of the above interface. But before doing that, it is important to note that an ideal release() method will first try to check if the object returned by the client is still reusable. If yes, then it will return it to the pool; else the object has to be discarded. We want every implementation of the Pool interface to follow this rule. So before creating a concrete implementation, we create an abstract implementation that imposes this restriction on subsequent implementations. 
Our abstract implementation will be called, surprise, AbstractPool and its definition will be as follows:

package com.test.pool;

/**
 * Represents an abstract pool, that defines the procedure
 * of returning an object to the pool.
 *
 * @author Swaranga
 *
 * @param <T> the type of pooled objects.
 */
abstract class AbstractPool<T> implements Pool<T> {
    /**
     * Returns the object to the pool.
     * The method first validates the object if it is
     * re-usable and then returns it to the pool.
     *
     * If the object validation fails,
     * some implementations
     * will try to create a new one
     * and put it into the pool; however
     * this behaviour is subject to change
     * from implementation to implementation
     */
    @Override
    public final void release(T t) {
        if (isValid(t)) {
            returnToPool(t);
        } else {
            handleInvalidReturn(t);
        }
    }

    protected abstract void handleInvalidReturn(T t);

    protected abstract void returnToPool(T t);

    protected abstract boolean isValid(T t);
}

In the above class, we have made it mandatory for object pools to validate an object before returning it to the pool. To customize the behaviour of their pools, the implementations are free to choose the way they implement the three abstract methods. They will decide, using their own logic, how to check if an object is valid for reuse [the isValid() method], what to do if the object returned by a client is not valid [the handleInvalidReturn() method] and the actual logic to return a valid object to the pool [the returnToPool() method]. Now, having the above set of classes, we are almost ready for a concrete implementation. But the catch is that since the above classes are designed to support generic object pools, a generic implementation of them will not know how to validate an object [since the objects will be generic :-)]. Hence we need something else that will help us in this. 
What we actually need is a common way to validate an object, so that the concrete Pool implementations will not have to bother about the type of objects being validated. So we introduce a new interface, Validator, that defines methods to validate an object. Our definition of the Validator interface will be as follows:

package com.test.pool;

/**
 * Represents the functionality to
 * validate an object of the pool
 * and to subsequently perform cleanup activities.
 *
 * @author Swaranga
 *
 * @param <T> the type of objects to validate and cleanup.
 */
public interface Validator<T> {
    /**
     * Checks whether the object is valid.
     *
     * @param t the object to check.
     *
     * @return true
     * if the object is valid else false.
     */
    public boolean isValid(T t);

    /**
     * Performs any cleanup activities
     * before discarding the object.
     * For example before discarding
     * database connection objects,
     * the pool will want to close the connections.
     * This is done via the
     * invalidate() method.
     *
     * @param t the object to cleanup
     */
    public void invalidate(T t);
}

The above interface defines a method to check if an object is valid and also a method to invalidate an object. The invalidate method should be used when we want to discard an object and clear up any memory used by that instance. Note that this interface has little significance by itself and makes sense only when used in the context of an object pool. So we define this interface inside the top level Pool interface. This is analogous to the Map and Map.Entry interfaces in the Java Collections Library. Hence our Pool interface becomes as follows:

package com.test.pool;

/**
 * Represents a cached pool of objects.
 *
 * @author Swaranga
 *
 * @param <T> the type of object to pool.
 */
public interface Pool<T> {
    /**
     * Returns an instance from the pool.
     * The call may be a blocking one or a non-blocking one
     * and that is determined by the internal implementation.
     *
     * If the call is a blocking call,
     * the call returns immediately with a valid object
     * if available, else the thread is made to wait
     * until an object becomes available.
     * In case of a blocking call,
     * it is advised that clients react
     * to {@link InterruptedException} which might be thrown
     * when the thread waits for an object to become available.
     *
     * If the call is a non-blocking one,
     * the call returns immediately irrespective of
     * whether an object is available or not.
     * If any object is available the call returns it
     * else the call returns <code>null</code>.
     *
     * The validity of the objects are determined using the
     * {@link Validator} interface, such that
     * an object <code>o</code> is valid if
     * <code>Validator.isValid(o) == true</code>.
     *
     * @return T one of the pooled objects.
     */
    T get();

    /**
     * Releases the object and puts it back to the pool.
     *
     * The mechanism of putting the object back to the pool is
     * generally asynchronous,
     * however future implementations might differ.
     *
     * @param t the object to return to the pool
     */
    void release(T t);

    /**
     * Shuts down the pool. In essence this call will not
     * accept any more requests
     * and will release all resources.
     * Releasing resources is done
     * via the <code>invalidate()</code>
     * method of the {@link Validator} interface.
     */
    void shutdown();

    /**
     * Represents the functionality to
     * validate an object of the pool
     * and to subsequently perform cleanup activities.
     *
     * @author Swaranga
     *
     * @param <T> the type of objects to validate and cleanup.
     */
    public static interface Validator<T> {
        /**
         * Checks whether the object is valid.
         *
         * @param t the object to check.
         *
         * @return true
         * if the object is valid else false.
         */
        public boolean isValid(T t);

        /**
         * Performs any cleanup activities
         * before discarding the object.
         * For example before discarding
         * database connection objects,
         * the pool will want to close the connections.
         * This is done via the
         * invalidate() method.
         *
         * @param t the object to cleanup
         */
        public void invalidate(T t);
    }
}

We are almost ready for a concrete implementation. But before that, we need one final weapon, which is actually the most important weapon of an object pool: the ability to create new objects. Since our object pools will be generic, they must have knowledge of how to create new objects to populate the pool. This functionality must also not depend on the type of the object pool and must be a common way to create new objects. The way to do this will be an interface, called ObjectFactory, that defines just one method: how to create a new object. Our ObjectFactory interface is as follows:

package com.test.pool;

/**
 * Represents the mechanism to create
 * new objects to be used in an object pool.
 *
 * @author Swaranga
 *
 * @param <T> the type of object to create.
 */
public interface ObjectFactory<T> {
    /**
     * Returns a new instance of an object of type T.
     *
     * @return T a new instance of the object of type T
     */
    public abstract T createNew();
}

We are finally done with our helper classes and now we will create a concrete implementation of the Pool interface. Since we want a pool that can be used in concurrent applications, we will create a blocking pool that blocks the client if no objects are available in the pool. The blocking mechanism will block indefinitely until an object becomes available. This kind of implementation suggests that another method be provided which will block only for a given time-out period: if any object becomes available before the timeout, that object is returned; otherwise, after the timeout, instead of waiting forever, null is returned. This is analogous to the LinkedBlockingQueue implementation of the Java Concurrency API, and thus before implementing the actual class we expose another interface, BlockingPool, which is analogous to the BlockingQueue interface of the Java Concurrency API. 
Hence the Blockingpool interface declaration is as follows: package com.test.pool;import java.util.concurrent.TimeUnit;/** * Represents a pool of objects that makes the * requesting threads wait if no object is available. * * @author Swaranga * * @param < T > the type of objects to pool. */ public interface BlockingPool < T > extends Pool < T > { /** * Returns an instance of type T from the pool. * * The call is a blocking call, * and client threads are made to wait * indefinitely until an object is available. * The call implements a fairness algorithm * that ensures that a FCFS service is implemented. * * Clients are advised to react to InterruptedException. * If the thread is interrupted while waiting * for an object to become available, * the current implementations * sets the interrupted state of the thread * to true and returns null. * However this is subject to change * from implementation to implementation. * * @return T an instance of the Object * of type T from the pool. */ T get(); /** * Returns an instance of type T from the pool, * waiting up to the * specified wait time if necessary * for an object to become available.. * * The call is a blocking call, * and client threads are made to wait * for time until an object is available * or until the timeout occurs. * The call implements a fairness algorithm * that ensures that a FCFS service is implemented. * * Clients are advised to react to InterruptedException. * If the thread is interrupted while waiting * for an object to become available, * the current implementations * set the interrupted state of the thread * to true and returns null. * However this is subject to change * from implementation to implementation. * * * @param time amount of time to wait before giving up, * in units of unit * @param unit a TimeUnit determining * how to interpret the * timeout parameter * * @return T an instance of the Object * of type T from the pool. 
* * @throws InterruptedException * if interrupted while waiting */ T get(long time, TimeUnit unit) throws InterruptedException; } And our BoundedBlockingPool implementation will be as follows: package com.test.pool;import java.util.concurrent.BlockingQueue; import java.util.concurrent.Callable; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import java.util.concurrent.LinkedBlockingQueue; import java.util.concurrent.TimeUnit;public final class BoundedBlockingPool < T > extends AbstractPool < T > implements BlockingPool < T > { private int size; private BlockingQueue < T > objects; private Validator < T > validator; private ObjectFactory < T > objectFactory; private ExecutorService executor = Executors.newCachedThreadPool(); private volatile boolean shutdownCalled; public BoundedBlockingPool( int size, Validator < T > validator, ObjectFactory < T > objectFactory) { super(); this.objectFactory = objectFactory; this.size = size; this.validator = validator; objects = new LinkedBlockingQueue < T >(size); initializeObjects(); shutdownCalled = false; } public T get(long timeOut, TimeUnit unit) { if(!shutdownCalled) { T t = null; try { t = objects.poll(timeOut, unit); return t; } catch(InterruptedException ie) { Thread.currentThread().interrupt(); } return t; } throw new IllegalStateException( 'Object pool is already shutdown'); } public T get() { if(!shutdownCalled) { T t = null; try { t = objects.take(); } catch(InterruptedException ie) { Thread.currentThread().interrupt(); } return t; } throw new IllegalStateException( 'Object pool is already shutdown'); } public void shutdown() { shutdownCalled = true; executor.shutdownNow(); clearResources(); } private void clearResources() { for(T t : objects) { validator.invalidate(t); } } @Override protected void returnToPool(T t) { if(validator.isValid(t)) { executor.submit(new ObjectReturner(objects, t)); } } @Override protected void handleInvalidReturn(T t) { } @Override protected boolean 
isValid(T t) { return validator.isValid(t); } private void initializeObjects() { for(int i = 0; i < size; i++) { objects.add(objectFactory.createNew()); } } private class ObjectReturner < E > implements Callable < Void > { private BlockingQueue < E > queue; private E e; public ObjectReturner(BlockingQueue < E > queue, E e) { this.queue = queue; this.e = e; } public Void call() { while(true) { try { queue.put(e); break; } catch(InterruptedException ie) { Thread.currentThread().interrupt(); } } return null; } } } The above is a very basic object pool backed internally by a LinkedBlockingQueue. The only method of interest is the returnToPool() method. Since the internal storage is a blocking queue, if we tried to put the returned element directly into the LinkedBlockingQueue, it might block the client if the queue is full. But we do not want a client of an object pool to block just for a mundane task like returning an object to the pool. So we have made the actual task of inserting the object into the LinkedBlockingQueue an asynchronous task and submitted it to an Executor instance so that the client thread can return immediately. Now we will use the above object pool in our code. We will use the object pool to pool some database connection objects. Hence we will need a Validator to validate our database connection objects. 
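Why the returned object is handed to an executor becomes obvious once you watch put block on a full queue. Below is a standalone sketch of the same idea (the class and method names are illustrative, not part of the pool classes above):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class AsyncReturnSketch {

    /** Returns immediately; the (possibly blocking) put happens on the executor's thread. */
    static void returnAsync(LinkedBlockingQueue<String> queue, String obj, ExecutorService executor) {
        executor.submit(() -> {
            try {
                queue.put(obj); // may block until the queue has room -- but not the caller
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
        });
    }

    public static void main(String[] args) throws InterruptedException {
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>(1);
        queue.put("pooled-object"); // queue is now at capacity

        ExecutorService executor = Executors.newSingleThreadExecutor();
        returnAsync(queue, "returned-object", executor); // does not block, even though the queue is full
        System.out.println("client thread carried on immediately");

        // A consumer taking the head frees space, letting the pending put complete.
        System.out.println(queue.take()); // pooled-object
        System.out.println(queue.take()); // returned-object
        executor.shutdown();
    }
}
```

A direct queue.put in returnAsync would stall the client whenever the queue was full; routing it through the executor is exactly the trade-off ObjectReturner makes.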
Our JDBCConnectionValidator will look like this: package com.test;import java.sql.Connection; import java.sql.SQLException;import com.test.pool.Pool.Validator;public final class JDBCConnectionValidator implements Validator < Connection > { public boolean isValid(Connection con) { if(con == null) { return false; } try { return !con.isClosed(); } catch(SQLException se) { return false; } } public void invalidate(Connection con) { try { con.close(); } catch(SQLException se) { } } } And our JDBCConnectionFactory, which will enable the object pool to create new objects, will be as follows: package com.test;import java.sql.Connection; import java.sql.DriverManager; import java.sql.SQLException;import com.test.pool.ObjectFactory;public class JDBCConnectionFactory implements ObjectFactory < Connection > { private String connectionURL; private String userName; private String password; public JDBCConnectionFactory( String driver, String connectionURL, String userName, String password) { super(); try { Class.forName(driver); } catch(ClassNotFoundException ce) { throw new IllegalArgumentException( "Unable to find driver in classpath", ce); } this.connectionURL = connectionURL; this.userName = userName; this.password = password; } public Connection createNew() { try { return DriverManager.getConnection( connectionURL, userName, password); } catch(SQLException se) { throw new IllegalArgumentException( "Unable to create new connection", se); } } } Now we create a JDBC object pool using the above Validator and ObjectFactory: package com.test; import java.sql.Connection;import com.test.pool.Pool; import com.test.pool.BoundedBlockingPool;public class Main { public static void main(String[] args) { Pool < Connection > pool = new BoundedBlockingPool < Connection > ( 10, new JDBCConnectionValidator(), new JDBCConnectionFactory("", "", "", "") ); //do whatever you like } } As a bonus for reading the entire post. 
I will provide another implementation of the Pool interface that is essentially a non-blocking object pool. The only difference between this implementation and the previous one is that it does not block the client if an element is unavailable; rather, it returns null. Here it goes: package com.test.pool;import java.util.LinkedList; import java.util.Queue; import java.util.concurrent.Semaphore;public class BoundedPool < T > extends AbstractPool < T > { private int size; private Queue < T > objects; private Validator < T > validator; private ObjectFactory < T > objectFactory; private Semaphore permits; private volatile boolean shutdownCalled; public BoundedPool( int size, Validator < T > validator, ObjectFactory < T > objectFactory) { super(); this.objectFactory = objectFactory; this.size = size; this.validator = validator; objects = new LinkedList < T >(); permits = new Semaphore(size); initializeObjects(); shutdownCalled = false; } @Override public T get() { T t = null; if(!shutdownCalled) { if(permits.tryAcquire()) { t = objects.poll(); } } else { throw new IllegalStateException( "Object pool already shutdown"); } return t; }@Override public void shutdown() { shutdownCalled = true; clearResources(); } private void clearResources() { for(T t : objects) { validator.invalidate(t); } }@Override protected void returnToPool(T t) { boolean added = objects.add(t); if(added) { permits.release(); } } @Override protected void handleInvalidReturn(T t) { }@Override protected boolean isValid(T t) { return validator.isValid(t); } private void initializeObjects() { for(int i = 0; i < size; i++) { objects.add(objectFactory.createNew()); } } }Considering we are now two implementations strong, it is better to let users create our pools via a factory with meaningful names. Here is the factory: package com.test.pool;import com.test.pool.Pool.Validator;/** * Factory and utility methods for * {@link Pool} and {@link BlockingPool} classes * defined in this package. 
* This class supports the following kinds of methods: * * * Method that creates and returns a default non-blocking * implementation of the {@link Pool} interface. * * * Method that creates and returns a * default implementation of * the {@link BlockingPool} interface. * * * * @author Swaranga */public final class PoolFactory { private PoolFactory() { } /** * Creates and returns a new object pool * that is an implementation of the {@link BlockingPool}, * whose size is limited by * the size parameter. * * @param size the number of objects in the pool. * @param factory the factory to create new objects. * @param validator the validator to * validate the re-usability of returned objects. * * @return a blocking object pool * bounded by size */ public static < T > Pool < T > newBoundedBlockingPool( int size, ObjectFactory < T > factory, Validator < T > validator) { return new BoundedBlockingPool < T > ( size, validator, factory); } /** * Creates and returns a new object pool * that is an implementation of the {@link Pool} * whose size is limited * by the size parameter. * * @param size the number of objects in the pool. * @param factory the factory to create new objects. * @param validator the validator to validate * the re-usability of returned objects. * * @return an object pool bounded by size */ public static < T > Pool < T > newBoundedNonBlockingPool( int size, ObjectFactory < T > factory, Validator < T > validator) { return new BoundedPool < T >(size, validator, factory); } } Thus our clients can now create object pools in a more readable manner: package com.test; import java.sql.Connection;import com.test.pool.Pool; import com.test.pool.PoolFactory;public class Main { public static void main(String[] args) { Pool < Connection > pool = PoolFactory.newBoundedBlockingPool( 10, new JDBCConnectionFactory("", "", "", ""), new JDBCConnectionValidator()); //do whatever you like } } And so ends our long post. This one was long overdue. 
Feel free to use it, change it, add more implementations. Happy coding and don’t forget to share! Reference: A Generic and Concurrent Object Pool from our JCG partner Sarma Swaranga at The Java HotSpot blog....
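One footnote on the non-blocking BoundedPool above: its heart is the Semaphore-guarded poll, where tryAcquire fails immediately instead of waiting. That pairing can be exercised in isolation (a standalone sketch with illustrative names, not the pool class itself):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Semaphore;

public class SemaphoreGateSketch {

    /** Non-blocking get: returns null immediately if no permit is free. */
    static String tryGet(Semaphore permits, Queue<String> objects) {
        return permits.tryAcquire() ? objects.poll() : null;
    }

    /** Return: add the object back, then release the permit. */
    static void giveBack(Semaphore permits, Queue<String> objects, String obj) {
        if (objects.add(obj)) {
            permits.release();
        }
    }

    public static void main(String[] args) {
        Queue<String> objects = new ConcurrentLinkedQueue<>();
        objects.add("only-object");
        Semaphore permits = new Semaphore(1); // one permit per pooled object

        System.out.println(tryGet(permits, objects)); // only-object
        System.out.println(tryGet(permits, objects)); // null -- no blocking, no waiting

        giveBack(permits, objects, "only-object");
        System.out.println(tryGet(permits, objects)); // only-object again
    }
}
```

The semaphore's permit count tracks how many objects are available, so a failed tryAcquire never touches the queue at all.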
software-development-2-logo

Don’t Prioritize Features!

Estimating the “value” of features is a waste of time. I was in a JAD session once where people argued about whether the annoying beeping (audible on the conference line) was a smoke alarm or a fire alarm. Yes, you can get to an answer, but so what?! The important thing is to solve the problem. Solutions Versus Features Everyone on that conference call had an immediate and visceral appreciation of the value of making the beeping stop. That’s the power of solving a problem. The methods of solving the problem – mute the offender, replace the battery, throw the alarm out the window – do not have implicit value. They have an indirect value, in an “end justifies the means” kind of way. But not direct value. The same sort of thing applies when talking about prioritizing features. Eric Krock (@voximate) just wrote a really good article, Per-Feature ROI Is (Usually) a Stupid Waste of Time, where he does two great things, and (barely) misses an opportunity for a hat trick. The first great thing Eric did was look at the challenges of determining relative (ordinal or cardinal) value of “several things.” He points out several real-world challenges:When you have a product with several things already and you want to determine the value of yet another thing – how do you allocate a portion of future revenue to the new thing versus the things you already have? When thing A and thing B have to be delivered together, to realize value, how do you prioritize things A & B? Relative to each other? The opportunity cost of having your product manager do a valuation exercise on a bunch of things is high. She could be doing more valuable things. You won’t perform a retrospective on the accuracy of your valuation. So you won’t know if it was a waste of time, and you won’t get better at future exercises.The second great thing Eric did was reference a Tyner Blain article from early 2007 on measuring the costs of features. 
I mean “great” on three levels.As a joke (for folks who don’t know me, figured I’d mention that I’m kidding, just in case you get the wrong idea). There is some good stuff in that earlier costing article about allocation of fixed and variable costs (with a handy reminder). Eric’s article gives me an opportunity to shudder at the language I was using in 2007, see how much some of my thinking has evolved in four years, and improve a bit of it here and now.What Eric slightly missed is the same thing I completely missed in 2007 – features don’t have inherent value. Solutions to problems do have value. He only slightly missed it because he got the problem manifestation right – it takes a lot of effort, for little reward, to spend time thinking about what features are worth. I also missed the opportunity in an article looking at utility curves as an approach to estimating benefits, written two days after the one on cost allocation. We were both so close! People don’t buy features. They buy solutions. Valuing Solutions Instead of Features Estimating the value of solutions addresses a lot of the real problems that Eric calls out. It also has a side benefit of keeping your perspective outside-in versus inside-out. Or as others often say, it keeps you “market driven.” Anything that you’re doing, as a product manager, that has you focused on understanding your market and your customers and their problems is a good thing. It may even be the most important thing. I would contend that it eliminates objection 3 – the opportunity cost of estimating the value of solutions is minimal or zero. There may be activities with more urgency, but off the top of my head, none that are more important, for a product manager. Comment if I’m missing something (it’s late and I just got home from another week on the road). 
The way I approach determining the value of a solution is by developing a point of view about how much incremental profit I will get when my product starts solving this additional problem. Revenue can increase from additional sales, or from the ability to increase prices. Cost can increase if new marketing and other operations (launches, PR campaigns, etc) are required to realize the incremental revenue. I start with a customer-centric market model. A given solution, or improved solution (as in “solves the problem better,” or “solves more of the problem”) – which only applies to some problems – is interesting to some customers, in some market segments. A solution has value when it brings in incremental customers, in a targeted market segment. It also has value when it reduces or prevents erosion of your current customer base (in a SaaS or maintenance-revenue model) to competitive solutions. The time you spend thinking about buyer and user personas, the problems they care about, and the nature of those problems (which varies by persona) is not time wasted – or even spent “at the cost of doing something else.” To make this useful, you have to have a forecast – without solution A, we will sell X; with solution A we will sell Y (and to whom). A good product manager will be looking at sales, and will be able to reconcile the sales with the projections. That helps with objection 4 (but doesn’t completely address it – you don’t know if your projections were accurate, so you can’t really know if your estimation is accurate). This also helps you deal with challenge #1. You’ve got a model that says “the current product works great for high school students, but not college students, because they also have problem A, which they solve today by…” Your intention is to create solution A, making your product viable to college students. Allocate the incremental profits from college-student sales to solution A. 
My approach to challenge #2 is a little more tactical.Coupled Solutions There are a couple of ways that Eric’s “must deliver A and B” scenario is interesting when looking at the value of solutions. Scenario 1: Solution A solves part of problem X for persona M. Solution B solves part of problem X for persona M. Combined, they solve more of problem X for persona M. This makes sense for “more is better” problems – where “more” solution yields “more” value. In this case, I have a forecast (the more time I spend on it, the better it will be) that maps incremental sales to improved solutions. The “first” solution to be released will have more value than the second. If they are being released together, then I don’t care about the allocation – I combine them. Scenario 2: If, however, the two solutions are valuable to different personas, then I treat them separately – even if they solve “the same problem,” it is not the same problem (for the same person).Conclusion Prioritization by “Bang For the Buck” is worth doing.Just make sure you are prioritizing solutions, not features. Also note: this article talked about valuation – what you do with that valuation, prioritizing by market, can be trickier. Reference: Don’t Prioritize Features! from our JCG partner Scott Sehlhorst at the Business Analysis | Product Management | Software Requirements blog....
software-development-2-logo

Software Developers Hate Worthless Tasks

Most software developers that I know, especially the best ones, loathe worthless tasks. This is probably true of most people who strive to do what they do to the best of their ability, but I’m not aware of any area in which this attitude is more prevalent than in software development. The best software developers are passionate about what they do because they build things, they create things, and they make what they imagine become reality. On the other side, however, developers can quickly become disillusioned and lose productivity when faced with tasks they perceive as unimportant or ‘busy work.’ Finding Value in One’s Work Over the years, I’ve seen some accomplished software developers leave their positions to pursue managerial positions or even completely different careers. Because this is often different than what I see from the ‘typical’ software developer, I often ask what led to the change. In some cases, it’s as simple as responding to pressure to take a management role to ‘justify’ their higher salary. In other cases, it’s developers who are tired of learning new things at the pace often required in software development. The most common reasons that I hear from these folks have to do more with boredom or loss of interest in the work itself. These are typically people who are not being sufficiently challenged anymore and often are putting time and effort into something they perceive as having very little or no value. Some of the lowest points in my career in software development have been when a project or task that I’ve put significant time, energy, and creativity into is terminated or significantly reduced in scope. Although I’ve generally received the same monetary remuneration as I would have for a successfully delivered product in getting to that point, the feeling in such cases is more of discouragement than of satisfaction. Although compensated for my time and effort, it still hurts to think that the time and effort have no lasting value. 
Cancelled assignments or tasks are not the only source of disillusionment related to not finding value in one’s work. Working on unnecessary tasks or ‘busy work’ can be almost as difficult on a software developer. There always seems to be plenty of truly useful and contributory things to do and this makes it even more difficult to work on things that seem to have much less or no value.Process One of the biggest perceived enemies of software development productivity from the perspective of many software developers is onerous process. In Process Kills Developer Passion, James Turner writes, ‘the blind application of process best practises across all development is turning what should be a creative process into chartered accountancy with a side of prison.’ Turner makes a point that I’ve been making to anyone who will listen for the last few years: not all developers are equal and they shouldn’t all be treated exactly the same way. Many projects make the mistake of assuming the ‘least common denominator’ and enforcing processes on everyone to accommodate that ‘least common denominator.’ Turner articulates this more colourfully than I do: ‘companies need to start acknowledging that there is a qualitative difference between developers. Making all of them wear the same weighted yokes to ensure the least among them doesn’t screw up is detrimental to overall morale and efficiency of the whole.’ I think most of us who have worked in the industry for some time do realize that a degree of process is justified and even beneficial. The degree depends on the project, the skills and experience of the developers, and the size of the team. There are many benefits to standardization and code conventions. There are similarly well-advertised benefits to unit testing and other quality processes. 
That being stated, the best developers can ascertain which processes fit which situations best and which don’t fit certain situations as well.Meetings I once was told, ‘It takes a very good meeting to beat having no meeting at all.’ This is often very sage advice. However, I have seen situations where a short, well-run meeting provides tremendous benefit. Most meetings waste people’s time, especially if the meeting organizer starts late and ‘fills the time.’ The best meetings start promptly and address only what must be addressed. I’ve worked with people in the same office who will not talk or coordinate unless forced to by a third party and this is often facilitated by a short, informal meeting. Similarly, difficult design decisions and architecture trade-offs can be discussed effectively in meetings. It seems to be the natural tendency of meetings to go long, into the weeds, and become very dissatisfying as they waste developers’ time. Well-run meetings, however, can have the opposite effect: they can help developers have clearer direction and be more efficient in working together as a team. I previously blogged on one tip for effective software development meetings. Taking notes in a meeting, especially in a way such that participants can see them live as they are taken, has numerous advantages. These include getting everyone on the same page at the same time, documenting major decisions for future reference, and having material to send to those who did not make the meeting.Not Every Idea Should Be Implemented Not all ideas are created equal. Developers are often understandably impatient when they are coerced into implementing poor or useless ideas. This can be especially painful when the idea is counterproductive. 
It is difficult to justify to oneself spending time on something that will almost certainly never be used or, even worse, might make the user experience worse.Scripting Tedious Tasks Many developers look for ways to script especially tedious tasks rather than performing the tedious tasks manually even if the time spent writing the script matches the time that would have been spent completing the tasks directly. This is perhaps one of the best examples proving that most developers loathe tedious tasks. There are often many positives to this typical developer reaction to tedious tasks. First, it often turns out that the tasks that we thought we’d only do once need to be implemented again. It may be that the script can be applied to a similar situation or it may be that the script needs to be applied against a new set of input. Second, the act of writing a script provides more value than simply getting a task done; it can lead to improved familiarity with the scripting language and can sometimes provide a nice codification of the problem at hand.Convention over Configuration (Configuration by Exception) A major development in the software development industry in recent years has been the rapid adoption of convention over configuration (configuration by exception) in various languages and frameworks. The idea here is that developers need provide configuration information only when the configuration is different than the configuration provided by default (conventional configuration). This saves developers time and tedious effort to provide configuration details for common tasks.Less Boilerplate Code Another recent trend in software development is the focus on reduced boilerplate code. Convention over configuration has helped with this. Many of the alternative JVM languages tout less boilerplate code as one of their advantages over Java. For example, some Java developers feel that Groovy’s property support is one of its nicest features. 
Even Java has embraced reduced boilerplate code in multiple areas. One of the features that make many libraries and frameworks popular is the ability to write and maintain less boilerplate code thanks to the framework or library. In cases where neither language nor framework has reduced the boilerplate code, IDEs and code-generators have been used successfully to implement boilerplate code with less tedium and with less risk. Writing boilerplate code is not only tedious, it can be easier to make mistakes when it must be implemented by hand.Some Things Are Worth More Than They Appear One mistake I have made multiple times and seen other developers make is to decide that a particular task is useless or of little value. I’m often correct in my identification of worthless or low-value tasks, but on rare occasions I am surprised when a seemingly useless task provides some real value or a tangible benefit. Observing why this is the case helps me refine my ability to differentiate between worthwhile and worthless tasks. Such situations have also reminded me to keep an open mind on the value of new ideas until I’ve thought carefully about the action and its good and bad ramifications. One of the most important things a software development manager can do is to assign worthwhile tasks to developers and to ensure that they understand the value in all assigned tasks.Execution Matters Even an idea with potential for value can lead to no or dramatically reduced value if not implemented correctly. For example, incorrect application of unit tests can dramatically reduce the ratio of value to cost for implementing and using unit tests. Similarly, code reviews and use of code quality tools can provide great value when executed correctly or provide much less value for the cost if executed incorrectly.Conclusion Most of us do better work when we enjoy what we do and when we perceive that what we do has value. 
Worthless or low-value tasks are more likely to be seen as tedious and are more likely to not be done well. Developers will be happier and more motivated when they do not have worthless tasks forced upon them. Don’t forget to share! Reference: Software Developers Hate Worthless Tasks from our JCG partner Dustin Marx at the Inspired by Actual Events blog....
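To make the ‘script the tedious task’ reflex discussed above concrete, here is the kind of hypothetical throwaway script a developer might write instead of renaming a directory of report files by hand (the file names and the class itself are invented for illustration):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class NormalizeExtensions {

    /** Renames every *.TXT file in dir to its lowercase name; returns how many were renamed. */
    static int normalize(Path dir) throws IOException {
        int renamed = 0;
        try (DirectoryStream<Path> files = Files.newDirectoryStream(dir, "*.TXT")) {
            for (Path file : files) {
                String name = file.getFileName().toString();
                // e.g. REPORT_1.TXT -> report_1.txt
                Files.move(file, file.resolveSibling(name.toLowerCase()));
                renamed++;
            }
        }
        return renamed;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Paths.get(args.length > 0 ? args[0] : ".");
        System.out.println(normalize(dir) + " file(s) renamed");
    }
}
```

Even if typing the renames by hand would have been just as fast, the script survives for the next batch of files, which is the point made above about tasks we thought we would only do once.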
java-duke-logo

Make JFrame transparent

First create a frame that has a slider in it which will be used to set the transparency amount. import javax.swing.JFrame; import javax.swing.JSlider;public class TransparentFrame extends JFrame { public TransparentFrame() { setTitle("Transparent Frame"); setSize(400,400); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); JSlider slider = new JSlider(JSlider.HORIZONTAL); add(slider); setVisible(true); } public static void main(String[] args) { new TransparentFrame(); } } Output of this will be:Now add a change listener to the slider so we can monitor it. slider.addChangeListener(new ChangeListener() { @Override public void stateChanged(ChangeEvent e) { } }); Now we will write our transparency logic in this method, but before we do that let’s first see how to make a JFrame transparent. To make a JFrame transparent, Java has a built-in utility class, AWTUtilities. By using methods provided in this class we can make our JFrame transparent. Following is the code for that: AWTUtilities.setWindowOpacity(window, floatOpacity);Arguments: window – your frame/window object. floatOpacity – between 0 and 1: 1 for fully opaque and 0 for fully transparent. So now we know that we have to add this logic to the slider change event and give the slider’s value as the floatOpacity value. So for that, change the stateChanged() method to the following: @Override public void stateChanged(ChangeEvent e) { JSlider slider = (JSlider) e.getSource(); if(!slider.getValueIsAdjusting()){ AWTUtilities.setWindowOpacity(TransparentFrame.this, slider.getValue()); } } Think it’s done? No, we still have to make sure that the opacity value doesn’t go beyond its limits, that is, 0.0f to 1.0f. So for that we have to limit our slider to these values. As sliders don’t support fractional values, we will take values in multiples of 10 and then divide them by 100 to get the desired value. 
For this we will change the JSlider declaration and stateChanged as follows: JSlider slider = new JSlider(JSlider.HORIZONTAL, 10, 100, 100);Change the following line in the stateChanged method: AWTUtilities.setWindowOpacity(TransparentFrame.this, slider.getValue()/100f);So now when we run this program we see a frame with a slider in it which is set to the end. And when we change the slider, the frame changes its transparency accordingly. Output:Note: To use the AWTUtilities class in Eclipse you need to change a preference setting or you may get an error for accessing restricted classes. To change the settings, do as follows:Right click on your project. Select Properties. Select Java Compiler and expand it. Select Errors/Warnings. Enable project specific settings. In Deprecated and Restricted API you will find Forbidden References (access rules). Change it to Warning or Ignore.Reference: Make JFrame transparent from our JCG partner Harsh Raval at the harryjoy blog....
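The slider-to-opacity arithmetic above comes down to mapping an int in [10, 100] to a float inside AWTUtilities’ accepted range of 0.0f to 1.0f. A standalone sketch of just that mapping, with no GUI involved (the method name is illustrative, not part of the frame code):

```java
public class OpacityMapping {

    /** Maps a slider value into [10, 100], then scales it to an opacity in [0.1f, 1.0f]. */
    static float toOpacity(int sliderValue) {
        // Clamp defensively; the bounded JSlider(10, 100) already keeps values in range.
        int clamped = Math.max(10, Math.min(100, sliderValue));
        return clamped / 100f;
    }

    public static void main(String[] args) {
        System.out.println(toOpacity(100)); // 1.0 -> fully opaque
        System.out.println(toOpacity(50));  // 0.5 -> half transparent
        System.out.println(toOpacity(10));  // 0.1 -> nearly invisible
    }
}
```

Keeping the mapping in one small method makes the 0.0f–1.0f constraint easy to enforce even if the slider bounds change later.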
javafx-logo

Expression Based PathTransitions in JavaFX

In JavaFX you are able to animate nodes along a path using PathTransition objects. PathTransitions use Shape objects to describe the path they need to animate along. JavaFX provides various types of Shapes (e.g. Polygon, Circle, PolyLine, Path). The Path shape is interesting in that it allows you to create complicated shapes using various movements called PathElements. Some PathElements are ArcTo, CubicCurveTo, HLineTo, LineTo, MoveTo, QuadCurveTo, VLineTo. Their names imply what they do. While PathElements are great for describing complex paths, I found that I would rather describe my paths using mathematical expressions. All the years of working with graphs in math class have affected the way my mind thinks. Quadratic and trigonometric expressions have a warm and cozy feel to them. As such, I sought to create PathTransitions that are described using mathematical expressions. I describe my solution in this post in case anyone wishes to accomplish the same. To accomplish this, the first thing one needs is a way to solve mathematical expressions, like x*sin(x), or x^2/45, or (x^2)/sin(x-2), or whatever else you can imagine. For this I was lucky enough to stumble upon Lawrence Dol’s Software Monkey web site. Lawrence created a super light Java class, named MathEval, that evaluates mathematical expressions. The class is only 31KB and is very easy to use. I used MathEval to plot the points of a given expression. The JavaFX class Polyline is used to store the plotted points that MathEval solved for and turn them into a Shape object that PathTransition can take as input. The class I ended up creating is called ExpressionTransitionMaker – sorry, no Javadoc, but you can find the source code here. This class is supposed to be very easy to use. It has two main methods that are worth mentioning. 
The first is: public void addExpressionEntry(double start, double end, double poll, GraphType type, String expression) throws IllegalArgumentException;This method is used to add expression entries, which consist of an expression with supporting information like the start and end positions on a graph, poll interval, and GraphType. ExpressionTransitionMaker can make three different types of graphs described by this GraphType enum: public enum GraphType {vertical("y"), horizontal("x"), polar("a"); private String var;GraphType(String var) { this.var = var; }public String getVar() { return var; } }Each expression entry requires one expression. For horizontal graphs the expression needs to be in the form f(x); g(y) for vertical; and r(a) for polar. That is to say that the horizontal graph needs an expression where the only variable is the letter “x”, and the vertical graph needs an expression where the only variable is the letter “y”, and the polar graph needs an expression where the only variable is the letter “a”, where “a” signifies the angle in radians. Multiple expression entries can be added. As the names imply, the horizontal and vertical graphs are intended for graphs that go left-and-right and up-and-down respectively. Polar graphs are intended for graphs that move along a circular or spiral path. All three can go in the opposite direction if given a negative poll value and the appropriate start and end points. The second method worth mentioning is: public SequentialTransition getSequentialTransition();Because multiple expressions can be added into the ExpressionTransitionMaker, a SequentialTransition object is used to play all the PathTransitions that ExpressionTransitionMaker can create. The returned SequentialTransition is populated with multiple PathTransitions, one for each expression added. The SequentialTransition will play all its transitions in sequential order. Well that’s about it. 
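Under the hood, the plotting step amounts to walking the variable from start to end in poll-sized increments, evaluating the expression at each step, and collecting the (x, y) pairs a Polyline expects. A minimal sketch of that sampling loop, substituting a plain Java function for MathEval (the class and method names are illustrative, not taken from ExpressionTransitionMaker’s source):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.DoubleUnaryOperator;

public class HorizontalPlotSketch {

    /** Samples y = f(x) from start to end (inclusive) at poll-sized intervals,
     *  returning the flat [x0, y0, x1, y1, ...] list that Polyline.getPoints() takes. */
    static List<Double> plot(double start, double end, double poll, DoubleUnaryOperator f) {
        List<Double> points = new ArrayList<>();
        for (double x = start; x <= end; x += poll) {
            points.add(x);
            points.add(f.applyAsDouble(x));
        }
        return points;
    }

    public static void main(String[] args) {
        // y = x^2 / 4 sampled every 1.0 unit from 0 to 4
        System.out.println(plot(0, 4, 1, x -> x * x / 4));
        // [0.0, 0.0, 1.0, 0.25, 2.0, 1.0, 3.0, 2.25, 4.0, 4.0]
    }
}
```

In the real class the function would be a MathEval evaluation of the user’s expression string, and vertical and polar graphs would walk y or the angle instead of x, but the sampling idea is the same.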
There are one or two more public methods in ExpressionTransitionMaker that a user might find handy but were left out for brevity. I created a simple little app that tests out the ExpressionTransitionMaker and can be found here. The app can be played in a browser and it allows you to enter multiple expressions which are used to animate an image across the screen.If you have any feedback to provide I would love to read it. Reference: Expression Based PathTransitions in JavaFX from our W4G partner Jose Martinez...
junit-logo

5 Tips for Unit Testing Threaded Code

Here are a few tips on how to test your threaded code for logical correctness (as opposed to multi-threaded correctness). I find that there are essentially two stereotypical patterns with threaded code:Task orientated – many, short running, homogeneous tasks, often run within the Java 5 executor framework, Process orientated – few, long running, heterogeneous tasks, often event based (waiting on notification), or polling (sleeping between cycles), often expressed using a thread or runnable.Testing either type of code can be hard; the work is done in another thread, and therefore notification of completion can be opaque, or is hidden behind a level of abstraction. The code is on GitHub.Tip 1 – Life-cycle Manage Your Objects Objects that have a managed life-cycle are easier to test: the life-cycle allows for set-up and tear-down, which means you can clean up after your test and no spurious threads are left lying around to pollute other tests. public class Foo { private ExecutorService executorService;public void start() { executorService = Executors.newSingleThreadExecutor(); }public void stop() { executorService.shutdown(); } }Tip 2 – Set a Timeout on Your Tests Bugs in code (as you’ll see below) can result in a multi-threaded test never completing, as (for example) you’re waiting on some flag that never gets set. JUnit lets you set a timeout on your test. ... @Test(timeout = 100) // in case we never get a notification public void testGivenNewFooWhenIncrThenGetOne() throws Exception { ...Tip 3 – Run Tasks in the Same Thread as Your Test Typically you’ll have an object that runs tasks in a thread pool. This means that your unit test might have to wait for the task to complete, but you’re not able to know when it will complete. You might guess, for example: public class Foo { private final AtomicLong foo = new AtomicLong(); ... public void incr() { executorService.submit(new Runnable() { @Override public void run() { foo.incrementAndGet(); } }); } ... 
public long get() { return foo.get(); } }public class FooTest {private Foo sut; // system under test@Before public void setUp() throws Exception { sut = new Foo(); sut.start(); }@After public void tearDown() throws Exception { sut.stop(); }@Test public void testGivenFooWhenIncrementGetOne() throws Exception { sut.incr(); Thread.sleep(1000); // yuk - a slow test - don't do this assertEquals("foo", 1, sut.get()); } } But this is problematic. Execution is non-uniform so there’s no guarantee that this will work on another machine. It’s fragile, changes to the code can cause the test to fail as it suddenly take a bit too long. Its slow, as you will be generous with sleep when it fails. A trick is to make the task run synchronously, i.e. in the same thread as the test. Here this can be achieved by injecting the executor: public class Foo { ... public Foo(ExecutorService executorService) { this.executorService = executorService; } ... public void stop() { // nop } Then you can have use a synchronous executor service (similar in concept to a SynchronousQueue) to test: public class SynchronousExecutorService extends AbstractExecutorService { private boolean shutdown;@Override public void shutdown() {shutdown = true;}@Override public List<Runnable> shutdownNow() {shutdown = true; return Collections.emptyList();}@Override public boolean isShutdown() {shutdown = true; return shutdown;}@Override public boolean isTerminated() {return shutdown;}@Override public boolean awaitTermination(final long timeout, final TimeUnit unit) {return true;}@Override public void execute(final Runnable command) {command.run();} } An updated test that doesn’t need to sleep: public class FooTest {private Foo sut; // system under test private ExecutorService executorService;@Before public void setUp() throws Exception { executorService = new SynchronousExecutorService(); sut = new Foo(executorService); sut.start(); }@After public void tearDown() throws Exception { sut.stop(); 
executorService.shutdown(); }@Test public void testGivenFooWhenIncrementGetOne() throws Exception { sut.incr(); assertEquals("foo", 1, sut.get()); } } Note that you need to life-cycle manage the executor externally to Foo.Tip 4 – Extract the Work from the Threading If your thread is waiting for an event, or a time before it does any work, extract the work to its own method and call it directly. Consider this: public class FooThread extends Thread { private final Object ready = new Object(); private volatile boolean cancelled; private final AtomicLong foo = new AtomicLong();@Override public void run() { try { synchronized (ready) { while (!cancelled) { ready.wait(); foo.incrementAndGet(); } } } catch (InterruptedException e) { e.printStackTrace(); // bad practise generally, but good enough for this example } }public void incr() { synchronized (ready) { ready.notifyAll(); } }public long get() { return foo.get(); }public void cancel() throws InterruptedException { cancelled = true; synchronized (ready) { ready.notifyAll(); } } } And this test: public class FooThreadTest {private FooThread sut;@Before public void setUp() throws Exception { sut = new FooThread(); sut.start(); Thread.sleep(1000); // yuk assertEquals("thread state", Thread.State.WAITING, sut.getState()); }@After public void tearDown() throws Exception { sut.cancel(); }@After public void tearDown() throws Exception { sut.cancel(); }@Test public void testGivenNewFooWhenIncrThenGetOne() throws Exception { sut.incr(); Thread.sleep(1000); // yuk assertEquals("foo", 1, sut.get()); } } Now extract the work: @Override public void run() { try { synchronized (ready) { while (!cancelled) { ready.wait(); undertakeWork(); } } } catch (InterruptedException e) { e.printStackTrace(); // bad practise generally, but good enough for this example } }void undertakeWork() { foo.incrementAndGet(); } Re-factor the test: public class FooThreadTest {private FooThread sut;@Before public void setUp() throws Exception { sut = new 
FooThread(); }@Test public void testGivenNewFooWhenIncrThenGetOne() throws Exception { sut.incr(); sut.undertakeWork(); assertEquals("foo", 1, sut.get()); } }Tip 5 – Notify State Change via Events An alternative to the previous two tips is to use a notification system, so your test can listen to the threaded object. Here’s a task oriented example: public class ObservableFoo extends Observable { private final AtomicLong foo = new AtomicLong(); private ExecutorService executorService;public void start() { executorService = Executors.newSingleThreadExecutor(); }public void stop() { executorService.shutdown(); }public void incr() { executorService.submit(new Runnable() { @Override public void run() { foo.incrementAndGet(); setChanged(); notifyObservers(); // lazy use of observable } }); }public long get() { return foo.get(); } } And its corresponding test (note the use of timeout): public class ObservableFooTest implements Observer {private ObservableFoo sut; private CountDownLatch updateLatch; // used to react to event@Before public void setUp() throws Exception { updateLatch = new CountDownLatch(1); sut = new ObservableFoo(); sut.addObserver(this); sut.start(); }@Override public void update(final Observable o, final Object arg) { assert o == sut; updateLatch.countDown(); }@After public void tearDown() throws Exception { sut.deleteObserver(this); sut.stop(); }@Test(timeout = 100) // in case we never get a notification public void testGivenNewFooWhenIncrThenGetOne() throws Exception { sut.incr(); updateLatch.await(); assertEquals("foo", 1, sut.get()); } } This has pros and cons: Pros:Creates useful code for listening to the object. Can take advantage of existing notification code, which makes it a good choice where that already exists. Is more flexible, can apply to both tasks and process orientated code. 
It is more cohesive than extracting the work.Cons:Listener code can be complex and introduce its own problems, creating additional production code that ought to be tested. De-couples submission from notification. Requires you to deal with the scenario that no notification is sent (e.g. due to bug). Test code can be quite verbose and therefore prone to having bugs.Reference: 5 Tips for Unit Testing Threaded Code from our JCG partner Alex Collins at the Alex Collins ‘s blog blog....
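One way to harden the latch-based notification test above is to give the await call itself a timeout, so a missing notification becomes an explicit assertion failure rather than relying solely on the JUnit-level timeout. The following is a minimal sketch, not from the original article; the helper name awaitNotification is hypothetical:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class Main {

    // Hypothetical helper: waits up to the given number of milliseconds for
    // the latch to be counted down, returning whether the notification
    // arrived in time.
    static boolean awaitNotification(CountDownLatch latch, long millis)
            throws InterruptedException {
        return latch.await(millis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch arrived = new CountDownLatch(1);
        arrived.countDown(); // simulate the observer firing

        if (!awaitNotification(arrived, 100)) {
            throw new AssertionError("no notification received within 100ms");
        }

        // A latch that is never counted down times out instead of hanging.
        CountDownLatch never = new CountDownLatch(1);
        System.out.println(awaitNotification(never, 10)); // false: timed out
    }
}
```

In a test this would read `assertTrue("no notification", updateLatch.await(100, TimeUnit.MILLISECONDS));`, giving a clearer failure message than an aborted test.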

Java: Choosing the right Collection

Here is a quick guide for selecting the proper implementation of a Set, List, or Map in your application. The best general purpose or ‘primary’ implementations are likely ArrayList, LinkedHashMap, and LinkedHashSet. Their overall performance is better, and you should use them unless you need a special feature provided by another implementation. That special feature is usually ordering or sorting. Here, ‘ordering’ refers to the order of items returned by an Iterator, and ‘sorting’ refers to sorting items according to Comparable or Comparator.

Interface | Has Duplicates?    | Implementations                   | Historical
Set       | no                 | HashSet, LinkedHashSet*, TreeSet  | –
List      | yes                | ArrayList*, LinkedList            | Vector, Stack
Map       | no duplicate keys  | HashMap, LinkedHashMap*, TreeMap  | Hashtable, Properties

(* marks the primary implementation)

Principal features of non-primary implementations:
- HashMap has slightly better performance than LinkedHashMap
- HashSet has slightly better performance than LinkedHashSet
- TreeSet is ordered and sorted, but slow
- TreeMap is ordered and sorted, but slow
- LinkedList has fast adding to the start of the list, and fast deletion from the interior via iteration

Iteration order for the above implementations:
- HashSet – undefined
- HashMap – undefined
- LinkedHashSet – insertion order
- LinkedHashMap – insertion order of keys (by default), or ‘access order’
- ArrayList – insertion order
- LinkedList – insertion order
- TreeSet – ascending order, according to Comparable / Comparator
- TreeMap – ascending order of keys, according to Comparable / Comparator

For LinkedHashSet and LinkedHashMap, the re-insertion of an item does not affect insertion order. While being used in a Map or Set, these items must not change state (hence, it is recommended that these items be immutable objects):
- keys of a Map
- items in a Set

Sorting requires either that:
- the stored items implement Comparable, or
- a Comparator for the stored objects be defined

To retain the order of a ResultSet as specified in an ORDER BY clause, insert the records into a List or a LinkedHashMap.
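The ordering and sorting distinctions above can be seen directly. This sketch (not part of the original article) populates different Set implementations with the same items and compares iteration order, then shows a TreeMap sorted by a custom Comparator:

```java
import java.util.*;

public class Main {

    // Adds the same items to the given set and returns its iteration order.
    static List<String> iterationOrder(Set<String> set) {
        set.addAll(Arrays.asList("banana", "apple", "cherry"));
        return new ArrayList<>(set);
    }

    public static void main(String[] args) {
        // LinkedHashSet preserves insertion order.
        System.out.println(iterationOrder(new LinkedHashSet<>())); // [banana, apple, cherry]

        // TreeSet iterates in ascending (sorted) order.
        System.out.println(iterationOrder(new TreeSet<>())); // [apple, banana, cherry]

        // HashSet makes no ordering guarantee at all.
        System.out.println(iterationOrder(new HashSet<>()));

        // TreeMap sorted by a Comparator instead of natural ordering:
        // here, keys in descending order.
        Map<String, Integer> byKeyDesc = new TreeMap<>(Comparator.reverseOrder());
        byKeyDesc.put("apple", 1);
        byKeyDesc.put("cherry", 3);
        byKeyDesc.put("banana", 2);
        System.out.println(byKeyDesc.keySet()); // [cherry, banana, apple]
    }
}
```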
Reference: Choosing the right Collection from our JCG partner Sanjeev Kumar at the Architect’s Diary blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact