
Apache Wicket: Remember Me functionality

It is quite common for web applications to offer “Remember Me” functionality, which logs the user in automatically on each visit to the site. Such a feature can be implemented with Spring Security, but in my opinion mixing a request-based authentication framework with a component-based web framework is not the best idea. These two worlds just do not fit well together, so I prefer my own home-baked solution, which I present below.

Base project

We start with a simple web application written using the latest, still hot Apache Wicket 6. You can download the complete sources from GitHub and start the application with mvn clean compile jetty:run. The base application consists of two pages:

Home Page: displays a welcome message for logged-in and anonymous users, plus either a logout or a login link.
Login Page: lets the user log in against a simple in-memory collection of users. Some working login/password pairs: John/john, Lisa/lisa, Tom/tom.

Remember Me functionality

The standard way to implement Remember Me looks as follows:

1. Ask the user whether he wants to be remembered and auto-logged-in in the future.
2. If so, save cookies with his login and password on his computer.
3. For every new user coming to our website, check whether the cookies from step 2 are present; if they are, log the user in automatically.
4. When he logs out manually, remove the cookies so the data used for auto-login is cleared.

Point 2 needs some explanation. In this example app we are going to store the login and a plain, unhashed, unencrypted password in cookies. In a real-world scenario this is unacceptable. Instead, you should at least store a hashed and salted password, so that even if someone intercepts the user's cookie, the password stays secret and decoding it requires real work. Update: Michał Matłoka posted two very interesting links on how this is done in real systems. Those approaches use neither the password nor a password hash; for details, please see his comment below this post.
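The security caveat above can be made concrete. Below is a minimal sketch, not from the original post, of producing a salted SHA-256 token to store in the cookie instead of the raw password; the class and method names are my own, and a production scheme would typically use random per-series tokens instead (as in the approaches linked in the comment).

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.util.Base64;

public class RememberMeToken {

    // Generates a random per-user salt; store it server-side next to the user record.
    public static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    // Hashes the password together with the salt; the cookie would hold this
    // value, never the plain password.
    public static String token(String password, byte[] salt) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(salt);
            md.update(password.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(md.digest());
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }
}
```

On auto-login the application would recompute the token from the stored password and salt and compare it with the cookie value, rather than comparing raw passwords.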
Step 1: As a User I want to decide if I want to use the “Remember Me” feature (link to commit with this step)

To let the user tell the application that he wants 'Remember Me', we simply add a checkbox to the login page, so we need to amend the LoginPage java and html files a bit (the new parts are the checkbox markup and the rememberMe field):

```html
<form wicket:id="form" class="form-horizontal">
  <fieldset>
    <legend>Please login</legend>
  </fieldset>
  <div class="control-group">
    <div wicket:id="feedback"></div>
  </div>
  <div class="control-group">
    <label class="control-label" for="login">Login</label>
    <div class="controls">
      <input type="text" id="login" wicket:id="login" />
    </div>
  </div>
  <div class="control-group">
    <label class="control-label" for="password">Password</label>
    <div class="controls">
      <input type="password" id="password" wicket:id="password" />
    </div>
  </div>
  <div class="control-group">
    <div class="controls">
      <label class="checkbox">
        <input type="checkbox" wicket:id="rememberMe"> Remember me on this computer
      </label>
    </div>
  </div>
  <div class="form-actions">
    <input type="submit" wicket:id="submit" value="Login" title="Login" class="btn btn-primary"/>
  </div>
</form>
```

```java
private String login;
private String password;
private boolean rememberMe;

public LoginPage() {
    Form<Void> loginForm = new Form<Void>("form");
    add(loginForm);

    loginForm.add(new FeedbackPanel("feedback"));
    loginForm.add(new RequiredTextField<String>("login", new PropertyModel<String>(this, "login")));
    loginForm.add(new PasswordTextField("password", new PropertyModel<String>(this, "password")));
    loginForm.add(new CheckBox("rememberMe", new PropertyModel<Boolean>(this, "rememberMe")));

    Button submit = new Button("submit") {
        // (...)
    };
    loginForm.add(submit);
}
```

Now we are ready for the next step.

Step 2: As a System I want to save login and password in cookies (link to commit with this step)

First we need a CookieService that encapsulates all the logic for working with cookies: saving, listing, and clearing them when needed.
The code is rather simple: we work with the WebResponse and WebRequest classes to modify cookies in the user's browser.

```java
public class CookieService {

    public Cookie loadCookie(Request request, String cookieName) {
        List<Cookie> cookies = ((WebRequest) request).getCookies();
        if (cookies == null) {
            return null;
        }
        for (Cookie cookie : cookies) {
            if (cookie.getName().equals(cookieName)) {
                return cookie;
            }
        }
        return null;
    }

    public void saveCookie(Response response, String cookieName, String cookieValue, int expiryTimeInDays) {
        Cookie cookie = new Cookie(cookieName, cookieValue);
        cookie.setMaxAge((int) TimeUnit.DAYS.toSeconds(expiryTimeInDays));
        ((WebResponse) response).addCookie(cookie);
    }

    public void removeCookieIfPresent(Request request, Response response, String cookieName) {
        Cookie cookie = loadCookie(request, cookieName);
        if (cookie != null) {
            ((WebResponse) response).clearCookie(cookie);
        }
    }
}
```

Then, when the user checks 'Remember Me' on the LoginPage, we save the cookies in his browser:

```java
Button submit = new Button("submit") {
    @Override
    public void onSubmit() {
        UserService userService = WicketApplication.get().getUserService();
        User user = userService.findByLoginAndPassword(login, password);
        if (user == null) {
            error("Invalid login and/or password. Please try again.");
        } else {
            UserSession.get().setUser(user);
            if (rememberMe) {
                CookieService cookieService = WicketApplication.get().getCookieService();
                cookieService.saveCookie(getResponse(), REMEMBER_ME_LOGIN_COOKIE, user.getLogin(), REMEMBER_ME_DURATION_IN_DAYS);
                cookieService.saveCookie(getResponse(), REMEMBER_ME_PASSWORD_COOKIE, user.getPassword(), REMEMBER_ME_DURATION_IN_DAYS);
            }
            setResponsePage(HomePage.class);
        }
    }
};
```

Step 3: As a User I want to be auto-logged-in when I return to the web application (link to commit with this step)

To check whether a user entering our application is a returning user who should be auto-logged-in, we have to enrich the logic responsible for creating a new user session.
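The max-age computation in saveCookie is worth a sanity check: cookie max-age is expressed in seconds, and the cast to int is safe for any realistic expiry in days. A standalone version of the same conversion, outside Wicket:

```java
import java.util.concurrent.TimeUnit;

public class CookieExpiry {

    // Cookie max-age is specified in seconds; convert a day count the same
    // way CookieService.saveCookie does.
    public static int maxAgeSeconds(int days) {
        return (int) TimeUnit.DAYS.toSeconds(days);
    }
}
```

For example, a 14-day remember-me cookie gets a max-age of 1,209,600 seconds.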
Currently this is done in the WicketApplication class, which creates a new WebSession instance on request. So every time a new session is created, we have to check for the presence of the cookies and, if they hold a valid user/password pair, log the user in automatically. Let's start by extracting the session-related logic into a separate class called SessionProvider. It needs UserService and CookieService to check for existing users and cookies, so we pass them as references in the constructor.

```java
public class WicketApplication extends WebApplication {

    private UserService userService = new UserService();
    private CookieService cookieService = new CookieService();
    private SessionProvider sessionProvider = new SessionProvider(userService, cookieService);

    @Override
    public Session newSession(Request request, Response response) {
        return sessionProvider.createNewSession(request);
    }
}
```

The role of SessionProvider is to create a new UserSession, check whether the proper cookies are present and, if so, set the logged-in user. Additionally, we add a feedback message informing the user that he was logged in automatically. Let's look at the code:

```java
public class SessionProvider {

    public SessionProvider(UserService userService, CookieService cookieService) {
        this.userService = userService;
        this.cookieService = cookieService;
    }

    public WebSession createNewSession(Request request) {
        UserSession session = new UserSession(request);

        Cookie loginCookie = cookieService.loadCookie(request, REMEMBER_ME_LOGIN_COOKIE);
        Cookie passwordCookie = cookieService.loadCookie(request, REMEMBER_ME_PASSWORD_COOKIE);

        if (loginCookie != null && passwordCookie != null) {
            User user = userService.findByLoginAndPassword(loginCookie.getValue(), passwordCookie.getValue());
            if (user != null) {
                session.setUser(user);
                session.info("You were automatically logged in.");
            }
        }
        return session;
    }
}
```

To show the feedback message on HomePage.java we must add a FeedbackPanel there, but for brevity I omit that here; you can read the commit to see how it is done.
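The cookie check in createNewSession boils down to: both cookies must be present, and the pair must resolve to a known user. That decision can be sketched and tested independently of Wicket; the Map-based lookup below is my own simplification standing in for UserService, using the sample users from the base project.

```java
import java.util.Map;

public class AutoLogin {

    // Returns the login to auto-authenticate, or null when either cookie is
    // absent or the pair does not match a known user.
    public static String resolve(String loginCookie, String passwordCookie,
                                 Map<String, String> knownUsers) {
        if (loginCookie == null || passwordCookie == null) {
            return null;
        }
        String expectedPassword = knownUsers.get(loginCookie);
        return passwordCookie.equals(expectedPassword) ? loginCookie : null;
    }
}
```

With users John/john and Lisa/lisa, resolve("John", "john", users) yields "John", while a missing cookie or a wrong password yields null, so a fresh anonymous session is created instead.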
So after three steps we should have 'Remember Me' working. To verify it quickly, shorten the session timeout in the web.xml file by adding:

```xml
<session-config>
    <session-timeout>1</session-timeout>
</session-config>
```

Then start the application with mvn clean compile jetty:run, go to the login page, log in, close the browser, and after a little more than a minute (when the session has expired) open http://localhost:8080 again. You should see that you were automatically logged in. So it works. But we still need one more thing: letting the user remove the cookies and turn off auto-login.

Step 4: As a User I want to be able to logout and clear my cookies (link to commit with this step)

In the last step we have to allow the user to clear his data and disable 'Remember Me' for his account. This is achieved by clearing both cookies when the user explicitly clicks the Logout link:

```java
Link<Void> logoutLink = new Link<Void>("logout") {
    @Override
    public void onClick() {
        CookieService cookieService = WicketApplication.get().getCookieService();
        cookieService.removeCookieIfPresent(getRequest(), getResponse(), SessionProvider.REMEMBER_ME_LOGIN_COOKIE);
        cookieService.removeCookieIfPresent(getRequest(), getResponse(), SessionProvider.REMEMBER_ME_PASSWORD_COOKIE);

        UserSession.get().setUser(null);
        UserSession.get().invalidate();
    }
};
logoutLink.setVisible(UserSession.get().userLoggedIn());
add(logoutLink);
```

Summary

That's all. In this post we have implemented simple 'Remember Me' functionality in a web application written using Apache Wicket, without any external authentication libraries. Happy coding and don't forget to share!

Reference: Remember Me functionality in Apache Wicket from our JCG partner Tomasz Dziurko at the Code Hard Go Pro blog....

Building OpenJDK on Windows

Doing some experiments, I found it is often useful to have the JDK source code at hand to make some changes, play with it, etc. So I decided to download and compile that beast. It took me quite some time, although my initial thought was that it should be as simple as running the make command :). As you can guess, I found that it is not a trivial task, and to simplify my life in the future it seemed useful to keep a record of what I did. Below are the steps I had to follow to make it happen. I assume the machine already has Visual Studio 2010 installed. I have a feeling that the Express version should work just fine, but I haven't tried.

Install cygwin. Ensure that you have installed all packages listed here; some of them are not installed by default. Just in case, here is a copy of that table, but it is recommended to verify against the master source:

Binary Name  Category      Package   Description
ar.exe       Devel         binutils  The GNU assembler, linker and binary utilities
make.exe     Devel         make      The GNU version of the 'make' utility built for CYGWIN
m4.exe       Interpreters  m4        GNU implementation of the traditional Unix macro processor
cpio.exe     Utils         cpio      A program to manage archives of files
gawk.exe     Utils         awk       Pattern-directed scanning and processing language
file.exe     Utils         file      Determines file type using 'magic' numbers
zip.exe      Archive       zip       Package and compress (archive) files
unzip.exe    Archive       unzip     Extract compressed files in a ZIP archive
free.exe     System        procps    Display amount of free and used memory in the system

Do not forget to add cygwin's 'bin' folder to the PATH. Install Mercurial from here and add 'hg' to the PATH. Install the Microsoft Windows SDK for Windows 7 and .NET Framework 4. Install the DirectX SDK. JDK requires v9.0, but I couldn't find it easily, so I decided not to bother and installed the latest one; it seems to work just fine. A bootstrap JDK is required for the build.
It just happened that I used JDK6, but I suppose any version from JDK6 up will work without problems. Download and install Ant (I used version 1.8.2) and add Ant to the PATH. Check out the sources. For a number of reasons this was the most complicated part: 'hg' is not particularly stable, so some things which are supposed to be done by scripts had to be done manually. To start, run this on the command line:

```
hg clone --verbose --pull http://hg.openjdk.java.net/jdk7u/jdk7u <some_folder>\openjdk7
```

This downloads the root folder with some helper scripts. Then, in cygwin, go to the just-created 'openjdk7' folder and run 'get_source.sh'. 'get_source.sh' may fail or just hang (and that is exactly what happened to me). If it does, you may try the '--pull' flag (pull protocol for metadata); I'm not absolutely sure why, but it helped me. Unfortunately, the scripts are not written in a very friendly manner and there is no way to pass 'hg' arguments to the source-retrieval script, so you need to open 'make\scripts\hgforest.sh' and add '--pull' to every invocation of 'hg clone'. And if it still fails even with '--pull', well... just give up and run these commands manually:

```
hg clone --verbose --pull http://hg.openjdk.java.net/jdk7u/jdk7u/corba corba
hg clone --verbose --pull http://hg.openjdk.java.net/jdk7u/jdk7u/hotspot hotspot
hg clone --verbose --pull http://hg.openjdk.java.net/jdk7u/jdk7u/jaxp jaxp
hg clone --verbose --pull http://hg.openjdk.java.net/jdk7u/jdk7u/jaxws jaxws
hg clone --verbose --pull http://hg.openjdk.java.net/jdk7u/jdk7u/jdk jdk
hg clone --verbose --pull http://hg.openjdk.java.net/jdk7u/jdk7u/langtools langtools
```

Hopefully you now have the sources and can continue :). The build requires some external binaries and a version of 'make.exe' that works under Windows. The 'make' that comes with cygwin doesn't really work, because it has problems with drive letters in path names. Next, we need to compile a couple of things ourselves. One is a fixed version of 'make.exe'.
The other is the FreeType library, which is only available as source. If you are not interested in compiling all that stuff and just want to build the JDK with less hassle, I recommend downloading the binaries from here (that's my Drive). Unpack 'make.exe' into 'openjdk7/bin'. Note that 'make.exe' from the package is quite old and requires cygintl-3.dll, which is not provided with current cygwin; to fix that, just copy cygintl-8.dll -> cygintl-3.dll. The FreeType lib and dll have to be put in the folder referenced by the 'ALT_FREETYPE_LIB_PATH' variable (see Step 13). Some FreeType headers are also required and are located by make via the 'ALT_FREETYPE_HEADERS_PATH' variable (see Step 13), which means you will also need to download the source code. If you are not looking for the simple solution and want to compile these binaries yourself, follow these instructions.

Build make: download make 3.82 from here and unpack it. Find 'config.h.W32' and uncomment the line with the 'HAVE_CYGWIN_SHELL' definition. Open the make_msvc_net2003.sln solution in Visual Studio, select the 'Release' configuration and build. In the 'Release' folder you will get 'make_msvc.net2003.exe'; rename it to 'make.exe'.

Now compile FreeType: download the source of FreeType v2.4.7 from here. Unpack it somewhere and open 'builds\win32\vc2010\freetype.sln' in Visual Studio. Go to the project properties (right-click the project in the project tree), in 'Configuration Properties/General/Configuration type' select 'Dynamic Library (.dll)' and rename the output to 'freetype'. Update ftoption.h by adding the following two lines:

```
#define FT_EXPORT(x) __declspec(dllexport) x
#define FT_BASE(x)   __declspec(dllexport) x
```

Build, and you will get the dll & lib in 'objs\win32\vc2010'. Do not forget to assign appropriate values to the 'ALT_FREETYPE_LIB_PATH' and 'ALT_FREETYPE_HEADERS_PATH' variables (see Step 13).

I had some problems with javadoc generation, which was failing with OutOfMemory. To fix it, I had to change 'openjdk7\jdk\make\docs\Makefile'.
This code:

```
ifeq ($(ARCH_DATA_MODEL),64)
  MAX_VM_MEMORY = 1024
else ifeq ($(ARCH),universal)
  MAX_VM_MEMORY = 1024
else
  MAX_VM_MEMORY = 512
endif
```

has to be replaced with this:

```
ifeq ($(ARCH_DATA_MODEL),64)
  MAX_VM_MEMORY = 1024
else ifeq ($(ARCH),universal)
  MAX_VM_MEMORY = 1024
else
  MAX_VM_MEMORY = 1024
endif
```

Copy 'msvcr100.dll' to drops:

```
cp /cygdrive/c/Program\ Files\ \(x86\)/Microsoft\ Visual\ Studio\ 10.0/Common7/Packages/Debugger/X64/msvcr100.dll ./drops/
```

Ensure that cygwin's 'find.exe' is in the PATH before the Windows one. The easiest way to do so is to copy it into 'openjdk7/bin', which is then put at the beginning of the PATH. Create a batch file similar to the following one; do not forget to update the paths appropriately:

```
set ALT_BOOTDIR=C:/Stuff/java_libs/jdk1.6.0_25
set ANT_HOME=C:/Stuff/java_libs/apache-ant-1.8.2
set JAVA_HOME=
set CLASSPATH=
set PATH=C:/Stuff/openjdk7/bin;%PATH%
set ALLOW_DOWNLOADS=true
set ALT_MSVCRNN_DLL_PATH=C:/Stuff/java_libs/openjdk7/drops

C:\WINDOWS\system32\cmd.exe /E:ON /V:ON /K "C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\SetEnv.cmd" /Release /xp /x86
```

Run the batch file. Now you have a fully configured environment, ready for the build. Run 'bash' and from the shell execute 'make':

```
make ARCH_DATA_MODEL=32 ALT_OUTPUTDIR=C:/Users/Stas/Stuff/java_libs/openjdk7/output_32 ALT_FREETYPE_LIB_PATH=C:/Users/Stas/Stuff/java_libs/openjdk7/freetype-2.4.7/objs/win32/vc2010 ALT_FREETYPE_HEADERS_PATH=C:/Users/Stas/Stuff/java_libs/openjdk7/freetype-2.4.7/include ALT_BOOTDIR=C:/Users/Stas/Stuff/java_libs/jdk1.6.0_25 ALT_DROPS_DIR=C:/Users/Stas/Stuff/java_libs/openjdk7/drops HOTSPOT_BUILD_JOBS=4 PARALLEL_COMPILE_JOBS=4 2>&1 | tee C:/Stuff/java_libs/openjdk7/output_32.log
```

This starts the build of a 32-bit JDK.
Have a coffee, tea, or whatever you prefer, and after about an hour or so you should see something like this:

```
#-- Build times ----------
Target all_product_build
Start 2012-09-01 23:08:55
End   2012-09-01 23:55:48
00:02:35 corba
00:06:46 hotspot
00:00:30 jaxp
00:00:51 jaxws
00:35:30 jdk
00:00:37 langtools
00:46:53 TOTAL
-------------------------
```

Reference: Building OpenJDK on Windows from our JCG partner Stanislav Kobylansky at Stas's blog....

Java Annotations – Retention

Consider a Java annotation:

```java
public @interface AnAnnotaton {
}
```

A class with this annotation applied to it:

```java
@AnAnnotaton
class AnAnnotatedClass {
}
```

And a test which checks if this annotation is present on the class:

```java
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;

import java.lang.annotation.Annotation;
import org.junit.Test;

public class AnAnnotationTest {
    @Test
    public void testAnAnnotation() throws Exception {
        AnAnnotatedClass anAnnotatedClass = new AnAnnotatedClass();
        Annotation[] annotationsOnClass = anAnnotatedClass.getClass().getAnnotations();
        assertThat(annotationsOnClass.length, is(1));
    }
}
```

Sounds reasonable, right? One would expect the above test to pass, since the class does have an AnAnnotaton annotation on it. But this test fails, and the reason is a missing meta-annotation (@Retention) on the annotation, which indicates how long the annotation is to be retained. If the annotation above is changed as follows, the test works as expected:

```java
@Retention(RetentionPolicy.RUNTIME)
public @interface AnAnnotaton {
}
```

So what does @Retention do? To quote from the Javadoc: 'Indicates how long annotations with the annotated type are to be retained. If no Retention annotation is present on an annotation type declaration, the retention policy defaults to RetentionPolicy.CLASS.' There are three different retention policies:

1. SOURCE – the annotation is removed by the compiler.
2. CLASS – the annotation is present in the bytecode, but is not available at runtime and hence cannot be found when reflectively checking whether a class has the annotation.
3. RUNTIME – the annotation is retained in the bytecode and is available at runtime, and hence can be found reflectively on a class.

This is why, once the annotation definition was changed to include @Retention(RetentionPolicy.RUNTIME), the test runs through. Something basic, but easy to miss.
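The three policies can be demonstrated in one self-contained class. In this sketch (the annotation names are mine, not from the original post), the same class carries a RUNTIME-retained and a CLASS-retained annotation, and only the former is visible via reflection:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class RetentionDemo {

    @Retention(RetentionPolicy.RUNTIME)
    @interface VisibleAtRuntime {}

    @Retention(RetentionPolicy.CLASS)
    @interface CompiledOnly {}

    // Both annotations are applied, but only the RUNTIME one survives into
    // the reflective view of the class.
    @VisibleAtRuntime
    @CompiledOnly
    static class Annotated {}

    public static int runtimeVisibleAnnotations() {
        return Annotated.class.getAnnotations().length;
    }
}
```

Running getAnnotations() on Annotated returns a single element, the VisibleAtRuntime instance; CompiledOnly is present in the .class file but invisible to reflection.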
Reference: Java Annotations – Retention from our JCG partner Biju Kunjummen at the all and sundry blog....

Can you get by without estimating? Should you try?

Estimating remains one of the hardest problems in software development. So hard, in fact, that more people lately are advocating that we shouldn't bother estimating at all. David Anderson, the man behind Kanban, says that we should stop estimating, and that estimates are a waste of time. In his case study about introducing Kanban ideas at Microsoft, one of the first steps that they took to improve a team's productivity was to get them to stop estimating and start focusing instead on prioritizing work and getting the important work done. Then you have experts like Ron Jeffries saying things like: 'I believe that most estimation is waste and that it is more common to use estimation as a replacement for proper steering, and to use it as a whip on the developers, than it is to use it for its only valid purpose in Release Planning, which is more like "decide whether to do this project" than "decide just how long this thing we just thought of is going to take, according to people who don't as yet understand it or know how they'll do it"' and 'Estimation is clearly "waste". It's not software... If estimation IS doing you some good, maybe you should think about it as a kind of waste, and try to get rid of it.' And, from others on the 'if you do bother estimating, there's no point in putting a lot of effort into it' theme: 'Spending effort beyond some minutes to make an estimate "less wrong" is wasted time. Spending effort calculating the delta between estimates and actuals is wasted time. Spending effort training, working and berating people to get "less wrong" estimates is wasted time and damaging to team performance.' In 'Software estimation considered harmful?' Peter Seibel talks about a friend running a startup, who found that it was more important to keep people focused and motivated on delivering software as quickly as possible. He goes on to say: 'If the goal is simply to develop as much software as we can per unit time, estimates (and thus targets) may be a bad idea.'
He bases this on a 1985 study in Peopleware which showed that programmers were more productive when working against their own estimates than estimates from somebody else, but that people were most productive on projects where no estimates were done at all. Seibel then admits that maybe “estimates are needed to coordinate work with others” – so he looks at estimating as a “tool for communication”. But from this point of view, estimates are an expensive and inefficient way to communicate information that is of low-quality – because of the cone of uncertainty all estimates contain variability and error anyways.What’s behind all of this? Most of this thinking seems to come out of the current fashion of applying Lean to everything, treating anything that you do as potential waste and eliminating waste wherever you find it. It runs something like: Estimating takes time and slows you down. You can’t estimate perfectly anyways, so why bother trying? A lot of this talk and examples focus on startups and other small-team environments where predictability isn’t as important as delivering. Where it’s more important to get something done than to know when everything will be done or how much it will cost.Do you need to estimate or not? I can accept that estimates aren’t always important in a startup – once you’ve convinced somebody to fund your work anyways. If you’re firefighting, or in some kind of other emergency, there’s not much point in stopping and estimating either – when it doesn’t matter how much something costs, when all you care about is getting whatever it is that you have to do done as soon as possible. Estimating isn’t always important in maintenance – the examples where Kanban is being followed without estimating are in maintenance teams. This is because most maintenance changes are small by definition – maintenance is usually considered to be fixing bugs and making changes that take less than 5 days to complete. 
In order to really know how long a change is going to take, you need to review the code to know what and where to make changes. This can take up to half of the total time of making the change – and if you’re already half way there, you might as well finish the job rather than stopping and estimating the rest of the work. Most of the time, a rule of thumb or placeholder is a good enough estimate. In my job, we have an experienced development team that has been working on the same system for several years. Almost all of the people were involved in originally designing and coding the system and they all know it inside-out. The development managers triage work as it comes in. They have a good enough feel for the system to recognize when something looks big or scary, when we need to get some people involved upfront and qualify what needs to get done, work up a design or a proof of concept before going further. Most of the time, developers can look at what’s in front of them, and know what will fit in the time box and what won’t. That’s because they know the system and the domain and they usually understand what needs to be done right away – and if they don’t understand it, they know that right away too. The same goes for the testers – most of the time they have a good idea of how much work testing a change or fix will take, and whether they can take it on. Sure sometimes people will make mistakes, and can’t get done what they thought they could and we have to delay something or back it out. But spending a little more time on analysis and estimating upfront probably wouldn’t have changed this. It’s only when they get deep into a problem, when they’ve opened the patient up and there’s blood everywhere, it’s only then that they realize that the problem is a lot worse than they expected. We’re not getting away without estimates. 
What we’re doing is taking advantage of the team’s experience and knowledge to make decisions quickly and efficiently, without unnecessary formality. This doesn’t scale of course. It doesn’t work for large projects and programs with lots of inter-dependencies and interfaces, where a lot of people need to know when certain things will be ready. It doesn’t work for large teams where people don’t know the system, the platform, the domain or each other well enough to make good quick decisions. And it’s not good enough when something absolutely must be done by a drop dead date – hard industry deadlines and compliance mandates. In all these cases, you have to spend the time upfront to understand and estimate what needs to get done, and probably re-estimate again later as you understand the problem better. Sometimes you can get along without estimates. But don’t bet on it. Reference: Can you get by without estimating? Should you try? from our JCG partner Jim Bird at the Building Real Software blog....

BAM, SOA & Big Data

Leveraging Big Data has become a commodity for most IT departments. It's like the mobile phone: you can't remember the times when you couldn't just call someone from your mobile, no matter where you are in the world, can you? Similarly, IT folks can't remember the days when files were too big to summarize, or grep, or even just store. Set up a Hadoop cluster and everything can be stored, analyzed and made sense of. But then I asked the question: what if the data is not stored in a file? What if it is all flying around inside my system? Shown above is a fairly common production SOA deployment. Let's summarize briefly what each server does:

An ESB cluster fronts all the traffic and does some content-based routing (CBR).
Internal and external app server clusters host apps that serve different audiences.
A Data Services Server cluster exposes database operations as services.
A BPS cluster coordinates a bunch of processes between the ESB, one app server cluster and the DSS cluster.

Hard to digest? Fear not. It's a complicated system that serves a lot of complex requirements while enhancing reuse, interoperability and all the other good things SOA brings. Now, in this kind of system, SOA-enabled or not, there lies a tremendous amount of data. And no, it is not stored in files: it is transferred between your servers and systems. Tons and tons of valuable data go through your system every day. What if you could excavate this treasure of data and make use of all the hidden gems to derive business intelligence? The answer is Business Activity Monitoring (BAM): the process of aggregating, analyzing and presenting that data. SOA and BAM were always a love story: as system functions were exposed as services, monitoring those services meant you were able to monitor the whole system.
Most of the time, if the system architects were smart, they used open standards, which made plugging in monitoring systems even easier. But even with BAM it was impossible to capture every message and every request that passed through the server; the data growth alone would be tremendous for a fairly active deployment. So here we have a Big Data problem, but not a typical one: a Big Data problem that concerns live data. To fully monitor all the data that passes through your system, you need a BAM solution that is Big Data ready. In other words, to make full sense of the data passing through modern systems and derive intelligence from it, we need a Business Activity Monitor that is Big Data ready. Now a system architect has to think about BAM, SOA and Big Data together, as they are essentially intertwined. A solution that delivers anything less is well short of visionary. Don't forget to share! Reference: BAM, SOA & Big Data from our JCG partner Mackie Mathew at the dev_religion blog....

Maven Cargo plugin for Integration Testing

A very common need in the lifecycle of a project is setting up integration testing. Luckily, Maven has built-in support for this exact scenario, with the following phases of the default build lifecycle (from the Maven documentation):

pre-integration-test: Perform actions required before integration tests are executed. This may involve things such as setting up the required environment.
integration-test: Process and deploy the package if necessary into an environment where integration tests can be run.
post-integration-test: Perform actions required after integration tests have been executed. This may include cleaning up the environment.

First, the maven-surefire-plugin is configured so that integration tests are excluded from the standard build lifecycle:

```xml
<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-surefire-plugin</artifactId>
   <version>2.10</version>
   <configuration>
      <excludes>
         <exclude>**/*IntegrationTest.java</exclude>
      </excludes>
   </configuration>
</plugin>
```

Exclusions are done via ant-style path expressions, so all integration tests must follow this pattern and end with 'IntegrationTest.java'. Next, the cargo-maven2-plugin is used, as Cargo comes with top-notch out-of-the-box support for embedded web servers. Of course, if the server environment requires specific configuration, cargo also knows how to construct the server out of an archived package, as well as deploy to an external server.

```xml
<plugin>
   <groupId>org.codehaus.cargo</groupId>
   <artifactId>cargo-maven2-plugin</artifactId>
   <version>1.1.3</version>
   <configuration>
      <wait>true</wait>
      <container>
         <containerId>jetty7x</containerId>
         <type>embedded</type>
      </container>
      <configuration>
         <properties>
            <cargo.servlet.port>8080</cargo.servlet.port>
         </properties>
      </configuration>
   </configuration>
</plugin>
```

An embedded Jetty 7 web server is defined, listening on port 8080.
Notice the wait flag being set to true; this is because for newer versions of cargo (1.1.0 upwards) the default value of the flag changed from true to false, due to a bug. We want to be able to start the project by simply running mvn cargo:start, especially during the development phase, so the flag should be active. However, when running the integration tests we want the server to start, let the tests run and then stop, which is why the flag will be overridden later on. In order for the package phase to generate a deployable war file, the packaging of the project must be <packaging>war</packaging>. Next, a new integration Maven profile is created, so that the integration tests run only when this profile is active, and not as part of the standard build lifecycle:

```xml
<profiles>
   <profile>
      <id>integration</id>
      <build>
         <plugins>
            ...
         </plugins>
      </build>
   </profile>
</profiles>
```

It is this profile that will contain all the remaining configuration. Now the Jetty server is configured to start in the pre-integration-test phase and stop in the post-integration-test phase:

```xml
<plugin>
   <groupId>org.codehaus.cargo</groupId>
   <artifactId>cargo-maven2-plugin</artifactId>
   <configuration>
      <wait>false</wait>
   </configuration>
   <executions>
      <execution>
         <id>start-server</id>
         <phase>pre-integration-test</phase>
         <goals>
            <goal>start</goal>
         </goals>
      </execution>
      <execution>
         <id>stop-server</id>
         <phase>post-integration-test</phase>
         <goals>
            <goal>stop</goal>
         </goals>
      </execution>
   </executions>
</plugin>
```

This ensures the cargo:start and cargo:stop goals execute before and after the integration-test phase. Note that because there are two individual execution definitions, the id element must be present (and different) in both, so that Maven can accept the configuration.
Next, the maven-surefire-plugin configuration needs to be overridden inside the integration profile, so that the integration tests which were excluded in the default lifecycle will now be included and run:

```xml
<plugins>
   <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <executions>
         <execution>
            <phase>integration-test</phase>
            <goals>
               <goal>test</goal>
            </goals>
            <configuration>
               <excludes>
                  <exclude>none</exclude>
               </excludes>
               <includes>
                  <include>**/*IntegrationTest.java</include>
               </includes>
            </configuration>
         </execution>
      </executions>
   </plugin>
</plugins>
```

There are a few things worth noting:

1. The test goal of the maven-surefire-plugin is executed in the integration-test phase; at this point, Jetty is already started with the project deployed, so the integration tests should run with no problems.
2. The integration tests are now included in the execution. In order to achieve this, the exclusions are also overridden – this is because of the way Maven handles overriding plugin configurations inside profiles. The base configuration is not completely overridden, but rather augmented with new configuration elements inside the profile. Because of this, the original <excludes> configuration, which excluded the integration tests in the first place, is still present in the profile and needs to be overridden, or it would conflict with the <includes> configuration and the tests would still not run.
3. Note that, since there is only a single <execution> element, there is no need for an id to be defined.

Now, the entire process can run: mvn clean install -Pintegration

Conclusion

The step-by-step configuration above covers the entire process of setting up integration testing as part of the project lifecycle. Usually this is set up to run in a Continuous Integration environment, preferably after each commit.
If the CI server already has a server running and consuming ports, then the cargo configuration will have to deal with that scenario, which I will cover in a future post. Reference: How to set up Integration Testing with the Maven Cargo plugin from our JCG partner Eugen Paraschiv at the baeldung blog....

Computer Science Education in High Demand

The growing need for qualified computer programmers and the availability of free, online education programs is what inspired today's post by Olivia Leonardi. She is looking to add to a discussion on Java Code Geeks that laid out 27 things every programmer needs to know by discussing ways people can become web professionals, something necessary before they can hold those tenets dear. Leonardi is a writer and researcher for a website offering a wealth of information about entering the computer science field, including jobs and careers for those who have completed computer science programs.

Computer Science Education in High Demand

Despite the growing importance of computer programming skills both in the working world and in everyday life, most schools fail to even cover the basics or broach the subject. But because there has been a longstanding will, there are now many ways. For many eager learners, this gap in classroom material and formal instruction has been filled by online programs that are rapidly gaining esteem in the professional world. This is sorely needed, as the job outlook for recent graduates is bleak. In a recent paper, Northwestern University economist Robert J. Gordon asserts that the US should prepare for "an extended period of slowing growth, with economic expansion getting ever more sluggish and the bottom 99% getting the short end of the stick". However, despite stubbornly high unemployment and low rates of growth, the long-term job prospects for computer science related fields remain remarkably strong. The Bureau of Labor Statistics predicts a 28% growth rate from 2010 to 2020, well above average for most other industries, excluding some healthcare fields. In a 2011 Forbes poll to determine which master's degree programs would provide the best long-term opportunities, computer science tied physician assistant studies for the advanced degree with the best job prospects.
While many other fields see half of their graduates slide into unemployment, computer science programs can prepare graduates for competitive offers from growing companies. Computer science is also a very lucrative field of study, as mid-career median pay averages $109,000, more than twice the national average salary of $41,000 annually. "We're in the midst of a technology wave, and computer scientists are so highly valued," says Al Lee, director of quantitative analysis at Payscale. "As long as people and businesses use technology, computer science degree-holders will be in demand," Lee adds. Yet, despite the growing need in businesses around the globe, most students still never receive comprehensive computer literacy training. In the UK, where the number of people studying programming has fallen by a third in the past four years, Prime Minister David Cameron recently admitted that the government is not doing enough to teach the next generation about computer science. Google executive chairman Eric Schmidt also recently pointed out that while students learn how to use software, they are not taught how software is made. The need for programmers has led many hopeful programmers to seek out rapidly growing independent resources. Dream in Code, for example, is a site that allows users to learn the fundamental elements of programming by browsing its content, and those serious about learning can sign up to become permanent members. The expansive content of Dream in Code covers almost every programming language and consists of a broad base of expert users. W3Schools also offers an exhaustive scope of information on web technologies, including tutorials from simple HTML through complex AJAX and Server Side Scripting. Computer programmers at every level can learn or brush up on skills at Google Code University as well.
Offering courses on AJAX programming, algorithms, distributed systems, web security and languages, as well as novice guides on Linux, databases and SQL, GCU includes relevant material for any programmer. Each course consists of simple tutorials that cover basic steps, as well as video lectures from university professors and professionals that are licensed under Creative Commons, meaning anyone can use the material or feature it in their own classes. Some traditional schools have already noted the trend toward free or cheap online community resources for computer literacy and begun to offer university materials through similar platforms. MIT and UC Berkeley, among others, have pioneered EdX, which hosts most of their content online and free of charge. The shift toward online resources for computer education is allowing many who would otherwise never have the opportunity to acquire first-rate skills in a field that will be among the most marketable for years to come. In the coming years, as computers and technology are only expected to become further ingrained in our lives, these resources will allow ambitious and focused students to lead the way. Reference: Computer Science Education in High Demand from our W4G partner Olivia Leonardi....

Pragmatic Thinking and Learning – how to think consciously about thinking and learning

Firstly, I think every programmer should read this book; even more, anyone whose career requires constantly learning new things and skills of effective thinking and problem solving should read this book as well. Why? Because in this publication the author really carefully gathered the available scientific knowledge about how our brain works, how it processes information and how it stores new knowledge. More importantly, he described how we can change our behavior to make the process of learning and problem solving most effective. And I don't think I need to convince anyone that effective learning and thinking is very useful (if not necessary) in our work. The book itself covers various areas of how our brain works, how we learn and solve problems. The most interesting and useful topics included in this title are listed below.

Dreyfus skill model

Skill level, according to Dreyfus, is divided into five levels:
– Novice: you need step by step instructions (that new Hello World application using the new language/web framework you have done recently?) to get things done. You have problems with troubleshooting or doing anything that is not described in the "recipe".
– Advanced Beginner: you can do something on your own, add a component which is not described in the tutorial, etc., but it is still difficult to solve problems.
– Competent: you can troubleshoot, and you can do a lot on your own without tutorials and detailed instructions. Your actions are based on past experience.
– Proficient: you are able to understand the big picture and the concept around which a framework/library was built. You can also apply your knowledge to a similar context.
– Expert: you simply, intuitively know the answer and (what can be most surprising) sometimes can't explain why you chose that way and not the other.

The majority of people in most areas are in one of the first three levels. Proficiency and being an expert requires a lot of learning, trying, failing and deriving knowledge from others' experience.
Only about 1 to 5 percent of people are experts in something.

L-Mode and R-Mode

Our mind works in two modes: Linear Mode and Rich Mode. Most of the time we use the first, but sometimes when we are stuck with a problem, it is good to give Rich Mode some space to start working. We can do it by just going for a walk, taking a shower, mowing the lawn, etc. Any tedious task not requiring full concentration will do the job. When our mind isn't busy with constant thinking, it can switch to Rich Mode and surprisingly deliver the answer while we weren't (consciously) thinking about the problem. I guess that is why I liked one of my previous jobs, where my desk was quite far from the toilets so I needed to take a small walk a few times a day.

Write Drunk, Revise Sober

Don't strive for perfection; try to unleash your creativity. Not aiming to be 100% accurate and perfect is most important when doing first drafts, sketching a prototype and researching new areas. Just relax and allow yourself to be creative. Don't care about some inaccuracies and minor errors. They will be taken into account later; now it's time to create something new.

Target SMART Objectives

It is very important to pick proper objectives in your life. We have all heard about those New Year's resolutions which are abandoned by the end of January. Their problem is that they are not well-thought-out objectives. To make our life better and our objectives easier to achieve we should follow these five rules. An objective should be:
– Specific: the more detailed it is, the better. Instead of "I will lose weight" you should say "I will lose 10kg".
– Measurable: connected with the first one. If an objective is specific, it will also be easy to measure. 10 kilos are easy to check. So you should try to define your goals so they are easy to measure.
– Achievable.
It's not the best idea to say "I will learn Scala in one week", because it is not specific, hard to measure and, most importantly, very hard or even impossible to achieve in such a short amount of time. Instead you should say "I will learn how to create a console calculator in Scala in one week". And that is definitely doable.
– Relevant: if you don't like Microsoft and all its products, picking C# as a new language to learn won't work. You should care about your objective. The warmer the feelings you have about your objective, the better. You will find it impossible to motivate yourself if you hate or don't like what you are trying to achieve. In our example, you should pick a language you like or have positive associations with; maybe it is Scala (the next Java), maybe Kotlin (because it is also the name of a ketchup producer in Poland).
– Time-Boxed: you should always include information about the deadline in your objective. "I will pass this certificate" seems ok, but when? In three months or in five years? Five years makes this objective almost useless.

Read Deliberately – SQ3R

When reading a book about a subject you want to learn, try to follow the SQ3R rule:
– Survey: scan the table of contents and chapter summaries to get a rough idea of what the book is about and what knowledge it contains
– Question: write down questions that come to your mind after scanning
– Read: read the book entirely
– Recite: summarize, take some notes using your own words and your understanding of the subject
– Review: reread, update your notes, join or start a discussion with someone about what you've learnt/read

A similar reading technique called PQ RAR is described here.

Manage your knowledge

You should have a place to gather your knowledge and things you think might be useful in the future. It can be a personal wiki, or notes written in Evernote or Springpad. And another important thing: every time you have an idea you should be able to write it down, using either a classic paper notepad or an application on your mobile phone.
You should choose something you can easily take with you everywhere, or almost everywhere.

Some valuable quotes

And for the end, some quotes from the book I found really intriguing:
– You got to be careful if you don't know where you're going, because you might not get there.
– Time can't be created or destroyed, only allocated.
– Give yourself permission to fail; it's the path to success.
– Inaction is the enemy, not error.
– Remember the danger doesn't lie in doing something wrong; it lies in doing nothing at all. Don't be afraid to make mistakes.

Summary

As I wrote in the beginning, this book is really, really worth reading if you want to squeeze more out of your brain. It will help you optimize your learning and thinking processes so you can be more effective at your day-to-day work without spending more on hardware, software or more comfortable furniture. And what about you? Do you have your own special tricks to learn faster or solve problems more easily? If yes, please share them in the comments. Reference: Pragmatic Thinking and Learning – how to think consciously about thinking and learning from our JCG partner Tomasz Dziurko at the Code Hard Go Pro blog....

A Generic and Concurrent Object Pool

In this post we will take a look at how we can create an object pool in Java. In recent years, the performance of the JVM has multiplied manifold, so object pooling for better performance has been made almost redundant for most types of objects. In essence, the creation of objects is no longer considered as expensive as it was before. However, there are some kinds of objects that certainly prove costly to create. Objects such as threads, database connection objects etc. are not lightweight objects and are slightly more expensive to create. In any application we require the use of multiple objects of the above kind. So it would be great if there was a very easy way to create and maintain an object pool of that type, so that objects can be dynamically used and reused without the client code being bothered about the life cycle of the objects.

Before actually writing the code for an object pool, let us first identify the main requirements that any object pool must answer:

- The pool must let clients use an object if any is available.
- It must reuse the objects once they are returned to the pool by a client.
- If required, it must be able to create more objects to satisfy growing demands of the client.
- It must provide a proper shutdown mechanism, such that on shutdown no memory leaks occur.

Needless to say, the above points will form the basis of the interface that we will expose to our clients. So our interface declaration will be as follows:

```java
package com.test.pool;

/**
 * Represents a cached pool of objects.
 *
 * @author Swaranga
 *
 * @param <T> the type of object to pool.
 */
public interface Pool<T> {

    /**
     * Returns an instance from the pool.
     * The call may be a blocking one or a non-blocking one
     * and that is determined by the internal implementation.
     *
     * If the call is a blocking call,
     * the call returns immediately with a valid object
     * if available, else the thread is made to wait
     * until an object becomes available.
     * In case of a blocking call,
     * it is advised that clients react
     * to {@link InterruptedException} which might be thrown
     * when the thread waits for an object to become available.
     *
     * If the call is a non-blocking one,
     * the call returns immediately irrespective of
     * whether an object is available or not.
     * If any object is available the call returns it,
     * else the call returns <code>null</code>.
     *
     * The validity of the objects is determined using the
     * {@link Validator} interface, such that
     * an object <code>o</code> is valid if
     * <code>Validator.isValid(o) == true</code>.
     *
     * @return T one of the pooled objects.
     */
    T get();

    /**
     * Releases the object and puts it back to the pool.
     *
     * The mechanism of putting the object back to the pool is
     * generally asynchronous,
     * however future implementations might differ.
     *
     * @param t the object to return to the pool
     */
    void release(T t);

    /**
     * Shuts down the pool. In essence this call will not
     * accept any more requests
     * and will release all resources.
     * Releasing resources is done
     * via the <code>invalidate()</code>
     * method of the {@link Validator} interface.
     */
    void shutdown();
}
```

The above interface is intentionally made very simple and generic to support any type of objects. It provides methods to get/return an object from/to the pool. It also provides a shutdown mechanism to dispose of the objects. Now we try to create an implementation of the above interface. But before doing that, it is important to note that an ideal release() method will first try to check if the object returned by the client is still reusable. If yes, then it will return it to the pool; else the object has to be discarded. We want every implementation of the Pool interface to follow this rule. So before creating a concrete implementation, we create an abstract implementation that imposes this restriction on subsequent implementations.
Our abstract implementation will be called, surprise, AbstractPool and its definition will be as follows:

```java
package com.test.pool;

/**
 * Represents an abstract pool, that defines the procedure
 * of returning an object to the pool.
 *
 * @author Swaranga
 *
 * @param <T> the type of pooled objects.
 */
abstract class AbstractPool<T> implements Pool<T> {

    /**
     * Returns the object to the pool.
     * The method first validates the object to see if it is
     * re-usable and then returns it to the pool.
     *
     * If the object validation fails,
     * some implementations
     * will try to create a new one
     * and put it into the pool; however
     * this behaviour is subject to change
     * from implementation to implementation
     */
    @Override
    public final void release(T t) {
        if (isValid(t)) {
            returnToPool(t);
        } else {
            handleInvalidReturn(t);
        }
    }

    protected abstract void handleInvalidReturn(T t);

    protected abstract void returnToPool(T t);

    protected abstract boolean isValid(T t);
}
```

In the above class, we have made it mandatory for object pools to validate an object before returning it to the pool. To customize the behaviour of their pools, the implementations are free to choose the way they implement the three abstract methods. They will decide, using their own logic, how to check if an object is valid for reuse [the isValid() method], what to do if the object returned by a client is not valid [the handleInvalidReturn() method] and the actual logic to return a valid object to the pool [the returnToPool() method]. Now, having the above set of classes, we are almost ready for a concrete implementation. But the catch is that since the above classes are designed to support generic object pools, a generic implementation of the above classes will not know how to validate an object [since the objects will be generic :-)]. Hence we need something else that will help us in this.
What we actually need is a common way to validate an object so that the concrete Pool implementations will not have to bother about the type of objects being validated. So we introduce a new interface, Validator, that defines methods to validate an object. Our definition of the Validator interface will be as follows:

```java
package com.test.pool;

/**
 * Represents the functionality to
 * validate an object of the pool
 * and to subsequently perform cleanup activities.
 *
 * @author Swaranga
 *
 * @param <T> the type of objects to validate and cleanup.
 */
public interface Validator<T> {

    /**
     * Checks whether the object is valid.
     *
     * @param t the object to check.
     *
     * @return true if the object is valid, else false.
     */
    public boolean isValid(T t);

    /**
     * Performs any cleanup activities
     * before discarding the object.
     * For example, before discarding
     * database connection objects,
     * the pool will want to close the connections.
     * This is done via the
     * invalidate() method.
     *
     * @param t the object to cleanup
     */
    public void invalidate(T t);
}
```

The above interface defines methods to check if an object is valid and also a method to invalidate an object. The invalidate method should be used when we want to discard an object and clear up any memory used by that instance. Note that this interface has little significance by itself and makes sense only when used in the context of an object pool. So we define this interface inside the top level Pool interface. This is analogous to the Map and Map.Entry interfaces in the Java Collections Library. Hence our Pool interface becomes as follows:

```java
package com.test.pool;

/**
 * Represents a cached pool of objects.
 *
 * @author Swaranga
 *
 * @param <T> the type of object to pool.
 */
public interface Pool<T> {

    /**
     * Returns an instance from the pool.
     * The call may be a blocking one or a non-blocking one
     * and that is determined by the internal implementation.
     *
     * If the call is a blocking call,
     * the call returns immediately with a valid object
     * if available, else the thread is made to wait
     * until an object becomes available.
     * In case of a blocking call,
     * it is advised that clients react
     * to {@link InterruptedException} which might be thrown
     * when the thread waits for an object to become available.
     *
     * If the call is a non-blocking one,
     * the call returns immediately irrespective of
     * whether an object is available or not.
     * If any object is available the call returns it,
     * else the call returns <code>null</code>.
     *
     * The validity of the objects is determined using the
     * {@link Validator} interface, such that
     * an object <code>o</code> is valid if
     * <code>Validator.isValid(o) == true</code>.
     *
     * @return T one of the pooled objects.
     */
    T get();

    /**
     * Releases the object and puts it back to the pool.
     *
     * The mechanism of putting the object back to the pool is
     * generally asynchronous,
     * however future implementations might differ.
     *
     * @param t the object to return to the pool
     */
    void release(T t);

    /**
     * Shuts down the pool. In essence this call will not
     * accept any more requests
     * and will release all resources.
     * Releasing resources is done
     * via the <code>invalidate()</code>
     * method of the {@link Validator} interface.
     */
    void shutdown();

    /**
     * Represents the functionality to
     * validate an object of the pool
     * and to subsequently perform cleanup activities.
     *
     * @author Swaranga
     *
     * @param <T> the type of objects to validate and cleanup.
     */
    public static interface Validator<T> {

        /**
         * Checks whether the object is valid.
         *
         * @param t the object to check.
         *
         * @return true if the object is valid, else false.
         */
        public boolean isValid(T t);

        /**
         * Performs any cleanup activities
         * before discarding the object.
         * For example, before discarding
         * database connection objects,
         * the pool will want to close the connections.
         * This is done via the
         * invalidate() method.
         *
         * @param t the object to cleanup
         */
        public void invalidate(T t);
    }
}
```

We are almost ready for a concrete implementation. But before that we need one final weapon, which is actually the most important weapon of an object pool. It is called 'the ability to create new objects'. Since our object pools will be generic, they must have knowledge of how to create new objects to populate the pool. This functionality must also not depend on the type of the object pool and must be a common way to create new objects. The way to do this will be an interface, called ObjectFactory, that defines just one method: how to create a new object. Our ObjectFactory interface is as follows:

```java
package com.test.pool;

/**
 * Represents the mechanism to create
 * new objects to be used in an object pool.
 *
 * @author Swaranga
 *
 * @param <T> the type of object to create.
 */
public interface ObjectFactory<T> {

    /**
     * Returns a new instance of an object of type T.
     *
     * @return T a new instance of the object of type T
     */
    public abstract T createNew();
}
```

We are finally done with our helper classes and now we will create a concrete implementation of the Pool interface. Since we want a pool that can be used in concurrent applications, we will create a blocking pool that blocks the client if no objects are available in the pool. The blocking mechanism will block indefinitely until an object becomes available. This kind of implementation begs for another method that will block only for a given time-out period: if any object becomes available before the time-out, that object is returned; otherwise, after the time-out, instead of waiting forever, null is returned. This implementation is analogous to a LinkedBlockingQueue implementation of the Java Concurrency API, and thus before implementing the actual class we expose another interface, BlockingPool, which is analogous to the BlockingQueue interface of the Java Concurrency API.
Hence the BlockingPool interface declaration is as follows:

```java
package com.test.pool;

import java.util.concurrent.TimeUnit;

/**
 * Represents a pool of objects that makes the
 * requesting threads wait if no object is available.
 *
 * @author Swaranga
 *
 * @param <T> the type of objects to pool.
 */
public interface BlockingPool<T> extends Pool<T> {

    /**
     * Returns an instance of type T from the pool.
     *
     * The call is a blocking call,
     * and client threads are made to wait
     * indefinitely until an object is available.
     * The call implements a fairness algorithm
     * that ensures that a FCFS service is implemented.
     *
     * Clients are advised to react to InterruptedException.
     * If the thread is interrupted while waiting
     * for an object to become available,
     * the current implementations
     * set the interrupted state of the thread
     * to true and return null.
     * However this is subject to change
     * from implementation to implementation.
     *
     * @return T an instance of the Object
     * of type T from the pool.
     */
    T get();

    /**
     * Returns an instance of type T from the pool,
     * waiting up to the
     * specified wait time if necessary
     * for an object to become available.
     *
     * The call is a blocking call,
     * and client threads are made to wait
     * until an object is available
     * or until the timeout occurs.
     * The call implements a fairness algorithm
     * that ensures that a FCFS service is implemented.
     *
     * Clients are advised to react to InterruptedException.
     * If the thread is interrupted while waiting
     * for an object to become available,
     * the current implementations
     * set the interrupted state of the thread
     * to true and return null.
     * However this is subject to change
     * from implementation to implementation.
     *
     * @param time amount of time to wait before giving up,
     * in units of unit
     * @param unit a TimeUnit determining
     * how to interpret the timeout parameter
     *
     * @return T an instance of the Object
     * of type T from the pool.
     *
     * @throws InterruptedException
     * if interrupted while waiting
     */
    T get(long time, TimeUnit unit) throws InterruptedException;
}
```

And our BoundedBlockingPool implementation will be as follows:

```java
package com.test.pool;

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public final class BoundedBlockingPool<T>
        extends AbstractPool<T>
        implements BlockingPool<T> {

    private int size;
    private BlockingQueue<T> objects;
    private Validator<T> validator;
    private ObjectFactory<T> objectFactory;
    private ExecutorService executor = Executors.newCachedThreadPool();
    private volatile boolean shutdownCalled;

    public BoundedBlockingPool(
            int size,
            Validator<T> validator,
            ObjectFactory<T> objectFactory) {
        super();
        this.objectFactory = objectFactory;
        this.size = size;
        this.validator = validator;
        objects = new LinkedBlockingQueue<T>(size);
        initializeObjects();
        shutdownCalled = false;
    }

    public T get(long timeOut, TimeUnit unit) {
        if (!shutdownCalled) {
            T t = null;
            try {
                t = objects.poll(timeOut, unit);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
            return t;
        }
        throw new IllegalStateException("Object pool is already shutdown");
    }

    public T get() {
        if (!shutdownCalled) {
            T t = null;
            try {
                t = objects.take();
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
            return t;
        }
        throw new IllegalStateException("Object pool is already shutdown");
    }

    public void shutdown() {
        shutdownCalled = true;
        executor.shutdownNow();
        clearResources();
    }

    private void clearResources() {
        for (T t : objects) {
            validator.invalidate(t);
        }
    }

    @Override
    protected void returnToPool(T t) {
        if (validator.isValid(t)) {
            executor.submit(new ObjectReturner<T>(objects, t));
        }
    }

    @Override
    protected void handleInvalidReturn(T t) {
    }

    @Override
    protected boolean isValid(T t) {
        return validator.isValid(t);
    }

    private void initializeObjects() {
        for (int i = 0; i < size; i++) {
            objects.add(objectFactory.createNew());
        }
    }

    private class ObjectReturner<E> implements Callable<Void> {
        private BlockingQueue<E> queue;
        private E e;

        public ObjectReturner(BlockingQueue<E> queue, E e) {
            this.queue = queue;
            this.e = e;
        }

        public Void call() {
            // Retry the put until it succeeds; remember any interruption
            // and restore the flag afterwards, instead of re-interrupting
            // inside the loop (which would make put() fail forever).
            boolean interrupted = false;
            while (true) {
                try {
                    queue.put(e);
                    break;
                } catch (InterruptedException ie) {
                    interrupted = true;
                }
            }
            if (interrupted) {
                Thread.currentThread().interrupt();
            }
            return null;
        }
    }
}
```

The above is a very basic object pool backed internally by a LinkedBlockingQueue. The only method of interest is the returnToPool() method. Since the internal storage is a blocking queue, if we tried to put the returned element directly into the LinkedBlockingQueue, it might block the client if the queue is full. But we do not want a client of an object pool to block just for a mundane task like returning an object to the pool. So we have made the actual task of inserting the object into the LinkedBlockingQueue an asynchronous task and submit it to an Executor instance, so that the client thread can return immediately. Now we will use the above object pool in our code. We will use the object pool to pool some database connection objects. Hence we will need a Validator to validate our database connection objects.
Our JDBCConnectionValidator will look like this:

```java
package com.test;

import java.sql.Connection;
import java.sql.SQLException;

import com.test.pool.Pool.Validator;

public final class JDBCConnectionValidator implements Validator<Connection> {

    public boolean isValid(Connection con) {
        if (con == null) {
            return false;
        }
        try {
            return !con.isClosed();
        } catch (SQLException se) {
            return false;
        }
    }

    public void invalidate(Connection con) {
        try {
            con.close();
        } catch (SQLException se) {
        }
    }
}
```

And our JDBCConnectionFactory, which will enable the object pool to create new objects, will be as follows:

```java
package com.test;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

import com.test.pool.ObjectFactory;

public class JDBCConnectionFactory implements ObjectFactory<Connection> {

    private String connectionURL;
    private String userName;
    private String password;

    public JDBCConnectionFactory(
            String driver,
            String connectionURL,
            String userName,
            String password) {
        super();
        try {
            Class.forName(driver);
        } catch (ClassNotFoundException ce) {
            throw new IllegalArgumentException("Unable to find driver in classpath", ce);
        }
        this.connectionURL = connectionURL;
        this.userName = userName;
        this.password = password;
    }

    public Connection createNew() {
        try {
            return DriverManager.getConnection(connectionURL, userName, password);
        } catch (SQLException se) {
            throw new IllegalArgumentException("Unable to create new connection", se);
        }
    }
}
```

Now we create a JDBC object pool using the above Validator and ObjectFactory:

```java
package com.test;

import java.sql.Connection;

import com.test.pool.BoundedBlockingPool;
import com.test.pool.Pool;

public class Main {
    public static void main(String[] args) {
        Pool<Connection> pool =
            new BoundedBlockingPool<Connection>(
                10,
                new JDBCConnectionValidator(),
                new JDBCConnectionFactory("", "", "", ""));
        //do whatever you like
    }
}
```

As a bonus for reading the entire post.
I will provide another implementation of the Pool interface that is essentially a non-blocking object pool. The only difference between this implementation and the previous one is that this one does not block the client if an element is unavailable; it returns null instead. Here it goes:

```java
package com.test.pool;

import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.Semaphore;

public class BoundedPool<T> extends AbstractPool<T> {
   private int size;
   private Queue<T> objects;
   private Validator<T> validator;
   private ObjectFactory<T> objectFactory;
   private Semaphore permits;
   private volatile boolean shutdownCalled;

   public BoundedPool(int size, Validator<T> validator, ObjectFactory<T> objectFactory) {
      super();
      this.objectFactory = objectFactory;
      this.size = size;
      this.validator = validator;
      objects = new LinkedList<T>();
      initializeObjects();
      permits = new Semaphore(size); // one permit per available object
      shutdownCalled = false;
   }

   @Override
   public T get() {
      T t = null;
      if (!shutdownCalled) {
         if (permits.tryAcquire()) {
            t = objects.poll();
         }
      } else {
         throw new IllegalStateException("Object pool already shutdown");
      }
      return t;
   }

   @Override
   public void shutdown() {
      shutdownCalled = true;
      clearResources();
   }

   private void clearResources() {
      for (T t : objects) {
         validator.invalidate(t);
      }
   }

   @Override
   protected void returnToPool(T t) {
      boolean added = objects.add(t);
      if (added) {
         permits.release();
      }
   }

   @Override
   protected void handleInvalidReturn(T t) {
   }

   @Override
   protected boolean isValid(T t) {
      return validator.isValid(t);
   }

   private void initializeObjects() {
      for (int i = 0; i < size; i++) {
         objects.add(objectFactory.createNew());
      }
   }
}
```

Considering we are now two implementations strong, it is better to let users create our pools via a factory with meaningful names. Here is the factory:

```java
package com.test.pool;

import com.test.pool.Pool.Validator;

/**
 * Factory and utility methods for the {@link Pool} and {@link BlockingPool}
 * classes defined in this package.
 * This class supports the following kinds of methods:
 *
 * <ul>
 * <li>A method that creates and returns a default non-blocking
 *     implementation of the {@link Pool} interface.</li>
 * <li>A method that creates and returns a default implementation of
 *     the {@link BlockingPool} interface.</li>
 * </ul>
 *
 * @author Swaranga
 */
public final class PoolFactory {

   private PoolFactory() {
   }

   /**
    * Creates and returns a new object pool that is an implementation of
    * the {@link BlockingPool}, whose size is limited by the size parameter.
    *
    * @param size the number of objects in the pool.
    * @param factory the factory to create new objects.
    * @param validator the validator to validate the re-usability of returned objects.
    *
    * @return a blocking object pool bounded by size
    */
   public static <T> Pool<T> newBoundedBlockingPool(
         int size, ObjectFactory<T> factory, Validator<T> validator) {
      return new BoundedBlockingPool<T>(size, validator, factory);
   }

   /**
    * Creates and returns a new object pool that is an implementation of
    * the {@link Pool}, whose size is limited by the size parameter.
    *
    * @param size the number of objects in the pool.
    * @param factory the factory to create new objects.
    * @param validator the validator to validate the re-usability of returned objects.
    *
    * @return an object pool bounded by size
    */
   public static <T> Pool<T> newBoundedNonBlockingPool(
         int size, ObjectFactory<T> factory, Validator<T> validator) {
      return new BoundedPool<T>(size, validator, factory);
   }
}
```

Thus our clients can now create object pools in a more readable manner:

```java
package com.test;

import java.sql.Connection;

import com.test.pool.Pool;
import com.test.pool.PoolFactory;

public class Main {
   public static void main(String[] args) {
      Pool<Connection> pool = PoolFactory.newBoundedBlockingPool(
            10,
            new JDBCConnectionFactory("", "", "", ""),
            new JDBCConnectionValidator());
      // do whatever you like
   }
}
```

And so ends our long post. This one was long overdue.
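One last technical aside: the non-blocking behaviour of BoundedPool above rests on Semaphore.tryAcquire(), which returns false immediately when no permit is free instead of waiting. A self-contained sketch of that gating (the queue contents and names are invented):

```java
import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.Semaphore;

public class TryAcquireDemo {
    public static void main(String[] args) {
        // One pooled object, one permit.
        Queue<String> objects = new LinkedList<>();
        objects.add("conn-1");
        Semaphore permits = new Semaphore(1);

        // The first caller wins a permit and gets the object.
        String first = permits.tryAcquire() ? objects.poll() : null;
        // The second caller finds no free permit and gets null immediately.
        String second = permits.tryAcquire() ? objects.poll() : null;

        System.out.println(first);
        System.out.println(second);

        // Returning the object releases its permit, making it claimable again.
        objects.add(first);
        permits.release();
        System.out.println(permits.tryAcquire() ? objects.poll() : null);
    }
}
```

The permit count, not the queue itself, is what bounds concurrent access, which is why returnToPool() in BoundedPool only releases a permit after the object is back in the queue.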
Feel free to use it, change it, add more implementations. Happy coding and don’t forget to share! Reference: A Generic and Concurrent Object Pool from our JCG partner Sarma Swaranga at The Java HotSpot blog....

Don’t Prioritize Features!

Estimating the “value” of features is a waste of time. I was in a JAD session once where people argued about whether the annoying beeping (audible on the conference line) was a smoke alarm or a fire alarm. Yes, you can get to an answer, but so what?! The important thing is to solve the problem.

Solutions Versus Features

Everyone on that conference call had an immediate and visceral appreciation of the value of making the beeping stop. That’s the power of solving a problem. The methods of solving the problem – mute the offender, replace the battery, throw the alarm out the window – do not have implicit value. They have an indirect value, in an “end justifies the means” kind of way. But not direct value. The same sort of thing applies when talking about prioritizing features.

Eric Krock (@voximate) just wrote a really good article, Per-Feature ROI Is (Usually) a Stupid Waste of Time, where he does two great things, and (barely) missed an opportunity for a hat trick. The first great thing Eric did was look at the challenges of determining the relative (ordinal or cardinal) value of “several things.” He points out several real-world challenges:

1. When you have a product with several things already and you want to determine the value of yet another thing – how do you allocate a portion of future revenue to the new thing versus the things you already have?
2. When thing A and thing B have to be delivered together to realize value, how do you prioritize things A and B relative to each other?
3. The opportunity cost of having your product manager do a valuation exercise on a bunch of things is high. She could be doing more valuable things.
4. You won’t perform a retrospective on the accuracy of your valuation, so you won’t know if it was a waste of time, and you won’t get better at future exercises.

The second great thing Eric did was reference a Tyner Blain article from early 2007 on measuring the costs of features.
I mean “great” on three levels:

1. As a joke (for folks who don’t know me, figured I’d mention that I’m kidding, just in case you get the wrong idea).
2. There is some good stuff in that earlier costing article about allocation of fixed and variable costs (with a handy reminder).
3. Eric’s article gives me an opportunity to shudder at the language I was using in 2007, see how much some of my thinking has evolved in four years, and improve a bit of it here and now.

What Eric slightly missed is the same thing I completely missed in 2007 – features don’t have inherent value. Solutions to problems do have value. He only slightly missed it because he got the problem manifestation right – it takes a lot of effort, for little reward, to spend time thinking about what features are worth. I also missed the opportunity in an article looking at utility curves as an approach to estimating benefits, written two days after the one on cost allocation. We were both so close! People don’t buy features. They buy solutions.

Valuing Solutions Instead of Features

Estimating the value of solutions addresses a lot of the real problems that Eric calls out. It also has a side benefit of keeping your perspective outside-in versus inside-out. Or, as others often say, it keeps you “market driven.” Anything that you’re doing, as a product manager, that has you focused on understanding your market and your customers and their problems is a good thing. It may even be the most important thing. I would contend that it eliminates objection 3 – the opportunity cost of estimating the value of solutions is minimal or zero. There may be activities with more urgency, but off the top of my head, none that are more important for a product manager. Comment if I’m missing something (it’s late and I just got home from another week on the road).
The way I approach determining the value of a solution is by developing a point of view about how much incremental profit I will get when my product starts solving this additional problem. Revenue can increase from additional sales, or from the ability to increase prices. Cost can increase if new marketing and other operations (launches, PR campaigns, etc.) are required to realize the incremental revenue.

I start with a customer-centric market model. A given solution, or improved solution (as in “solves the problem better,” or “solves more of the problem” – which only applies to some problems), is interesting to some customers, in some market segments. A solution has value when it brings in incremental customers in a targeted market segment. It also has value when it reduces or prevents erosion of your current customer base (in a SaaS or maintenance-revenue model) to competitive solutions. The time you spend thinking about buyer and user personas, the problems they care about, and the nature of those problems (which varies by persona) is not time wasted – or even spent “at the cost of doing something else.”

To make this useful, you have to have a forecast – without solution A, we will sell X; with solution A, we will sell Y (and to whom). A good product manager will be looking at sales, and will be able to reconcile the sales with the projections. That helps with objection 4 (but doesn’t completely address it – you don’t know if your projections were accurate, so you can’t really know if your estimation is accurate).

This also helps you deal with challenge #1. You’ve got a model that says “the current product works great for high school students, but not college students, because they also have problem A, which they solve today by…” Your intention is to create solution A, making your product viable to college students. Allocate the incremental profits from college-student sales to solution A.
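That forecast-driven valuation boils down to simple arithmetic. A back-of-the-envelope sketch, with every figure invented purely for illustration:

```java
public class IncrementalValueSketch {
    public static void main(String[] args) {
        // All figures are invented for illustration only.
        int unitsWithoutA = 1000;     // forecast sales without solution A ("X")
        int unitsWithA = 1200;        // forecast sales with solution A ("Y")
        double profitPerUnit = 50.0;  // contribution margin per sale
        double launchCost = 4000.0;   // incremental marketing/launch spend

        // Incremental profit = extra units sold * margin, less the extra
        // cost required to realize that revenue.
        double incrementalProfit =
                (unitsWithA - unitsWithoutA) * profitPerUnit - launchCost;
        System.out.println(incrementalProfit);
    }
}
```

The point is not the formula, which is trivial, but that the inputs come from a per-segment forecast you can later reconcile against actual sales.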
My approach to challenge #2 is a little more tactical.

Coupled Solutions

There are a couple of ways that Eric’s “must deliver A and B” scenario is interesting when looking at the value of solutions.

Scenario 1: Solution A solves part of problem X for persona M. Solution B solves part of problem X for persona M. Combined, they solve more of problem X for persona M. This makes sense for “more is better” problems – where “more” solution yields “more” value. In this case, I have a forecast (the more time I spend on it, the better it will be) that maps incremental sales to improved solutions. The “first” solution to be released will have more value than the second. If they are being released together, then I don’t care about the allocation – I combine them.

Scenario 2: If, however, the two solutions are valuable to different personas, then I treat them separately – even if they solve “the same problem,” it is not the same problem (for the same person).

Conclusion

Prioritization by “bang for the buck” is worth doing. Just make sure you are prioritizing solutions, not features. Also note: this article talked about valuation – what you do with that valuation, prioritizing by market, can be trickier.

Reference: Don’t Prioritize Features! from our JCG partner Scott Sehlhorst at the Business Analysis | Product Management | Software Requirements blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.