JCG Flashback – 2011 – W37

Hello guys, time for another post in the JCG Flashback series. These posts take you back in time and list the hot articles on Java Code Geeks from one year ago. So, let's see what was popular back then:

5. Simple Twitter: Play Framework, AJAX, CRUD on Heroku — a cool tutorial on building a very basic Ajax-based Twitter clone with the Play! Framework, deployed on Heroku. Check it out!

4. Java Concurrency Tutorial – Semaphores — Java concurrency is always a topic of interest, and this article discusses semaphores and how they can be used to develop concurrent, multi-threaded code.

3. The OpenJDK as the default Java on Linux — an announcement that OpenJDK would be the default Java on Linux from then on (with Java SE 7 being the reference implementation).

2. The Java Logging Mess — an interesting article discussing the logging approaches used in a Java environment and how they have become a bit complicated. Of course, don't miss our other hot article on this topic, 10 Tips for Proper Application Logging.

1. Scala use is less good than Java use for at least half of all Java projects — another article by David Pollak, discussing Scala and how it can be an alternative to Java in many cases and for a certain class of developers. Don't miss it!

That's all, guys. Stay tuned for more, here at Java Code Geeks. And don't forget to share!

Cheers,
Ilias

Spring 3.1 Caching and @CacheEvict

My last blog demonstrated the application of Spring 3.1's @Cacheable annotation, which marks methods whose return values will be stored in a cache. However, @Cacheable is only one of a pair of annotations that the guys at Spring have devised for caching, the other being @CacheEvict. Like @Cacheable, @CacheEvict has value, key and condition attributes. These work in exactly the same way as those supported by @Cacheable, so for more information on them see my previous blog: Spring 3.1 Caching and @Cacheable.

@CacheEvict supports two additional attributes: allEntries and beforeInvocation. If I were a gambling man I'd put money on the more popular of these being allEntries, which is used to completely clear the contents of the cache named by @CacheEvict's mandatory value attribute. The method below demonstrates how to apply allEntries:

```java
@CacheEvict(value = "employee", allEntries = true)
public void resetAllEntries() {
    // Intentionally blank
}
```

resetAllEntries() sets @CacheEvict's allEntries attribute to true. Assuming that the findEmployee(...) method looks like this:

```java
@Cacheable(value = "employee")
public Person findEmployee(String firstName, String surname, int age) {
    return new Person(firstName, surname, age);
}
```

…then calling resetAllEntries() will clear the "employee" cache. This means that in the JUnit test below employee1 will not reference the same object as employee2:

```java
@Test
public void testCacheResetOfAllEntries() {
    Person employee1 = instance.findEmployee("John", "Smith", 22);
    instance.resetAllEntries();
    Person employee2 = instance.findEmployee("John", "Smith", 22);
    assertNotSame(employee1, employee2);
}
```

The second attribute is beforeInvocation. It determines whether data items are cleared from the cache before or after your method is invoked. The code below is pretty nonsensical; however, it does demonstrate that you can apply both @CacheEvict and @Cacheable simultaneously to a method.
```java
@CacheEvict(value = "employee", beforeInvocation = true)
@Cacheable(value = "employee")
public Person evictAndFindEmployee(String firstName, String surname, int age) {
    return new Person(firstName, surname, age);
}
```

In the code above, @CacheEvict deletes any entries in the cache with a matching key before @Cacheable searches the cache. As @Cacheable won't find any entries, it'll call my code, storing the result in the cache. The subsequent call to my method will again invoke @CacheEvict, which will delete any matching entries, with the result that in the JUnit test below the variable employee1 will never reference the same object as employee2:

```java
@Test
public void testBeforeInvocation() {
    Person employee1 = instance.evictAndFindEmployee("John", "Smith", 22);
    Person employee2 = instance.evictAndFindEmployee("John", "Smith", 22);
    assertNotSame(employee1, employee2);
}
```

As I said above, evictAndFindEmployee(...) seems somewhat nonsensical, as I'm applying both @Cacheable and @CacheEvict to the same method. But it's more than that: it makes the code unclear and breaks the Single Responsibility Principle; hence, I'd recommend creating separate cacheable and cache-evict methods. For example, if you have a caching method such as:

```java
@Cacheable(value = "employee", key = "#surname")
public Person findEmployeeBySurname(String firstName, String surname, int age) {
    return new Person(firstName, surname, age);
}
```

then, assuming you need finer cache control than a simple 'clear all', you can easily define its counterpart:

```java
@CacheEvict(value = "employee", key = "#surname")
public void resetOnSurname(String surname) {
    // Intentionally blank
}
```

This is a simple blank marker method that uses the same SpEL expression applied to @Cacheable to evict all Person instances from the cache whose key matches the surname argument.
```java
@Test
public void testCacheResetOnSurname() {
    Person employee1 = instance.findEmployeeBySurname("John", "Smith", 22);
    instance.resetOnSurname("Smith");
    Person employee2 = instance.findEmployeeBySurname("John", "Smith", 22);
    assertNotSame(employee1, employee2);
}
```

In the above code, the first call to findEmployeeBySurname(...) creates a Person object, which Spring stores in the "employee" cache under the key "Smith". The call to resetOnSurname(...) clears all entries from the "employee" cache with a surname of "Smith", and finally the second call to findEmployeeBySurname(...) creates a new Person object, which Spring again stores in the "employee" cache under the key "Smith". Hence, the variables employee1 and employee2 do not reference the same object.

Having covered Spring's caching annotations, the next piece of the puzzle is to look into setting up a practical cache: just how do you enable Spring caching, and which caching implementation should you use? More on that later… Happy coding and don't forget to share!

Reference: Spring 3.1 Caching and @CacheEvict from our JCG partner Roger Hughes at the Captain Debug's Blog....
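As a footnote: the behaviour of @Cacheable and @CacheEvict(allEntries = true) is easy to picture without any Spring machinery at all. The sketch below is my own framework-free analogy (the class name and key format are invented, not Spring's), using a plain ConcurrentHashMap as the "cache":

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Framework-free analogy of @Cacheable / @CacheEvict(allEntries = true).
public class EmployeeCacheSketch {

    private final Map<String, Object> cache = new ConcurrentHashMap<>();

    // @Cacheable analogue: the default key is built from the method arguments,
    // and the body only runs on a cache miss.
    public Object findEmployee(String firstName, String surname, int age) {
        String key = firstName + "|" + surname + "|" + age;
        return cache.computeIfAbsent(key, k -> new Object()); // stand-in for new Person(...)
    }

    // @CacheEvict(allEntries = true) analogue: wipe the whole cache.
    public void resetAllEntries() {
        cache.clear();
    }

    public static void main(String[] args) {
        EmployeeCacheSketch instance = new EmployeeCacheSketch();
        Object employee1 = instance.findEmployee("John", "Smith", 22);
        Object employee2 = instance.findEmployee("John", "Smith", 22);
        System.out.println(employee1 == employee2); // true: second call was a cache hit
        instance.resetAllEntries();
        Object employee3 = instance.findEmployee("John", "Smith", 22);
        System.out.println(employee1 == employee3); // false: entry was evicted and recreated
    }
}
```

The assertNotSame checks in the JUnit tests above pass for exactly the reason the last two lines differ: eviction forces the next lookup to create a fresh object.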

Does It Get Boring To Be A Programmer?

Programmers are people who create computer programs. (I'll skip the discussion of whether it should be "programmer", "developer", "engineer", "coder" or whatever. You know what I'm talking about.) What makes programming different from most professions is that it's far more diverse: you can do new things every day, because new technologies emerge all the time. Not only that, but programming is actually a creative job. Given a couple of rules and foundations, you can build whatever you like, something nobody else has built, just as the poet starts with words, metre and rhyme rules and comes up with a poem, or the composer starts with notes and the rules of harmony and comes up with a song. The good thing about programming is that there are always new things to start with and different rules to adhere to.

So, programming is a well-paid, creative profession that gives you the opportunity to do a lot of different things? Not exactly. Programming has another feature: it has to be practical, to serve a business purpose. That's why many programmers tend to do the same thing over and over again: website after website, ERP customization after ERP customization. Then they change jobs to do a very similar task, partly because they are already experienced with a given technology or process and other companies want them for that experience, and partly because most companies do the same things: they build websites for clients, they build or customize ERPs, or they have their own online service that has to be supported or created, but which is essentially the same as what you did before.

So, in fact, you handle HTTP requests and access the database all day long, day after day, with a couple of scheduled jobs or some indexing thrown in, using the same technology for years and years? Sadly, yes. So it's not so creative now, is it? How come people don't get bored? They do.
At that point, how can programmers make their work interesting, if what they do is write very similar, mundane functionality all the time? They learn new technologies. If you're lucky, you can work on new projects and choose new technologies at work every year or so. If not so lucky, you can still write pet projects at home using cutting-edge technologies, which you can later bring to your workplace. There are new languages to be learnt, new frameworks to be explored and new storage engines to be used every day: Scala, Groovy, Go. NoSQL. Node.js. MapReduce. Hadoop. These are new paradigms that serve new purposes, and if you are a real programmer you should be fascinated and interested enough at least to read about them, and probably to use some of them for a proof of concept.

Having the ability to always explore something new is what makes the professional life of a programmer so much less boring. Even if at work you do the same thing, you can use your skills to make your own projects. And if they are good, you can open-source them so that other people use them, or they can become popular and you can eventually quit your job. These are very realistic opportunities, which makes it even less boring.

But does it get boring? Yes, it does, for two reasons. The first is that some programmers are just lazy and not interested in anything other than the paycheck. These are the ones for whom programming is not a hobby; it's merely a profession. It's their choice, so I'll leave them aside. The other type are people like me, who like what they are doing, stay up to date and like to learn new things all the time. People like me can get bored when, at some point, each new technology becomes too easy: when you are proficient in everything you use and you learn a new framework in six hours and a new language in two days.
New concepts like MapReduce, the CAP theorem and API design become easily mastered, because you already have so much experience and have seen so many things. Each new step is now easy, there is no challenge anymore, so you get bored. There are a couple of options from this point:

- Compensate for the boredom of your professional life with something really interesting in your "real" life. But when you go from the programming world of infinite options to a pretty limited real world, not many things seem challenging. I don't say this is not a good option; by all means it is. It's just not interesting from a programming point of view, but feel free to get out into the real world if you are less bored there.
- Seek to get promoted to management, that is, change the nature of your work and start using your expertise to direct the process in your company rather than program. But usually a great programmer doesn't like not programming; you would at least like to be technically involved in the development process.
- Remain bored and get the paycheck. You are still a professional and there's nothing wrong with this choice. At some point you may even forget that you are bored.
- Start thinking of something new, something nobody else has done or thought of. A product, a framework, it doesn't matter. It occupies your creative thinking and all your current skills, and eventually you may end up with a great new technology or product. You may get bored after that, of course, but you can iterate this step. It can be done at home, at work, or, if your idea requires a lot of time and dedication, as a research project at a university. I guess this is how the internet, Google, p2p, Linux and many more great technologies were born.

Thanks to the people choosing the fourth option, professional life for the majority of developers remains interesting and intriguing. If you get to the point where you are bored by everything, please choose the fourth option.
I haven't yet fully reached that point, but I'll certainly try to use all my programming skills to create something new and cool, rather than sitting quietly and collecting my paycheck. Don't forget to share! Reference: Do Programmers Get Bored? from our JCG partner Bozhidar Bozhanov at Bozho's tech blog....

Duck typing in Java? Well, not exactly

According to Wikipedia, duck typing is a "style of dynamic typing in which an object's methods and properties determine the valid semantics, rather than its inheritance from a particular class or implementation of a specific interface". In simpler words: when I see a bird that walks like a duck and swims like a duck and quacks like a duck, I call that bird a duck.

In dynamically typed languages this feature allows creating functions that do not check the type of the passed object but instead rely on the existence of particular methods/properties within it, throwing a runtime exception when those properties are not found. For instance, in Groovy we could have a method for printing info about some entity:

```groovy
def printEntity = { entity ->
    println "id: ${entity.id}, name: ${entity.name}"
}
```

Let's say we have the following class:

```groovy
class Entity {
    Long id
    String name
}
```

So we can invoke our function:

```groovy
printEntity(new Entity(id: 10L, name: 'MyName1'))
// id: 10, name: MyName1
```

But at the same time we could pass a map as the argument:

```groovy
printEntity([id: 10L, name: 'MyName2'])
// id: 10, name: MyName2
```

Using some metaprogramming magic we could even write the following:

```groovy
class Ghost {
    def propertyMissing(String name) {
        if (name == 'id') {
            return -1L
        } else if (name == 'name') {
            return 'StubName'
        }
    }
}
```

And we will still be able to call our function:

```groovy
printEntity(new Ghost())
// id: -1, name: StubName
```

Welcome to the real world

Fortunately this concept can be used not only in dynamically typed languages but also in ones with a stricter typing model, such as Java. Wikipedia has a good example of a duck typing implementation in Java using the Proxy class. Well, you say, what is the practical use of this, apart from feeling like the wisest guru? :) Let me show a real-life task that was solved in Java using the duck typing technique. In the beginning I had a simple report generator that queried a DB of products and output the id and name of a certain entity. But then the customer said: 'I'd like to also have a link to the entity detail page at our site.
Beautiful, SEO-friendly link. Could you do it for me?' 'Sure', I said. After digging through our codebase I discovered a cool function generateSeoUrl() that does the job. The function takes one argument of type Entity, which is an interface. So my intention was to look at the implementations of Entity and try to use one of them for the report generation. How surprised I was to discover that all of them were part of some home-made ORM tool and that their constructors query the DB to get the entire information about a product. So if I had used the Entity implementations I would have had to deal with one extra query per row of my report, and that was unacceptable, since the report comprised a huge number of rows.

So I decided to try another approach and implement the Entity interface myself, overriding the methods used by generateSeoUrl(). I clicked my IDE shortcut and got surprised again: Entity had about 50 (!!!) methods. Well, I already knew that only getEntityId() and getName() are used by the generateSeoUrl() function, but then again, creating a new class with 50 empty methods just to override 2 of them with useful behaviour didn't seem like a good idea to me. Thus I decided to stop trying to code and start to think :) Extending one of the Entity implementations to prevent it querying the DB, or copy-pasting generateSeoUrl() and adapting it for my needs, were options, but still not beautiful ones. Especially when I remembered duck typing. I said to myself: we have a function that takes an instance of Entity but only uses two methods of this interface, so to complete my task I need something that looks like an Entity and is able to handle the getEntityId() and getName() methods. Since entityId and name were already present in the data used for generating my report, I could reuse them in my new class to stub the data for getEntityId() and getName(). To achieve duck typing we need to create a Proxy whose handler implements the InvocationHandler interface, plus a static method to retrieve an instance of the Proxy.
The final code of my class looks like:

```java
public class ReportEntitySupport implements InvocationHandler {

    public static Entity newInstance(Long entityId, String name) {
        return (Entity) Proxy.newProxyInstance(
                Product.class.getClassLoader(),
                Product.class.getInterfaces(),
                new ReportEntitySupport(entityId, name)
        );
    }

    private final String name;
    private final Long entityId;

    private ReportEntitySupport(Long entityId, String name) {
        this.name = name;
        this.entityId = entityId;
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        if (method.getName().equals("getName")) {
            return this.name;
        } else if (method.getName().equals("getEntityId")) {
            return this.entityId;
        }
        return null;
    }
}
```

So how to use it? Inside my report generator class, while iterating over the ResultSet, I'm using the following:

```java
Long entityId;
String name;
// ...
Entity entity = ReportEntitySupport.newInstance(entityId, name);
String seoUrl = generateSeoUrl(entity);
// ...
```

P.S. This post just illustrates that some concepts uncommon in the Java language can be successfully applied to complete real-life tasks, improving your programming skills and making your code more beautiful.

Reference: Duck typing in Java? Well, not exactly from our JCG partner Evgeny Shepelyuk at jk's blog....
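The Proxy trick above generalizes nicely. The Ducks class below is my own illustrative code (not from any library): it adapts an arbitrary object to an interface by looking up same-named public methods reflectively, which is essentially the Wikipedia-style duck typing implementation in Java:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Generic duck-typing helper: adapts any object to an interface by
// matching method names and parameter types via reflection.
public class Ducks {

    @SuppressWarnings("unchecked")
    public static <T> T duck(Object target, Class<T> iface) {
        InvocationHandler handler = (proxy, method, args) -> {
            // Find a method with the same signature on the target's class...
            Method m = target.getClass().getMethod(method.getName(), method.getParameterTypes());
            // ...and delegate to it; throws NoSuchMethodException if the
            // target doesn't "quack" the way the interface expects.
            return m.invoke(target, args);
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[]{iface}, handler);
    }

    // The interface the caller expects...
    public interface Quacker { String quack(); }

    // ...and a completely unrelated class that happens to have quack().
    public static class Mallard {
        public String quack() { return "quack!"; }
    }

    public static void main(String[] args) {
        Quacker q = duck(new Mallard(), Quacker.class);
        System.out.println(q.quack()); // prints: quack!
    }
}
```

Mallard never implements Quacker; the call works only because a method with the right name and signature exists at runtime, which is exactly the duck typing idea.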

Apache Wicket: Remember Me functionality

It is quite common in web applications to have "Remember Me" functionality that allows a user to be automatically logged in each time he visits our website. Such a feature can be implemented using Spring Security, but in my opinion using a request-based authentication framework with a component-based web framework is not the best idea. These two worlds just do not fit well together, so I prefer my own home-baked solution, which I will present below.

Base project

We start with a simple web application written using the latest, still hot Apache Wicket 6. You can download the complete sources from GitHub and start the application using mvn clean compile jetty:run. The base application consists of two pages:

- Home Page: displays a welcome message for logged-in and not-logged-in users, and either a logout or a login link.
- Login Page: allows the user to log in against a simple in-memory collection of users. Some working login/password pairs: John/john, Lisa/lisa, Tom/tom.

Remember Me functionality

The standard way to implement Remember Me functionality looks as follows:

1. Ask the user if he wants to be remembered and auto-logged-in in the future.
2. If so, save cookies with the login and password on his computer.
3. For every new user coming to our website, check if the cookies from step 2 are present and, if so, auto-login the user.
4. When he manually logs out, remove the cookies so the data used to auto-login is cleared.

Point 2 needs some explanation. In this example app we are going to save the login and a non-hashed, non-encrypted password in cookies. In a real scenario this is unacceptable. Instead, you should consider storing a hashed and salted password, so that even if someone intercepts the user's cookie the password remains secret and more work is needed to decode it. Update: Michał Matłoka posted two very interesting links on how this could be done in real systems. Those approaches do not even use the password or password hashes. For more details please look at his comment below this post.
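One hedged sketch of the "don't put the password in the cookie" idea (the token format and class below are my own invention, not part of the example app): sign a login plus expiry timestamp with a server-side HMAC key, store only that token in the cookie, and verify the signature when the user returns. No password, plain or hashed, ever reaches the cookie:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Sketch: self-verifying remember-me token of the form "login|expiry|HMAC(login|expiry)".
public class RememberMeToken {

    private final SecretKeySpec key;

    public RememberMeToken(byte[] serverSecret) {
        this.key = new SecretKeySpec(serverSecret, "HmacSHA256");
    }

    public String issue(String login, long expiresAtMillis) throws Exception {
        String payload = login + "|" + expiresAtMillis;
        return payload + "|" + sign(payload);
    }

    /** Returns the login if the token is authentic and unexpired, otherwise null. */
    public String verify(String token, long nowMillis) throws Exception {
        String[] parts = token.split("\\|");
        if (parts.length != 3) return null;
        String payload = parts[0] + "|" + parts[1];
        if (!sign(payload).equals(parts[2])) return null;      // tampered with
        if (Long.parseLong(parts[1]) < nowMillis) return null; // expired
        return parts[0];
    }

    private String sign(String payload) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        return Base64.getUrlEncoder().withoutPadding()
                     .encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
    }
}
```

A production version would also compare signatures in constant time (e.g. MessageDigest.isEqual) and reject logins containing the '|' separator; this sketch skips both for brevity.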
Step 1: As a User I want to decide if I want to use the "Remember Me" feature

Link to commit with this step. To let the user notify the application that he wants to use "Remember Me" functionality, we will simply add a checkbox to the login page. So we need to amend the LoginPage Java and HTML files a bit (new stuff is highlighted):

```html
<form wicket:id="form" class="form-horizontal">
    <fieldset>
        <legend>Please login</legend>
    </fieldset>
    <div class="control-group">
        <div wicket:id="feedback"></div>
    </div>
    <div class="control-group">
        <label class="control-label" for="login">Login</label>
        <div class="controls">
            <input type="text" id="login" wicket:id="login" />
        </div>
    </div>
    <div class="control-group">
        <label class="control-label" for="password">Password</label>
        <div class="controls">
            <input type="password" id="password" wicket:id="password" />
        </div>
    </div>
    <div class="control-group">
        <div class="controls">
            <label class="checkbox">
                <input type="checkbox" wicket:id="rememberMe" /> Remember me on this computer
            </label>
        </div>
    </div>
    <div class="form-actions">
        <input type="submit" wicket:id="submit" value="Login" title="Login" class="btn btn-primary" />
    </div>
</form>
```

```java
private String login;
private String password;
private boolean rememberMe;

public LoginPage() {
    Form<Void> loginForm = new Form<Void>("form");
    add(loginForm);

    loginForm.add(new FeedbackPanel("feedback"));
    loginForm.add(new RequiredTextField<String>("login", new PropertyModel<String>(this, "login")));
    loginForm.add(new PasswordTextField("password", new PropertyModel<String>(this, "password")));
    loginForm.add(new CheckBox("rememberMe", new PropertyModel<Boolean>(this, "rememberMe")));

    Button submit = new Button("submit") {
        // (...)
    };

    loginForm.add(submit);
}
```

Now we are ready for the next step.

Step 2: As a System I want to save the login and password in cookies

Link to commit with this step. First we need a CookieService that will encapsulate all the logic responsible for working with cookies: saving, listing and clearing a cookie when needed.
The code is rather simple; we work with the WebResponse and WebRequest classes to modify cookies in the user's browser.

```java
public class CookieService {

    public Cookie loadCookie(Request request, String cookieName) {
        List<Cookie> cookies = ((WebRequest) request).getCookies();

        if (cookies == null) {
            return null;
        }

        for (Cookie cookie : cookies) {
            if (cookie.getName().equals(cookieName)) {
                return cookie;
            }
        }

        return null;
    }

    public void saveCookie(Response response, String cookieName, String cookieValue, int expiryTimeInDays) {
        Cookie cookie = new Cookie(cookieName, cookieValue);
        cookie.setMaxAge((int) TimeUnit.DAYS.toSeconds(expiryTimeInDays));
        ((WebResponse) response).addCookie(cookie);
    }

    public void removeCookieIfPresent(Request request, Response response, String cookieName) {
        Cookie cookie = loadCookie(request, cookieName);

        if (cookie != null) {
            ((WebResponse) response).clearCookie(cookie);
        }
    }
}
```

Then, when the user checks "Remember Me" on the LoginPage, we have to save cookies in his browser:

```java
Button submit = new Button("submit") {
    @Override
    public void onSubmit() {
        UserService userService = WicketApplication.get().getUserService();
        User user = userService.findByLoginAndPassword(login, password);

        if (user == null) {
            error("Invalid login and/or password. Please try again.");
        } else {
            UserSession.get().setUser(user);

            if (rememberMe) {
                CookieService cookieService = WicketApplication.get().getCookieService();
                cookieService.saveCookie(getResponse(), REMEMBER_ME_LOGIN_COOKIE, user.getLogin(), REMEMBER_ME_DURATION_IN_DAYS);
                cookieService.saveCookie(getResponse(), REMEMBER_ME_PASSWORD_COOKIE, user.getPassword(), REMEMBER_ME_DURATION_IN_DAYS);
            }

            setResponsePage(HomePage.class);
        }
    }
};
```

Step 3: As a User I want to be auto-logged-in when I return to the web application

Link to commit with this step. To check whether a user entering our application is a "returning user to auto-login", we have to enrich the logic responsible for creating a new user session.
Currently this is done in the WicketApplication class, which, when requested, creates a new WebSession instance. So every time a new session is created we have to check for the cookies' presence and, if they form a valid user/password pair, auto-login this user. Let's start by extracting the session-related logic into a separate class called SessionProvider. It will need the UserService and CookieService to check for existing users and cookies, so we pass them as references in the constructor.

```java
public class WicketApplication extends WebApplication {

    private UserService userService = new UserService();
    private CookieService cookieService = new CookieService();
    private SessionProvider sessionProvider = new SessionProvider(userService, cookieService);

    @Override
    public Session newSession(Request request, Response response) {
        return sessionProvider.createNewSession(request);
    }
}
```

The role of SessionProvider is to create a new UserSession, check whether the proper cookies are present and, if so, set the logged-in user. Additionally we add a feedback message to inform the user that he was automatically logged in. So let's look at the code:

```java
public class SessionProvider {

    private final UserService userService;
    private final CookieService cookieService;

    public SessionProvider(UserService userService, CookieService cookieService) {
        this.userService = userService;
        this.cookieService = cookieService;
    }

    public WebSession createNewSession(Request request) {
        UserSession session = new UserSession(request);

        Cookie loginCookie = cookieService.loadCookie(request, REMEMBER_ME_LOGIN_COOKIE);
        Cookie passwordCookie = cookieService.loadCookie(request, REMEMBER_ME_PASSWORD_COOKIE);

        if (loginCookie != null && passwordCookie != null) {
            User user = userService.findByLoginAndPassword(loginCookie.getValue(), passwordCookie.getValue());

            if (user != null) {
                session.setUser(user);
                session.info("You were automatically logged in.");
            }
        }

        return session;
    }
}
```

To show the feedback message on HomePage.java we must add a FeedbackPanel there, but for brevity I will omit this here. You can read the commit to see how to do that.
So after these three steps we should have "Remember Me" working. To check it quickly, modify the session timeout in the web.xml file by adding:

```xml
<session-config>
    <session-timeout>1</session-timeout>
</session-config>
```

and then start the application with mvn clean compile jetty:run, go to the login page, log in, close the browser and, after a bit over 1 minute (when the session expires), open it again at http://localhost:8080. You should see that you were automatically logged in. So it works. But we still need one more thing: allowing the user to remove the cookies and turn off auto-login.

Step 4: As a User I want to be able to log out and clear my cookies

Link to commit with this step. In the last step we have to allow the user to clear his data and disable "Remember Me" for his account. This is achieved by clearing both cookies when the user explicitly clicks the Logout link.

```java
Link<Void> logoutLink = new Link<Void>("logout") {
    @Override
    public void onClick() {
        CookieService cookieService = WicketApplication.get().getCookieService();
        cookieService.removeCookieIfPresent(getRequest(), getResponse(), SessionProvider.REMEMBER_ME_LOGIN_COOKIE);
        cookieService.removeCookieIfPresent(getRequest(), getResponse(), SessionProvider.REMEMBER_ME_PASSWORD_COOKIE);

        UserSession.get().setUser(null);
        UserSession.get().invalidate();
    }
};
logoutLink.setVisible(UserSession.get().userLoggedIn());
add(logoutLink);
```

Summary

So that's all. In this post we have implemented simple "Remember Me" functionality in a web application written using Apache Wicket, without using any external authentication libraries. Happy coding and don't forget to share!

Reference: Remember Me functionality in Apache Wicket from our JCG partner Tomasz Dziurko at the Code Hard Go Pro blog....

Building OpenJDK on Windows

While doing some experiments, I found it often useful to have the JDK source code available at hand, to make some changes, play with it, etc. So I decided to download and compile that beast. It took me quite some time, although my initial thought was that it should be as simple as running make :). As you can guess, I found that it's not a trivial task, and to simplify my life in the future it seemed useful to keep a record of what I did. Below are the steps I had to follow to make it happen. I assume the machine already has Visual Studio 2010 installed. I have a feeling that the Express version should work just fine, but I haven't tried.

- Install cygwin. Ensure that you have installed all the packages listed here; some of them are not installed by default. Just in case, here is a copy of that table, but it is recommended to verify against the master source:

| Binary Name | Category | Package | Description |
|---|---|---|---|
| ar.exe | Devel | binutils | The GNU assembler, linker and binary utilities |
| make.exe | Devel | make | The GNU version of the 'make' utility built for CYGWIN |
| m4.exe | Interpreters | m4 | GNU implementation of the traditional Unix macro processor |
| cpio.exe | Utils | cpio | A program to manage archives of files |
| gawk.exe | Utils | awk | Pattern-directed scanning and processing language |
| file.exe | Utils | file | Determines file type using 'magic' numbers |
| zip.exe | Archive | zip | Package and compress (archive) files |
| unzip.exe | Archive | unzip | Extract compressed files in a ZIP archive |
| free.exe | System | procps | Display amount of free and used memory in the system |

Do not forget to add cygwin's 'bin' folder to the PATH.
- Install Mercurial from here and add 'hg' to the PATH.
- Install the Microsoft Windows SDK for Windows 7 and .NET Framework 4.
- Install the DirectX SDK. The JDK requires v9.0, but I couldn't find it easily, so I decided not to bother and installed the latest one. Seems like it's working just fine.
- A bootstrap JDK is required for the build.
It just so happened that I used JDK 6, but I suppose any version from JDK 6 upwards will work with no problems.
- Download and install Ant. I used version 1.8.2. Add Ant to the PATH.
- Check out the sources. For a number of reasons this was the most complicated part. 'hg' is not particularly stable, so some things which are supposed to be done by scripts had to be done manually. To start, run this on the command line:

```shell
hg clone --verbose --pull http://hg.openjdk.java.net/jdk7u/jdk7u <some_folder>\openjdk7
```

This should download the root folder with some helper scripts. Then, in cygwin, go to the just-created 'openjdk7' folder and run 'get_source.sh'. 'get_source.sh' may fail or just hang (and that's exactly what happened to me). If it does, you may try to use the '--pull' flag (pull protocol for metadata). I'm not absolutely sure why, but it helped me. Unfortunately, the scripts are not written in a very friendly manner and there is no way to pass 'hg' arguments to the source-retrieval script, so you need to go to 'make\scripts\hgforest.sh' and add '--pull' to every invocation of 'hg clone'. And if it still fails even after adding '--pull', well… just give up and run these commands manually:

```shell
hg clone --verbose --pull http://hg.openjdk.java.net/jdk7u/jdk7u/corba corba
hg clone --verbose --pull http://hg.openjdk.java.net/jdk7u/jdk7u/hotspot hotspot
hg clone --verbose --pull http://hg.openjdk.java.net/jdk7u/jdk7u/jaxp jaxp
hg clone --verbose --pull http://hg.openjdk.java.net/jdk7u/jdk7u/jaxws jaxws
hg clone --verbose --pull http://hg.openjdk.java.net/jdk7u/jdk7u/jdk jdk
hg clone --verbose --pull http://hg.openjdk.java.net/jdk7u/jdk7u/langtools langtools
```

Hopefully now you have the sources and can continue :)
- The build requires some external binaries and a version of 'make.exe' that works under Windows. The 'make' which comes with cygwin doesn't really work, because it has problems with drive letters in path names. Next, we need to compile a couple of things. One is a fixed version of 'make.exe'.
The other is the FreeType library, which is only available as source. If you are not interested in compiling all that stuff and just want to compile the JDK with less hassle, I would recommend downloading the binaries from here (that's my Drive). Unpack 'make.exe' into 'openjdk7/bin'. Note that the 'make.exe' from the package is quite old and requires cygintl-3.dll, which is not provided with the current cygwin. To fix it, just copy cygintl-8.dll -> cygintl-3.dll. The FreeType lib and dll have to be put in the folder referenced by the 'ALT_FREETYPE_LIB_PATH' variable (see Step 13). Also, some FreeType headers are still required and are located by make via the 'ALT_FREETYPE_HEADERS_PATH' variable (see Step 13), which means you will also need to download the source code. If you are not looking for a simple solution and want to compile these binaries yourself, then follow these instructions:

- Download make 3.82 from here and unpack it. Find 'config.h.W32' and uncomment the line with the 'HAVE_CYGWIN_SHELL' definition. Open the make_msvc_net2003.sln solution in Visual Studio, select the 'Release' configuration and make a build. In the 'Release' folder you will get 'make_msvc.net2003.exe'; rename it to 'make.exe'.
- Now compile FreeType. Download the source of FreeType v2.4.7 from here, unpack it somewhere and open 'builds\win32\vc2010\freetype.sln' in Visual Studio. Go to the project properties (right-click on the project in the project tree), and in 'Configuration Properties/General/Configuration type' select 'Dynamic Library (.dll)' and rename the output to 'freetype'. Update ftoption.h, adding the following two lines:

```c
#define FT_EXPORT(x) __declspec(dllexport) x
#define FT_BASE(x)   __declspec(dllexport) x
```

Make a build and you will get the dll & lib in 'objs\win32\vc2010'. Do not forget to assign appropriate values to the 'ALT_FREETYPE_LIB_PATH' and 'ALT_FREETYPE_HEADERS_PATH' variables (see Step 13).

I had some problems with javadoc generation, which was failing with OutOfMemory. In order to fix it, I had to change 'openjdk7\jdk\make\docs\Makefile'.
This code:

ifeq ($(ARCH_DATA_MODEL),64)
  MAX_VM_MEMORY = 1024
else ifeq ($(ARCH),universal)
  MAX_VM_MEMORY = 1024
else
  MAX_VM_MEMORY = 512
endif

has to be replaced with this:

ifeq ($(ARCH_DATA_MODEL),64)
  MAX_VM_MEMORY = 1024
else ifeq ($(ARCH),universal)
  MAX_VM_MEMORY = 1024
else
  MAX_VM_MEMORY = 1024
endif

Copy 'msvcr100.dll' to drops:

cp /cygdrive/c/Program\ Files\ \(x86\)/Microsoft\ Visual\ Studio\ 10.0/Common7/Packages/Debugger/X64/msvcr100.dll ./drops/

Ensure that cygwin's 'find.exe' is in the PATH before the Windows one. The easiest way to do so is to copy it into 'openjdk7/bin', which is set at the beginning of the current PATH. Create a batch file similar to the following one. Do not forget to update the paths appropriately:

ALT_BOOTDIR=C:/Stuff/java_libs/jdk1.6.0_25
ANT_HOME=C:/Stuff/java_libs/apache-ant-1.8.2
JAVA_HOME=
CLASSPATH=
PATH=C:/Stuff/openjdk7/bin;%PATH%
ALLOW_DOWNLOADS=true
ALT_MSVCRNN_DLL_PATH=C:/Stuff/java_libs/openjdk7/drops

C:\WINDOWS\system32\cmd.exe /E:ON /V:ON /K 'C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\SetEnv.cmd' /Release /xp /x86

Run the batch file. Now you have a fully configured environment that is ready for the build. Run 'bash' and from the shell execute 'make':

make ARCH_DATA_MODEL=32 ALT_OUTPUTDIR=C:/Users/Stas/Stuff/java_libs/openjdk7/output_32 ALT_FREETYPE_LIB_PATH=C:/Users/Stas/Stuff/java_libs/openjdk7/freetype-2.4.7/objs/win32/vc2010 ALT_FREETYPE_HEADERS_PATH=C:/Users/Stas/Stuff/java_libs/openjdk7/freetype-2.4.7/include ALT_BOOTDIR=C:/Users/Stas/Stuff/java_libs/jdk1.6.0_25 ALT_DROPS_DIR=C:/Users/Stas/Stuff/java_libs/openjdk7/drops HOTSPOT_BUILD_JOBS=4 PARALLEL_COMPILE_JOBS=4 2>&1 | tee C:/Stuff/java_libs/openjdk7/output_32.log

This will start the build of the 32-bit JDK.
Have a coffee, tea, or whatever you prefer, and after about an hour or so you should see something like this:

#-- Build times ----------
Target all_product_build
Start 2012-09-01 23:08:55
End   2012-09-01 23:55:48
00:02:35 corba
00:06:46 hotspot
00:00:30 jaxp
00:00:51 jaxws
00:35:30 jdk
00:00:37 langtools
00:46:53 TOTAL
-------------------------

Reference: Building OpenJDK on Windows from our JCG partner Stanislav Kobylansky at Stas's blog....

Java Annotations – Retention

Consider a Java annotation:

public @interface AnAnnotation {
}

A class with this annotation applied to it:

@AnAnnotation
class AnAnnotatedClass {
}

And a test which checks if this annotation is present on the class:

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;

import java.lang.annotation.Annotation;

import org.junit.Test;

public class AnAnnotationTest {
    @Test
    public void testAnAnnotation() throws Exception {
        AnAnnotatedClass anAnnotatedClass = new AnAnnotatedClass();
        Annotation[] annotationsOnClass = anAnnotatedClass.getClass().getAnnotations();
        assertThat(annotationsOnClass.length, is(1));
    }
}

Sounds reasonable, right? One would expect the test above to pass, since the class does have an AnAnnotation on it. But this test fails, and the reason is a missing meta-annotation (@Retention) on the annotation, which indicates how long the annotation is to be retained. If the annotation above is changed as follows, the test works as expected:

@Retention(RetentionPolicy.RUNTIME)
public @interface AnAnnotation {
}

So what does @Retention do? To quote from the Javadoc: "Indicates how long annotations with the annotated type are to be retained. If no Retention annotation is present on an annotation type declaration, the retention policy defaults to RetentionPolicy.CLASS." There are three different retention policies:

1. SOURCE – annotations are removed by the compiler.
2. CLASS – annotations are present in the bytecode, but not at runtime, and hence are not available when trying to reflectively determine whether a class has the annotation.
3. RUNTIME – annotations are retained in the bytecode and available at runtime, and hence can be reflectively found on a class.

This is why, once the annotation definition was changed to include @Retention(RetentionPolicy.RUNTIME), the test runs through. Something basic, but easy to miss.
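The CLASS-vs-RUNTIME difference above can be seen with plain reflection, no JUnit or Hamcrest required. Here is a small self-contained sketch (the annotation and class names are invented for illustration):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Retained in the class file AND visible to reflection at runtime.
@Retention(RetentionPolicy.RUNTIME)
@interface RuntimeVisible {}

// No @Retention: defaults to RetentionPolicy.CLASS — present in the
// bytecode, but invisible to reflection.
@interface ClassOnly {}

@RuntimeVisible
@ClassOnly
class Annotated {}

public class RetentionDemo {
    public static void main(String[] args) {
        // Only the RUNTIME-retained annotation survives to reflection.
        System.out.println(Annotated.class.isAnnotationPresent(RuntimeVisible.class)); // true
        System.out.println(Annotated.class.isAnnotationPresent(ClassOnly.class));      // false
        System.out.println(Annotated.class.getAnnotations().length);                   // 1
    }
}
```

Even though both annotations are applied to the class, getAnnotations() reports only one, which is exactly why the original test failed.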
Reference: Java Annotations – Retention from our JCG partner Biju Kunjummen at the all and sundry blog....

Can you get by without estimating? Should you try?

Estimating remains one of the hardest problems in software development. So hard, in fact, that more people lately are advocating that we shouldn't bother estimating at all. David Anderson, the man behind Kanban, says that we should stop estimating, and that estimates are a waste of time. In his case study about introducing Kanban ideas at Microsoft, one of the first steps taken to improve a team's productivity was to get them to stop estimating and start focusing instead on prioritizing work and getting the important work done. Then you have experts like Ron Jeffries saying things like: "I believe that most estimation is waste and that it is more common to use estimation as a replacement for proper steering, and to use it as a whip on the developers, than it is to use it for its only valid purpose in Release Planning, which is more like 'decide whether to do this project' than 'decide just how long this thing we just thought of is going to take, according to people who don't as yet understand it or know how they'll do it'" and "Estimation is clearly 'waste'. It's not software... If estimation IS doing you some good, maybe you should think about it as a kind of waste, and try to get rid of it." And, from others on the "if you do bother estimating, there's no point in putting a lot of effort into it" theme: "Spending effort beyond some minutes to make an estimate 'less wrong' is wasted time. Spending effort calculating the delta between estimates and actuals is wasted time. Spending effort training, working and berating people to get 'less wrong' estimates is wasted time and damaging to team performance." In "Software estimation considered harmful?" Peter Seibel talks about a friend running a startup, who found that it was more important to keep people focused and motivated on delivering software as quickly as possible. He goes on to say: "If the goal is simply to develop as much software as we can per unit time, estimates (and thus targets) may be a bad idea."
He bases this on a 1985 study cited in Peopleware, which showed that programmers were more productive when working against their own estimates than against estimates from somebody else, but that people were most productive on projects where no estimates were done at all. Seibel then admits that maybe "estimates are needed to coordinate work with others" – so he looks at estimating as a "tool for communication". But from this point of view, estimates are an expensive and inefficient way to communicate information that is of low quality – because of the cone of uncertainty, all estimates contain variability and error anyway.

What's behind all of this?

Most of this thinking seems to come out of the current fashion of applying Lean to everything, treating anything that you do as potential waste and eliminating waste wherever you find it. It runs something like: estimating takes time and slows you down; you can't estimate perfectly anyway, so why bother trying? A lot of this talk and these examples focus on startups and other small-team environments where predictability isn't as important as delivering – where it's more important to get something done than to know when everything will be done or how much it will cost.

Do you need to estimate or not?

I can accept that estimates aren't always important in a startup – once you've convinced somebody to fund your work, anyway. If you're firefighting, or in some other kind of emergency, there's not much point in stopping and estimating either – when it doesn't matter how much something costs, all you care about is getting whatever it is you have to do done as soon as possible. Estimating isn't always important in maintenance – the examples where Kanban is being followed without estimating are in maintenance teams. This is because most maintenance changes are small by definition – maintenance is usually considered to be fixing bugs and making changes that take less than 5 days to complete.
In order to really know how long a change is going to take, you need to review the code to know what and where to make changes. This can take up to half of the total time of making the change – and if you're already halfway there, you might as well finish the job rather than stopping and estimating the rest of the work. Most of the time, a rule of thumb or placeholder is a good enough estimate. In my job, we have an experienced development team that has been working on the same system for several years. Almost all of the people were involved in originally designing and coding the system, and they all know it inside out. The development managers triage work as it comes in. They have a good enough feel for the system to recognize when something looks big or scary, when we need to get people involved upfront to qualify what needs to get done, or work up a design or a proof of concept before going further. Most of the time, developers can look at what's in front of them and know what will fit in the time box and what won't. That's because they know the system and the domain, and they usually understand what needs to be done right away – and if they don't understand it, they know that right away too. The same goes for the testers – most of the time they have a good idea of how much work testing a change or fix will take, and whether they can take it on. Sure, sometimes people will make mistakes and can't get done what they thought they could, and we have to delay something or back it out. But spending a little more time on analysis and estimating upfront probably wouldn't have changed this. It's only when they get deep into a problem, when they've opened the patient up and there's blood everywhere, that they realize the problem is a lot worse than they expected. We're not getting away without estimates.
What we’re doing is taking advantage of the team’s experience and knowledge to make decisions quickly and efficiently, without unnecessary formality. This doesn’t scale of course. It doesn’t work for large projects and programs with lots of inter-dependencies and interfaces, where a lot of people need to know when certain things will be ready. It doesn’t work for large teams where people don’t know the system, the platform, the domain or each other well enough to make good quick decisions. And it’s not good enough when something absolutely must be done by a drop dead date – hard industry deadlines and compliance mandates. In all these cases, you have to spend the time upfront to understand and estimate what needs to get done, and probably re-estimate again later as you understand the problem better. Sometimes you can get along without estimates. But don’t bet on it. Reference: Can you get by without estimating? Should you try? from our JCG partner Jim Bird at the Building Real Software blog....

BAM, SOA & Big Data

Leveraging Big Data has become a commodity for most IT departments. It's like the mobile phone: you can't remember the times when you couldn't just call someone from your mobile, no matter where you were in the world, can you? Similarly, IT folks can't remember the days when files were too big to summarize, or grep, or even just store. Set up a Hadoop cluster and everything can be stored, analyzed and made sense of. But then I tried to ask the question: what if the data is not stored in a file? What if it was all flying around in my system?

Shown above is a setup that is not an uncommon deployment for a production SOA system. Let's summarize briefly what each server does:

- An ESB cluster fronts all the traffic and does some content based routing (CBR).
- Internal and external app server clusters host apps that serve different audiences.
- A Data Services Server cluster exposes database operations as a service.
- A BPS cluster coordinates a bunch of processes between the ESB, one app server cluster and the DSS cluster.

Hard to digest? Fear not. It's a complicated system that would serve a lot of complex requirements while enhancing re-use, interoperability and all the other good things SOA brings. Now, in this kind of system, whether it's SOA enabled or not, there lies a tremendous amount of data. And no, it is not stored as files; it is transferred between your servers and systems. Tons and tons of valuable data are going through your system every day. What if you could excavate this treasure trove of data and make use of all the hidden gems to derive business intelligence? The answer is Business Activity Monitoring (BAM): the process of aggregating, analyzing and presenting data. SOA and BAM were always a love story. As system functions were exposed as services, monitoring those services meant you were able to monitor the whole system.
Most of the time, if the system architects were smart, they used open standards, which made plugging in and monitoring systems even easier. But even with BAM, it was impossible to capture every message and every request that passed through the server; the data growth alone would be tremendous for a fairly active deployment. So here we have a Big Data problem, but not a typical one: a Big Data problem that concerns live data. To fully monitor all the data that passes through your system, you need a BAM solution that is Big Data ready. In other words, to make full sense of the data that passes through modern systems and derive intelligence from it, we need a Business Activity Monitor that is Big Data ready. Now a system architect has to worry about BAM, SOA and Big Data together, as they are essentially intertwined. A solution that delivers anything less is well short of visionary. Don't forget to share! Reference: BAM, SOA & Big Data from our JCG partner Mackie Mathew at the dev_religion blog....

Maven Cargo plugin for Integration Testing

A very common need in the lifecycle of a project is setting up integration testing. Luckily, Maven has built-in support for this exact scenario, with the following phases of the default build lifecycle (from the Maven documentation):

- pre-integration-test: Perform actions required before integration tests are executed. This may involve things such as setting up the required environment.
- integration-test: Process and deploy the package if necessary into an environment where integration tests can be run.
- post-integration-test: Perform actions required after integration tests have been executed. This may include cleaning up the environment.

First, the maven-surefire-plugin is configured so that integration tests are excluded from the standard build lifecycle:

<plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-surefire-plugin</artifactId>
   <version>2.10</version>
   <configuration>
      <excludes>
         <exclude>**/*IntegrationTest.java</exclude>
      </excludes>
   </configuration>
</plugin>

Exclusions are done via ant-style path expressions, so all integration tests must follow this pattern and end with "IntegrationTest.java". Next, the cargo-maven2-plugin is used, as Cargo comes with top-notch out-of-the-box support for embedded web servers. Of course, if the server environment requires specific configuration, Cargo also knows how to construct the server out of an archived package, as well as deploy to an external server.

<plugin>
   <groupId>org.codehaus.cargo</groupId>
   <artifactId>cargo-maven2-plugin</artifactId>
   <version>1.1.3</version>
   <configuration>
      <wait>true</wait>
      <container>
         <containerId>jetty7x</containerId>
         <type>embedded</type>
      </container>
      <configuration>
         <properties>
            <cargo.servlet.port>8080</cargo.servlet.port>
         </properties>
      </configuration>
   </configuration>
</plugin>

An embedded Jetty 7 web server is defined, listening on port 8080.
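The ant-style pattern Surefire uses can be sanity-checked with the JDK's glob PathMatcher, whose `**` semantics are close to Surefire's for this case. A small illustrative sketch (the file names here are invented):

```java
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.PathMatcher;

public class SurefirePatternCheck {
    public static void main(String[] args) {
        // Glob equivalent of Surefire's ant-style **/*IntegrationTest.java:
        // any file, in any directory, whose name ends in IntegrationTest.java.
        PathMatcher matcher = FileSystems.getDefault()
                .getPathMatcher("glob:**/*IntegrationTest.java");

        System.out.println(matcher.matches(
                Path.of("src/test/java/org/app/FooIntegrationTest.java"))); // true
        System.out.println(matcher.matches(
                Path.of("src/test/java/org/app/FooTest.java")));            // false
    }
}
```

A quick check like this is handy when tests mysteriously stop running: a class named FooIntegrationTests (plural) would silently fall outside the pattern.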
Notice the wait flag being set to true – this is because for newer versions of Cargo (1.1.0 upwards) the default value of the flag has changed from true to false, due to this bug. We want to be able to start the project by simply running mvn cargo:start, especially during the development phase, so the flag should be active. However, when running the integration tests we want the server to start, allow the tests to run and then stop, which is why the flag will be overridden later on. In order for the package Maven phase to generate a deployable war file, the packaging of the project must be: <packaging>war</packaging>. Next, a new integration Maven profile is created to enable running the integration tests only when this profile is active, and not as part of the standard build lifecycle.

<profiles>
   <profile>
      <id>integration</id>
      <build>
         <plugins>
            ...
         </plugins>
      </build>
   </profile>
</profiles>

It is this profile that will contain all the remaining configuration. Now, the Jetty server is configured to start in the pre-integration-test phase and stop in the post-integration-test phase:

<plugin>
   <groupId>org.codehaus.cargo</groupId>
   <artifactId>cargo-maven2-plugin</artifactId>
   <configuration>
      <wait>false</wait>
   </configuration>
   <executions>
      <execution>
         <id>start-server</id>
         <phase>pre-integration-test</phase>
         <goals>
            <goal>start</goal>
         </goals>
      </execution>
      <execution>
         <id>stop-server</id>
         <phase>post-integration-test</phase>
         <goals>
            <goal>stop</goal>
         </goals>
      </execution>
   </executions>
</plugin>

This ensures the cargo:start and cargo:stop goals will execute before and after the integration-test phase. Note that because there are two individual execution definitions, the id element must be present (and different) in both, so that Maven can accept the configuration.
Next, the maven-surefire-plugin configuration needs to be overridden inside the integration profile, so that the integration tests, which were excluded in the default lifecycle, are now included and run:

<plugins>
   <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <executions>
         <execution>
            <phase>integration-test</phase>
            <goals>
               <goal>test</goal>
            </goals>
            <configuration>
               <excludes>
                  <exclude>none</exclude>
               </excludes>
               <includes>
                  <include>**/*IntegrationTest.java</include>
               </includes>
            </configuration>
         </execution>
      </executions>
   </plugin>
</plugins>

There are a few things worth noting:

1. The test goal of the maven-surefire-plugin is executed in the integration-test phase; at this point Jetty is already started with the project deployed, so the integration tests should run with no problems.
2. The integration tests are now included in the execution. In order to achieve this, the exclusions are also overridden – this is because of the way Maven handles overriding plugin configurations inside profiles. The base configuration is not completely overridden, but rather augmented with new configuration elements inside the profile. Because of this, the original <excludes> configuration, which excluded the integration tests in the first place, is still present in the profile and needs to be overridden, or it would conflict with the <includes> configuration and the tests would still not run.
3. Note that, since there is only a single <execution> element, there is no need for an id to be defined.

Now the entire process can run:

mvn clean install -Pintegration

Conclusion

The step-by-step configuration of Maven covers the entire process of setting up integration testing as part of the project lifecycle. Usually this is set up to run in a Continuous Integration environment, preferably after each commit.
If the CI server already has a server running and consuming ports, then the cargo configuration will have to deal with that scenario, which I will cover in a future post. Reference: How to set up Integration Testing with the Maven Cargo plugin from our JCG partner Eugen Paraschiv at the baeldung blog....
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.