What's New Here?


Thymeleaf integration with Spring (Part 1)

1. Introduction

This article focuses on how Thymeleaf can be integrated with the Spring framework. The integration lets our MVC web application take advantage of the Thymeleaf HTML5 template engine without losing any of the Spring features. The data layer uses Spring Data to interact with a MongoDB database. The example consists of a hotel's single-page web application from which we can send two different requests:

- Insert a new guest: a synchronous request that shows how Thymeleaf integrates with Spring's form-backing beans.
- List guests: an asynchronous request that shows how to handle fragment rendering with AJAX.

This tutorial expects you to know the basics of Thymeleaf. If not, you should first read this article. This example is based on Thymeleaf 2.1 and Spring 4. The source code can be found at GitHub.

2. Configuration

This tutorial takes the JavaConfig approach to configure the required beans, which means XML configuration files are no longer necessary.

web.xml

Since we want to use JavaConfig, we need to specify AnnotationConfigWebApplicationContext as the class that will configure the Spring container. If we don't specify it, XmlWebApplicationContext will be used by default. When defining where the configuration files are located, we can specify classes or packages. Here, I'm indicating my configuration class.
```xml
<!-- Bootstrap the root context -->
<listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>

<!-- Configure ContextLoaderListener to use AnnotationConfigWebApplicationContext -->
<context-param>
    <param-name>contextClass</param-name>
    <param-value>
        org.springframework.web.context.support.AnnotationConfigWebApplicationContext
    </param-value>
</context-param>

<!-- @Configuration classes or package -->
<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>xpadro.thymeleaf.configuration.WebAppConfiguration</param-value>
</context-param>

<!-- Spring servlet -->
<servlet>
    <servlet-name>springServlet</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    <init-param>
        <param-name>contextClass</param-name>
        <param-value>org.springframework.web.context.support.AnnotationConfigWebApplicationContext</param-value>
    </init-param>
</servlet>
<servlet-mapping>
    <servlet-name>springServlet</servlet-name>
    <url-pattern>/spring/*</url-pattern>
</servlet-mapping>
```

Spring configuration

My configuration is split into two classes: Thymeleaf-Spring integration (the WebAppConfiguration class) and MongoDB configuration (the MongoDBConfiguration class).
WebAppConfiguration.java:

```java
@EnableWebMvc
@Configuration
@ComponentScan("xpadro.thymeleaf")
@Import(MongoDBConfiguration.class)
public class WebAppConfiguration extends WebMvcConfigurerAdapter {

    @Bean
    @Description("Thymeleaf template resolver serving HTML 5")
    public ServletContextTemplateResolver templateResolver() {
        ServletContextTemplateResolver templateResolver = new ServletContextTemplateResolver();
        templateResolver.setPrefix("/WEB-INF/html/");
        templateResolver.setSuffix(".html");
        templateResolver.setTemplateMode("HTML5");
        return templateResolver;
    }

    @Bean
    @Description("Thymeleaf template engine with Spring integration")
    public SpringTemplateEngine templateEngine() {
        SpringTemplateEngine templateEngine = new SpringTemplateEngine();
        templateEngine.setTemplateResolver(templateResolver());
        return templateEngine;
    }

    @Bean
    @Description("Thymeleaf view resolver")
    public ThymeleafViewResolver viewResolver() {
        ThymeleafViewResolver viewResolver = new ThymeleafViewResolver();
        viewResolver.setTemplateEngine(templateEngine());
        return viewResolver;
    }

    @Bean
    @Description("Spring message resolver")
    public ResourceBundleMessageSource messageSource() {
        ResourceBundleMessageSource messageSource = new ResourceBundleMessageSource();
        messageSource.setBasename("i18n/messages");
        return messageSource;
    }

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry.addResourceHandler("/resources/**").addResourceLocations("/WEB-INF/resources/");
    }
}
```

Things to highlight in the code above:

- @EnableWebMvc: enables Spring MVC annotations like @RequestMapping. This is equivalent to the XML namespace <mvc:annotation-driven />.
- @ComponentScan("xpadro.thymeleaf"): activates component scanning in the xpadro.thymeleaf package and its subpackages. Classes annotated with @Component and related annotations will be registered as beans.
We are registering three beans needed to configure Thymeleaf and integrate it with the Spring framework:

- Template resolver: resolves template names and delegates them to a servlet context resource resolver.
- Template engine: integrates with the Spring framework, establishing the Spring-specific dialect as the default dialect.
- View resolver: the Thymeleaf implementation of the Spring MVC view resolver interface, used to resolve Thymeleaf views.

MongoDBConfiguration.java:

```java
@Configuration
@EnableMongoRepositories("xpadro.thymeleaf.repository")
public class MongoDBConfiguration extends AbstractMongoConfiguration {

    @Override
    protected String getDatabaseName() {
        return "hotel-db";
    }

    @Override
    public Mongo mongo() throws Exception {
        return new Mongo();
    }
}
```

This class extends AbstractMongoConfiguration, which defines the mongoFactory and mongoTemplate beans. @EnableMongoRepositories will scan the specified package for interfaces extending MongoRepository and create a bean for each one. We will see this later, in the data access layer section.

3. Thymeleaf - Spring MVC integration

HotelController

The controller is responsible for accessing the service layer, constructing the view model from the result and returning a view. With the configuration set in the previous section, MVC controllers are now able to return a view id that will be resolved as a Thymeleaf view. Below is a fragment of the controller that handles the initial request (http://localhost:8080/th-spring-integration/spring/home):

```java
@Controller
public class HotelController {

    @Autowired
    private HotelService hotelService;

    @ModelAttribute("guest")
    public Guest prepareGuestModel() {
        return new Guest();
    }

    @ModelAttribute("hotelData")
    public HotelData prepareHotelDataModel() {
        return hotelService.getHotelData();
    }

    @RequestMapping(value = "/home", method = RequestMethod.GET)
    public String showHome(Model model) {
        prepareHotelDataModel();
        prepareGuestModel();
        return "home";
    }

    ...
}
```

A typical MVC controller that returns a "home" view id. The Thymeleaf template resolver will look for a template named "home.html" located in the /WEB-INF/html/ folder, as indicated in the configuration. Additionally, a view attribute named "hotelData" will be exposed to the Thymeleaf view, containing the hotel information that needs to be displayed on the initial view. This fragment of the home view shows how it accesses some of the properties of the view attribute by using the Spring Expression Language (Spring EL):

```html
<span th:text="${hotelData.name}">Hotel name</span><br />
<span th:text="${hotelData.address}">Hotel address</span><br />
```

Another nice feature is that Thymeleaf can resolve Spring-managed message properties, which have been configured through the MessageSource interface:

```html
<h3 th:text="#{hotel.information}">Hotel Information</h3>
```

Error handling

Trying to add a new user will raise an exception if a user with the same id already exists. The exception will be handled and the home view will be rendered with an error message. Since we only have one controller, there's no need to use @ControllerAdvice. We will instead use an @ExceptionHandler annotated method.
You will notice that we return an internationalized message key as the error message:

```java
@ExceptionHandler({GuestFoundException.class})
public ModelAndView handleDatabaseError(GuestFoundException e) {
    ModelAndView modelAndView = new ModelAndView();
    modelAndView.setViewName("home");
    modelAndView.addObject("errorMessage", "error.user.exist");
    modelAndView.addObject("guest", prepareGuestModel());
    modelAndView.addObject("hotelData", prepareHotelDataModel());
    return modelAndView;
}
```

Thymeleaf will first resolve the view attribute with ${} and then resolve the resulting message key with #{}:

```html
<span class="messageContainer" th:unless="${#strings.isEmpty(errorMessage)}" th:text="#{${errorMessage}}"></span>
```

The th:unless Thymeleaf attribute will only render the span element if an error message has been returned.

4. The service layer

The service layer accesses the data access layer and adds some business logic:

```java
@Service("hotelServiceImpl")
public class HotelServiceImpl implements HotelService {

    @Autowired
    HotelRepository hotelRepository;

    @Override
    public List<Guest> getGuestsList() {
        return hotelRepository.findAll();
    }

    @Override
    public List<Guest> getGuestsList(String surname) {
        return hotelRepository.findGuestsBySurname(surname);
    }

    @Override
    public void insertNewGuest(Guest newGuest) {
        if (hotelRepository.exists(newGuest.getId())) {
            throw new GuestFoundException();
        }
        hotelRepository.save(newGuest);
    }
}
```

5. The data access layer

The HotelRepository extends the Spring Data MongoRepository interface:

```java
public interface HotelRepository extends MongoRepository<Guest, Long> {

    @Query("{ 'surname' : ?0 }")
    List<Guest> findGuestsBySurname(String surname);
}
```

This is just an interface; we won't implement it. If you remember, in the configuration class we added the following annotation:

```java
@EnableMongoRepositories("xpadro.thymeleaf.repository")
```

Since this is the package where the repository is located, Spring will create a bean for it and inject a mongoTemplate into it.
Extending this interface provides us with generic CRUD operations. If you need additional operations, you can add them with the @Query annotation (see the code above).

6. Conclusion

We have configured Thymeleaf to resolve views in a Spring-managed web application. This allows the view to access the Spring Expression Language and message resolution. The next part of this tutorial will show how forms are linked to Spring form-backing beans and how we can reload fragments by sending an AJAX request.

Reference: Thymeleaf integration with Spring (Part 1) from our JCG partner Xavier Padro at the Xavier Padró's Blog.
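The duplicate-guest guard in the service layer can be exercised without Spring or MongoDB. Below is a minimal sketch, with a map-backed stand-in for the repository; all names are hypothetical and merely mirror the insertNewGuest logic shown above:

```java
import java.util.HashMap;
import java.util.Map;

public class GuestGuardSketch {

    public static class GuestFoundException extends RuntimeException {}

    // Map-backed stand-in for the Spring Data HotelRepository.
    public static class InMemoryRepository {
        private final Map<Long, String> guests = new HashMap<>();
        public boolean exists(Long id) { return guests.containsKey(id); }
        public void save(Long id, String name) { guests.put(id, name); }
    }

    // Mirrors insertNewGuest: reject ids that already exist, save otherwise.
    public static void insertNewGuest(InMemoryRepository repo, Long id, String name) {
        if (repo.exists(id)) {
            throw new GuestFoundException();
        }
        repo.save(id, name);
    }

    public static void main(String[] args) {
        InMemoryRepository repo = new InMemoryRepository();
        insertNewGuest(repo, 1L, "Xavier");
        try {
            insertNewGuest(repo, 1L, "Duplicate");
        } catch (GuestFoundException e) {
            System.out.println("duplicate rejected");
        }
    }
}
```

Keeping the guard in a plain method like this is what lets the @ExceptionHandler path above be triggered deterministically in a test.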

Programming Language Job Trends Part 2 – February 2014

In part 1 of the programming language job trends, we looked at Java, C++, C#, Objective-C, and Visual Basic. In today's installment, we review the trends for PHP, Python, JavaScript, Ruby, and Perl. Watch for part 3 in the next few days, where we will look at Erlang, Groovy, Scala, Lisp, and Clojure.

First, let's look at the trends from Indeed.com:

As you can see, there has been a general downward trend for all of these languages in 2013. This is fairly consistent with the trends seen in part 1 of the language trends. The really surprising part is the 50% decrease in JavaScript demand in the past two years, a much higher rate of decline than the others. Perl is in steady decline, now running for 4 years. PHP's decline is much more recent, really only appearing in the last year. Python and Ruby, while showing a slight decline in the past 6 months, are the most stable. We can likely attribute this to their overall rise in popularity in both startup and enterprise software shops.

Now onto the short-term trends from SimplyHired.com:

Given that SimplyHired's data is about 6 months old, it is difficult to compare to Indeed. However, only JavaScript is showing a real decline during 2013, with a slight uptick during July 2013. PHP shows a huge jump during July, almost a 100% increase, which is abnormal for these trends. I am guessing we are dealing with bad data here, even if there is a decent increase. The other trends all show a small increase during July, which is at least somewhat reassuring.

Lastly, we look at the relative growth trends from Indeed.com. This compares percentage growth as opposed to percentage of all postings:

Given that the overall percentage of jobs could be a misleading metric, the relative growth shows a different perspective. Ruby demand has obviously been increasing at a rapid rate, although that growth seems to have slowed a bit in 2013. Python shows solid growth, hovering at 500% since the middle of 2010.
While you can't really see accurate trends for the others, you can review the graph without Ruby. Perl is actually showing a purely negative trend for about 18 months. JavaScript growth has slowed to about 50%, while PHP has dipped below 100%. Overall, the demand is similar to the first part of the job trends posts: a slight decline in demand everywhere. The growth trends for these languages are not great either, with the exception of Ruby. Given that Ruby and Python are still growing significantly, and based on the buzz on blogs, my guess is that these two languages are really starting to make inroads into the enterprise, which has typically been the home of Java and C#. Stay tuned for part 3, where we look at some younger languages, and one oldie that seems to never go away.

Reference: Programming Language Job Trends Part 2 – February 2014 from our JCG partner Rob Diana at the Regular Geek blog.

Ideas Aren’t Worthless

It's common knowledge that "ideas are worthless". An idea will bring you nowhere – you need implementation, focus, a good team, the right environment, luck, etc. And I won't argue with that – obviously, an idea doesn't bring you anywhere by itself. Just google for "ideas are worthless" and you can find dozens of convincing articles. Ideas change, ideas evolve, the initial idea doesn't matter. But that's the perspective of the businessman – a person whose interest is to make a profit (nothing wrong with that, of course). Emphasizing the worthlessness of ideas brings a whole slew of startups that are there for the sake of being a startup. We'll build something. We'll figure it out along the way. Then they master the skills of fundraising, they focus on Lean, MVP, time-to-market, exit strategies, etc. And that's all great. Except that it's not technology.

The perspective of a "hacker" is different. A hacker is thrilled by technology; he wants to be challenged. The ultimate point is not to have a startup, to get acquired, to make money. The ultimate point is to make something cool, something different, something technologically innovative. I can have a dozen ideas per day – things that may actually be turned into a startup. I know all of the above buzzwords, like Lean and MVP, so I could probably turn them into some sort of company. Do I do that? No. And not because I'm afraid of not having a stable income or of wasting a couple of thousand bucks. I don't do it because I'm not convinced they are cool, technological and challenging enough.

Probably, that's the difference between great startups and mediocre ones. Not between successful and failed ones, because even mediocre ones can be "successful" in terms of some revenue, some investment, being acquired. But between game-changers and the rest. Yes, your ideas may be worthless – if they are worthless to you. If your idea motivates you by itself, rather than by making you envision the success of the future company (i.e. "fall in love with the process, not with the end result"), then it's the most important thing. Not for "success", but for your experience. So I'd choose carefully how to invest my time. I may be more valuable to the world and to myself by doing my job for an existing company than by starting a new company for the sake of starting a company. Ideas are what's important to us.

Reference: Ideas Aren't Worthless from our JCG partner Bozhidar Bozhanov at the Bozho's tech blog.

How I broke our continuous deployment

This post is about a failure – more precisely, about how I managed to bring our release process to its knees. I can recommend reading it especially if you are planning to ruin your release train any time soon: following my footsteps is a darn good way to bring down your automated processes for weeks.

Let me start with a rough description of how we release new Plumbr versions to end users. For this we have created two environments – production and test – both on dedicated servers. Both environments have LiveRebel orchestrating the deployment of versions built by our continuous integration. The test environment is updated after each CI run; the production environment is updated on a daily basis.

On the 21st of January we stopped the nightly production updates, as one of the larger changes needed some more time in QA. The intention was to stop the updates for just a few days. At the same time the test environment was still being updated several times a day. As you might guess, those few days turned into ten days, so it was not until the 30th of January that we were able to re-enable the automatic updates in the production environment. Only to discover the next morning that the update had failed with the following error message during application initialization:

java.security.ProviderException: Could not initialize NSS

As we were using LiveRebel, the update was rolled back automatically and the production site kept using the version deployed 10 days earlier. So no major harm was caused, especially after I discovered that a manual push to production worked just fine. But the automated update failed again the following night. And the next one. So it was already the 5th of February when I finally found time to investigate the issue more thoroughly. Thirty minutes of googling revealed only one clue, in the form of a discussion thread about libnss3 configuration errors.
Making the proposed changes to nss.cfg indeed seemed to work, and the automated releases started rolling out. After the fix, three important questions arose:

- What broke this configuration? Nobody admitted altering the machine configuration during the ten days the build idled.
- Why was the test environment working just fine?
- Why did manual updates work while the LiveRebel-orchestrated releases kept failing?

Answers started to take shape when we investigated the logs. Apparently we had automated updates enabled for the production JVM, so on January 23rd the automated update had pulled both the fresh openjdk-7-jdk_7u51 patch and the libnss3 patch and applied the upgrade. We had found the answer to our first question.

The second answer surfaced when we compared environment configurations. Our test environment was running on 32-bit JVMs, as opposed to the 64-bit production machines. Why the 32-bit JDK upgrade did not break backwards compatibility with libnss is another question, but we had again found the difference which, when removed, exposed the problem in the test environment as well.

The answer to the third and last question became clear when we correlated the uptimes of the test and production LiveRebel instances. The test LiveRebel had been restarted and had picked up the new libnss3 configuration, which is why it ran without configuration issues.

Having spent more than a day figuring out all of the above, I can only conclude that the following has to be kept in mind when creating or maintaining your build:

- The environments in your release stream must be identical. No excuses allowed.
- Creating the environments must be fully automated, including the OS-level configuration. We were automating our builds only from the JVM level onwards, but the OSes had been configured manually.
- When something is failing, it ain't gonna fix itself. The sooner you find the root cause, the sooner you can switch back to productive work – as opposed to dealing with the consequences, as I was when rolling out manual updates.

Admittedly, these are obvious conclusions. But sometimes the obvious things have to be repeated.

Reference: How I broke our continuous deployment from our JCG partner Vladimir Šor at the Plumbr blog.
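The first lesson, identical environments, can even be checked mechanically. Here is a minimal sketch that diffs two environment descriptions; the property names and values are hypothetical, echoing the 32-bit/64-bit mismatch described above:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;
import java.util.Set;
import java.util.SortedSet;
import java.util.TreeSet;

public class EnvDiff {

    // Returns the keys whose values differ, or which exist on one side only.
    public static SortedSet<String> drift(Map<String, String> prod, Map<String, String> test) {
        SortedSet<String> diff = new TreeSet<>();
        Set<String> keys = new TreeSet<>(prod.keySet());
        keys.addAll(test.keySet());
        for (String k : keys) {
            if (!Objects.equals(prod.get(k), test.get(k))) {
                diff.add(k);
            }
        }
        return diff;
    }

    public static void main(String[] args) {
        Map<String, String> prod = new HashMap<>();
        prod.put("jvm.arch", "64");
        prod.put("libnss3", "3.14");
        Map<String, String> test = new HashMap<>();
        test.put("jvm.arch", "32");
        test.put("libnss3", "3.14");
        // prints: drifted keys: [jvm.arch]
        System.out.println("drifted keys: " + drift(prod, test));
    }
}
```

A check like this, fed from automated environment inventories and run in CI, would have flagged the JVM architecture difference long before the failed deployments did.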

Creating software for sysops – make sure you do not suck

Plumbr is all about detecting performance problems from within Java applications. Whether the application resides on a desktop machine under a developer's desk or is hidden in a production vault guarded by the Bastard Operator From Hell does not matter – we have designed our software to cover both ends of the spectrum. Or so we thought.

The past few months have made us doubt our wisdom. Something just didn't feel right, as there was suddenly more friction between the users and the product. So we finally took the time to check whether our assumptions corresponded to reality. The results were staggering: on occasion it seems as if someone had deliberately designed certain aspects of our service with "getting even with the operations" in mind. I will illustrate this with some examples, most of which should be applicable to any B2B software company.

Installation. Installation has to be easy. What can be easier than clicking on the downloaded JAR file and pressing NEXT a few times? Wrong, especially when your software is intended to run in the dark corners of the server room. Instead of Swing-based UIs you should start thinking in terms of rpm -ivh yourpackage.rpm and its close relatives like dpkg or yum. To make things worse, organizations use different tools to support their release processes. Before you can say "I've got this package management thing covered" your support channel will be overrun: requests to embed your installation into shell scripts, continuous integration tools, release management software or configuration management tools will be flocking in. And just when you think you have it covered, there will be the next Ansible just around the corner requiring your attention.

License server. Many B2B solutions are licensed in capacity-limited formats. Whether a particular piece of software is licensed by the number of users, by server capacity or by transaction volume does not matter that much.
Your enterprise-level clients wish to have transparency and control over their licensing deals. So you toss in your own custom-built licensing server, only to discover hate mail filling your inbox the day after releasing it. Apparently, installing 80 different license servers from 80 different vendors is not something ops people are too eager to deal with. So before designing your proprietary solution, take a look at common licensing formats and make sure your licensing solution can easily be integrated into corporate licensing solutions.

API support. Of course you need to provide an API to your service. What could be more appropriate than publishing your interfaces as MBeans? Well, if you bother to look up from your Java developer seat, you might again be surprised: JMX is about as well known and widely used as Microsoft's WebTV. But give operations an API they actually can and want to use, and you will discover them using your product in ways you did not even think about – publishing alerts created by your service to instant messaging solutions, or embedding the statistical information into custom-built company-wide dashboards. I bet you did not think about that. The key here is also in publishing raw data: apparently operations want the freedom to aggregate their data themselves.

Rolling out updates. Notifying users when a new version of your software is out and recommending an upgrade – this should be a no-brainer, right? Well, try to deploy your software to hundreds of people within a single organization and wait for the next update. Voilà, you have just summoned most of them to contact the helpdesk on the same day asking about these upgrades. Toss in frequent upgrades and you have created a nightmare for your customer.

Packaging. If you are born as a -javaagent, then it should be obvious: you want to live your life packaged as a JAR file and, if lucky, get promoted to an IDE plugin.
If you somehow scrolled over the installation section above, then let me repeat the key take-away: your solution must be embeddable into different tools and processes. You can and will have your own face, but do not be surprised at being bundled into other tools and solutions.

Logging. It shouldn't really matter, should it? log4j, slf4j, logback – pick one and be happy with your pick. Unless you are Ceki Gülcü, you could actually just roll a die and be done with it? Nope. Operations seem to have a sweet tooth for controlling what, when and how things are logged. Some of them wish to log everything just in case and toss it into Logstash for future pattern-detection analysis. Then there are the guys who wish to add tamper-proof certification to their logs. And then the security audit team steps in to make sure that you do not log any financial or otherwise sensitive information.

Why didn't we discover this sooner? The roots of the discoveries covered in this blog post are hidden deep inside the company genome. Plumbr's founders have a strong background in software development. We have all had our sleepless nights trying to pull yet another milestone together, so we kind of know the mindset, tools and problems software developers are dealing with. But so far, we have not had anyone with an operations background on board.

Another potential reason for the problem surfacing only now is the change we see in product usage patterns. During the past two quarters customers have discovered that they get more out of the product when they attach it to their production environments. This allows Plumbr to constantly monitor the evolution of their software and discover potential performance bottlenecks. Naturally, this means that our target audience started shifting more and more from developers to ops. We acknowledged the findings, but did not really verify them with the actual target audience.

Summary. For you, our reader – know thy users.
The way they work, the tools they use, the way they think about the problem and the situations in which they would actually use your solution. Techniques for this are not in the scope of this post, but if you are not actively engaging with your end users and do not understand their work process, day-to-day problems or tools: stop. The world is already full of crappy software; take a step back and make sure you are not on the path to creating yet another piece of it. For us – we have definitely learned a lot from you, dear friends from operations. Even though it might take some time, we are already rolling out improvement releases and will keep doing so. And I can promise that they are built keeping the lessons learned in mind.

Reference: Creating software for sysops – make sure you do not suck from our JCG partner Ivo Mägi at the Plumbr blog.
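Returning to the API point above: operations are far more likely to consume plain HTTP with raw data than JMX. Here is a minimal sketch using the JDK's built-in com.sun.net.httpserver; the endpoint path and the counter values are made up for illustration:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class RawStatsEndpoint {

    // Starts a plain HTTP endpoint serving raw counters as JSON.
    // Port 0 asks the OS for any free port.
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/stats", exchange -> {
            byte[] body = "{\"processed\": 42, \"failed\": 1}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = start(8081);
        System.out.println("raw stats at http://localhost:"
                + server.getAddress().getPort() + "/stats");
    }
}
```

Anything from curl in a cron job to a company dashboard can consume such an endpoint, which is exactly the "raw data, aggregate it yourself" freedom ops teams ask for.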

Manual testing sinful?

One of the asides I made in "Programmers without TDD will be unemployable", which caused a bit of outrage in the testing community, was my comment "Manual testing is a sin." While I have been unfair to many testers, and somewhat simplistic, I still stand by the statement. Let me expand on why I stand by it, and where I am wrong. It is all a question of context. Look at the full line the quote appeared in: "Unit testing will be overwhelmingly automated. Manual testing is a sin. Manual unit testing doubly so."

In the context of unit testing, as one who has undertaken formal and informal manual unit testing, I believe unit testing can and should be fully automated. Informal manual unit testing is too ad hoc, prone to error, time-consuming and difficult to replicate. Formal manual unit testing suffers all these problems and is hideously expensive. In the context of unit testing I stand by "Manual testing is a sin."

I will also go further. The vast majority of testing I see performed by professional testers – i.e. not unit testing – is manual, and the vast majority of this testing could be automated. If you automate the testing, the initial costs might go up – I say might because a good natural-language test script is itself expensive to write – but the cost of executing the test will fall. More importantly, the time it takes to run the test will fall dramatically. The combination of low cost and fast execution means the tests become a very effective feedback loop. Failure to seize this amazing opportunity is something I consider sinful. It may be expensive, but it is necessary. Even automating 20% of this testing would be massively beneficial.

What I do not consider sinful is exploratory testing, which almost by definition cannot be automated and therefore needs to be a manual process. Exploratory testing is about looking for the unusual, the unexpected, the thing you didn't think of before, the thing you only think of because you now see it.
Automated exploratory testing might even be a contradiction in terms; therefore manual exploratory testing is not sinful. But, and this is a pretty big but: if the initial quality is low (e.g. unit testing is absent or poor) and the testing that comes before exploratory testing (e.g. system, integration, acceptance testing) is ineffective, then exploratory testing will be effective at finding defects (because the earlier rounds didn't find them) but it will be ineffective as exploratory testing. The time and effort spent on "exploratory testing" is really being spent doing system testing. Real exploratory testing will not be possible when there are many defects, because the exploration will continually be interrupted.

Unfortunately, the subtlety of this logic is lost on most organizations, which just label everything that happens after writing actual production code "testing". It is made worse when testing is separated from the developers and, more importantly, from those who understand what the system should do (the business, the BAs, the product managers, the users, etc.). Worse still, the easy availability of cheap offshore testing capability means that rather than addressing the root causes of defects (low initial quality, ineffective system testing, misunderstanding of exploratory testing), the whole thing can be hidden away cheaply.

There are probably some other examples of testing which cannot be automated and are not sinful. But I repeatedly find individuals and organizations who are too ready to jump to "it cannot be automated" after very limited, or no, effort is spent on trying to automate it. Similarly, the fact that a professional tester cannot automate the testing of a piece of software does not mean it cannot be automated. Automated testing – at least the setup – involves programming skills which many testers simply do not have; indeed, it is wrong to assume they would have them.
Finally, there is a class of systems which, as far as I know, defy automated testing – or rather, defy automation at the moment. They defy automation because the creators of these systems have little or no interest in allowing automation. This set includes SAP, Microsoft SharePoint, most CRM systems such as Microsoft Dynamics, Murex (although I believe efforts are afoot here) and some other large packages. I find it incredible that companies like SAP, Oracle and Microsoft can sell systems which cannot be effectively tested. But actually this is part of the sales pitch: because these systems are marketed and sold as "no programming required", offering automated test tools would expose the lie. If you are involved with selecting one of these systems, ask: how will we test it? The inability to effectively test these systems is their Achilles heel. It is one of the reasons I don't like these systems, and it makes me wonder whether organizations are right to use them.

Reference: Manual testing sinful? from our JCG partner Allan Kelly at the Agile, Lean, Patterns blog.
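To ground the cost argument made earlier – that automated tests trade a one-off writing cost for near-free, fast execution – here is a minimal sketch of an automated check in plain Java, with a hypothetical PriceCalculator as the class under test (no test framework, to keep the sketch self-contained):

```java
public class PriceCalculatorTest {

    // Hypothetical class under test: flat 10% discount above 100 units.
    // Prices are kept in integer cents to keep the arithmetic exact.
    public static class PriceCalculator {
        public static long totalCents(int units, long unitPriceCents) {
            long total = (long) units * unitPriceCents;
            return units > 100 ? total * 9 / 10 : total;
        }
    }

    public static void main(String[] args) {
        // Each check replaces a manual test step and runs in milliseconds,
        // which is what turns the suite into a fast feedback loop.
        check(PriceCalculator.totalCents(10, 200) == 2000, "no discount below threshold");
        check(PriceCalculator.totalCents(200, 100) == 18000, "10% discount above threshold");
        System.out.println("all checks passed");
    }

    static void check(boolean condition, String name) {
        if (!condition) throw new AssertionError("failed: " + name);
    }
}
```

Written once, these checks cost effectively nothing to rerun on every change, which is the whole economic case for automating them instead of re-executing a manual script.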

Never Test Logging

Technical logging is usually not tested. As a commentator writes on Stack Overflow: "I practice TDD/BDD pretty religiously and I almost never test logging. With some exceptions logging is either a developer convenience or a usability factor, not part of the method's core specification." There is also a technical side to why developers are reluctant, as Jon writes on the same page: "It's a pain, either making the production code messy (due to injecting the logger) or the test smelly (replacing the static logger with a mock)."

After those two statements we have to stop and think for a while. (After all, thinking never hurts, does it?) When we are talking about logging, do we mean logging as a function or the tools that we use? Many times there is no difference: we use logging tools for logging. Absolutely logical. On the other hand, when somebody asks how to test logging, there is a good chance that s/he is using the logging tool for something other than logging. Using logging tools and logging functionality are sometimes not the same. When testing logging comes into the picture, you should smell code smell.

Testing Logging Functionality

The first question we have to answer is: what is logging as a functionality? What is it for? (And this time this is not about deforestation.) When you write statements like log.debug("accountIsDisabled() returned true");, is there any functional specification that you fulfill? I bet there is none. Technical logging is not a functional requirement. Logging is there to help developers and support people better understand the behavior of the program when something unexpected happens. It is not inherent to the core functionality of the code. The important part of the above sentences is "when something unexpected happens".
I hear the roar of junior and semi-senior developers: "We also log when something expected but exceptional occurs, like a dropped database connection." Well, my friend, let me tell you that you only think you log. You do not actually log. You alert. You presumably use some logging tool to perform alerting, and this is what makes you think you are logging. In reality, however, you are not. And this is very important. I do not say that you should not use a logging tool for anything other than logging. You can send alerts to a file, send an SMS, tweet, whatever, using a special log4j appender. No problem. However, make sure that this is the best choice among the available tools. If you think you are logging, if you are not aware that you are actually alerting, you prevent yourself from realizing that you are perhaps using a suboptimal tool for the purpose. When you send anything through your log tool's drain to a log file that describes the details of a well-expected behavior, then you should ask yourself: am I logging, or am I doing something else? (Note: something unexpected may happen outside of the program as well, in which case we also need logging. However, that is not technical logging. Typically it is legal audit logging. You should test such logging.) After we have defined what I really mean by logging, my next statement is the following: You should not test technical logging! The statement may be shocking the first time. Why did I not write "you need not test"? Simply because there is nothing in programming that you "may but need not do" if you are a professional. You and your team have a goal. It includes product, time, budget, quality and all other such things. You get there on a road paved with effort. You have to minimize this effort. Not for your own good, or because you are lazy, but for the shareholders' sake. Effort is cost. 
They provide the budget not for your enjoyment, but for achieving a business goal. That is the way businesses work, and professional programmers operate in business; that is one of the mandatory requirements of being professional. If you need not do something to achieve the goals, then you should not. Otherwise you waste money that is not yours. If you still feel that there is a real business need to test logging, then start to sniff. This is code smell again: you probably are not logging, only using logging tools. Testing Logging Tools Functionality When you use a logging tool for something other than logging, then you may well want to do some testing. Assume you decided, after careful and professional assessment of all the possible technical solutions, that you will use a logging framework for something which is not logging. Alerting, for example. In that case you want to test that your code uses the logging tool appropriately. Then comes the issue with the private static final loggers that you cannot replace, even using reflection. (You may succeed if you try, but that is against the JLS and JVM standards and may not always work.) But again: this is not logging; this is using the logging tool for some other function, say alerting. Alerting functionality should be coded to be testable. In that case, put aside the static loggers and focus on functionality. Separate the technical logging from the alerting, properly inject dependencies, and mock the objects as usual during testing. Wrap alerting into a separate class or package, mock that wrapper while testing other code, and test the wrapper separately. Whenever you write code that is to be tested, you have to make it testable. Which is obvious, since you develop your code using TDD.   Reference: Never Test Logging from our JCG partner Peter Verhas at the Java Deep blog. ...
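To make the wrapping advice above concrete, here is a minimal sketch. The Alerter interface, AccountService class, and message strings are invented for illustration; the point is only that alerting goes through an injectable dependency (which a hand-rolled fake can observe in a test), while any private static logger remains an untested implementation detail.

```java
import java.util.ArrayList;
import java.util.List;

// Alerting is part of the specification, so it gets its own small abstraction.
interface Alerter {
    void alert(String message);
}

class AccountService {
    private final Alerter alerter;

    AccountService(Alerter alerter) {
        this.alerter = alerter; // injected, so tests can substitute a fake
    }

    boolean connect(boolean databaseUp) {
        if (!databaseUp) {
            // Expected-but-exceptional event: this is alerting, not logging,
            // so it goes through the Alerter rather than a static logger.
            alerter.alert("database connection dropped");
            return false;
        }
        return true;
    }
}

// A hand-rolled fake is enough; no mocking framework is needed.
class RecordingAlerter implements Alerter {
    final List<String> messages = new ArrayList<>();
    @Override public void alert(String message) { messages.add(message); }
}

public class AlerterDemo {
    public static void main(String[] args) {
        RecordingAlerter fake = new RecordingAlerter();
        AccountService service = new AccountService(fake);
        service.connect(false);
        System.out.println(fake.messages); // the alert is observable in a test
    }
}
```

In production the RecordingAlerter would be replaced by an implementation that writes to a file, sends an SMS, or delegates to a log4j appender; the tests never need to touch a static logger either way.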

Law of Demeter

Reduce coupling and improve encapsulation… General In this post I want to go over the Law of Demeter (LoD). I find this topic extremely important for keeping code clean, well-designed and maintainable. In my experience, seeing it broken is a huge smell of bad design. Following the law, or refactoring based on it, leads to much improved, more readable and more maintainable code. So what is the Law of Demeter? I will start by mentioning the four basic rules. The Law of Demeter says that a method M of object O may access / invoke methods of:O itself M's input arguments Any object created in M O's direct components (fields / dependencies)These are fairly simple rules. Let's put this in other words: each unit (method) should have limited knowledge about other units. Metaphors The most common one is: don't talk to strangers. How about this: suppose I buy something at 7-11. When I need to pay, will I give my wallet to the clerk so she can open it and take the money out? Or will I hand her the money directly? Or this metaphor: when you take your dog out for a walk, do you tell it to walk, or do you tell its legs? Why do we want to follow this rule?We can change a class without a ripple effect of changing many others. We can change called methods without changing anything else. Using LoD makes our tests much easier to construct. We don't need to write so many 'when' clauses for mocks that return and return and return. It improves encapsulation and abstraction (I'll show this in the example below). Basically, we hide "how things work". It makes our code less coupled: a caller method is coupled to only one object, and not to all of its inner dependencies. It usually models the real world better. Take the wallet and payment as an example.Counting Dots? Although many dots usually imply a LoD violation, sometimes it doesn't make sense to "merge the dots". Does getEmployee().getChildren().getBirthdays() suggest that we should write something like getEmployeeChildrenBirthdays()? I am not entirely sure. 
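The dog metaphor above can be sketched in a few lines. The class and method names here are invented for illustration and are not from the article:

```java
class Leg {
    private int steps;
    void move() { steps++; }      // one step
    int steps() { return steps; }
}

class Dog {
    private final Leg left = new Leg();
    private final Leg right = new Leg();

    // LoD-friendly: callers tell the Dog to walk and never touch its legs.
    void walk() {
        left.move();
        right.move();
    }

    int totalSteps() { return left.steps() + right.steps(); }
}

public class WalkTheDog {
    public static void main(String[] args) {
        Dog dog = new Dog();
        dog.walk();                 // good: talk to the dog
        // dog.getLeftLeg().move(); // bad: talking to the dog's legs
        System.out.println(dog.totalSteps()); // 2
    }
}
```

Because Dog never exposes its legs, a caller cannot even write the violating form; the encapsulation is enforced by the design, not by discipline.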
Too Many Wrapper Classes This is another outcome of trying to avoid LoD violations. In this particular situation, I strongly believe that it's another design smell which should be taken care of. As always, we must use common sense while coding, cleaning and / or refactoring. Example Suppose we have a class: Item. The item can hold multiple attributes. Each attribute has a name and values (it's a multiple-value attribute). The simplest implementation would use a Map. public class Item { private final Map<String, Set<String>> attributes;public Item(Map<String, Set<String>> attributes) { this.attributes = attributes; } public Map<String, Set<String>> getAttributes() { return attributes; } }Let's have a class ItemSaver that uses the Item and its attributes (please ignore the unstructured methods; this is an example for LoD, not SRP): public class ItemSaver { private String valueToSave; public ItemSaver(String valueToSave) { this.valueToSave = valueToSave; }public void doSomething(String attributeName, Item item) { Set<String> attributeValues = item.getAttributes().get(attributeName); for (String value : attributeValues) { if (value.equals(valueToSave)) { doSomethingElse(); } } }private void doSomethingElse() { } }Suppose I know from the context of the application that it's a single value, and I want to take it. Then the code would look like: Set<String> attributeValues = item.getAttributes().get(attributeName); String singleValue = attributeValues.iterator().next();// String singleValue = item.getAttributes().get(attributeName).iterator().next(); I think it is clear that we have a problem. Wherever we use the attributes of the Item, we know how it works. We know its inner implementation. It also makes our tests much harder to maintain. Let's see an example of a test using a mock (Mockito). You can imagine how much effort it takes to change and maintain it. 
Item item = mock(Item.class); Map<String, Set<String>> attributes = mock(Map.class); Set<String> values = mock(Set.class); Iterator<String> iterator = mock(Iterator.class); when(iterator.next()).thenReturn("the single value"); when(values.iterator()).thenReturn(iterator); when(attributes.containsKey("the-key")).thenReturn(true); when(attributes.get("the-key")).thenReturn(values); when(item.getAttributes()).thenReturn(attributes);We could use a real Item instead of mocking, but we would still need to create lots of pre-test data. Let's recap:We exposed the inner implementation of how Item holds attributes. In order to use the attributes, we needed to ask the Item and then ask its inner objects (the values). If we ever want to change the attributes implementation, we will need to make changes in the classes that use Item and the attributes. Probably a lot of classes. Constructing the test is tedious, cumbersome, error-prone and requires lots of maintenance.Improvement The first improvement would be to let Item delegate to the attributes. public class Item { private final Map<String, Set<String>> attributes; public Item(Map<String, Set<String>> attributes) { this.attributes = attributes; }public boolean attributeExists(String attributeName) { return attributes.containsKey(attributeName); }public Set<String> values(String attributeName) { return attributes.get(attributeName); }public String getSingleValue(String attributeName) { return values(attributeName).iterator().next(); } }And the test becomes much simpler. Item item = mock(Item.class); when(item.getSingleValue("the-key")).thenReturn("the single value"); We are now (almost) totally hiding the implementation of attributes from other classes. The client classes are not aware of the implementation, except in two cases:Item still knows how attributes are built. 
The class that creates Item (whichever it is) also knows the implementation of attributes.The two points above mean that if we change the implementation of attributes (to something other than a map), at least two other classes will need to be changed. This is a great example of high coupling. The Next Step Improvement The solution above will sometimes (usually?) be enough. As pragmatic programmers, we need to know when to stop. However, let's see how we can improve on the first solution. Create a class Attributes: public class Attributes { private final Map<String, Set<String>> attributes;public Attributes() { this.attributes = new HashMap<>(); }public boolean attributeExists(String attributeName) { return attributes.containsKey(attributeName); }public Set<String> values(String attributeName) { return attributes.get(attributeName); } public String getSingleValue(String attributeName) { return values(attributeName).iterator().next(); }public Attributes addAttribute(String attributeName, Collection<String> values) { this.attributes.put(attributeName, new HashSet<>(values)); return this; } }And the Item that uses it: public class Item { private final Attributes attributes;public Item(Attributes attributes) { this.attributes = attributes; } public boolean attributeExists(String attributeName) { return attributes.attributeExists(attributeName); } public Set<String> values(String attributeName) { return attributes.values(attributeName); }public String getSingleValue(String attributeName) { return attributes.getSingleValue(attributeName); } }(Did you notice? The implementation of attributes inside Item changed, but the test did not need to. This is thanks to the small change of delegation.) In the second solution we improved the encapsulation of Attributes. Now even Item does not know how it works. We can change the implementation of Attributes without touching any other class. 
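As a sketch of what that freedom could look like in practice: extracting an interface is our illustration (the article keeps Attributes as a single concrete class), and the class names SetAttributes and ListAttributes are invented, but the sketch shows the swap the article promises, with callers depending only on the contract.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// The contract Item (and its tests) would depend on.
interface Attributes {
    boolean attributeExists(String attributeName);
    Collection<String> values(String attributeName);
    String getSingleValue(String attributeName);
}

// Set-backed, as in the article's example.
class SetAttributes implements Attributes {
    private final Map<String, Set<String>> attributes = new HashMap<>();

    SetAttributes addAttribute(String name, Collection<String> values) {
        attributes.put(name, new HashSet<>(values));
        return this;
    }
    public boolean attributeExists(String name) { return attributes.containsKey(name); }
    public Collection<String> values(String name) { return attributes.get(name); }
    public String getSingleValue(String name) { return values(name).iterator().next(); }
}

// List-backed alternative: callers and their tests are unaffected by the swap.
class ListAttributes implements Attributes {
    private final Map<String, List<String>> attributes = new HashMap<>();

    ListAttributes addAttribute(String name, Collection<String> values) {
        attributes.put(name, new ArrayList<>(values));
        return this;
    }
    public boolean attributeExists(String name) { return attributes.containsKey(name); }
    public Collection<String> values(String name) { return attributes.get(name); }
    public String getSingleValue(String name) { return values(name).iterator().next(); }
}

public class AttributesSwapDemo {
    public static void main(String[] args) {
        Attributes set = new SetAttributes().addAttribute("color", Arrays.asList("red"));
        Attributes list = new ListAttributes().addAttribute("color", Arrays.asList("red"));
        System.out.println(set.getSingleValue("color"));  // red
        System.out.println(list.getSingleValue("color")); // red
    }
}
```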
We can make different implementations of Attributes:An implementation that holds a Set of values (as in the example). An implementation that holds a List of values. A totally different data structure that we can think of.As long as all of our tests pass, we can be sure that everything is OK. What did we get?The code is much more maintainable. Tests are simpler and more maintainable. It is much more flexible. We can change the implementation of Attributes (map, set, list, whatever we choose). Changes in Attributes do not affect any other part of the code, not even the classes that directly use it. Modularization and code reuse: we can use the Attributes class in other places in the code.  Reference: Law of Demeter from our JCG partner Eyal Golan at the Learning and Improving as a Craftsman Developer blog. ...

Counteroffers, Secrecy, and Fear

Counteroffers are a fairly common occurrence in technology and other competitive job markets, and much of what we think we know about counteroffers seems to originate from agency recruiters. Google "counteroffer" and we find articles with fear-inducing titles. Check the bylines of these articles and you'll discover that many are written by recruiters. Some agencies are so bold as to direct their counteroffer article right at the candidates they represent, or publish a page of "Counteroffer Facts", which goes on to list things like "Your loyalty will always be questioned" (which is probably not a fact). It's pretty clear how recruiters feel about counteroffers. There is nothing wrong with recruiters writing articles about counteroffers; you are reading one now. Many provide 'statistics', while others contain anecdotes revealing the unfortunate fate of those who accepted. Spoiler alert: in recruiter articles, those who accept counteroffers always die alone and penniless. These pieces aren't necessarily biased or untrue just because of the source, but when recruiters are the source you need to understand one thing: recruiters NEVER benefit from an accepted counteroffer.¹ Counteroffers are the bane of a recruiter's existence. Put yourself in a recruiter's shoes for a minute. Over a few weeks you've identified a strong candidate, screened, set up interviews, debriefed, negotiated a salary, checked references, and an offer was accepted. You will be rewarded handsomely, as a fee might be in the neighborhood of 30K (sometimes less, sometimes more). If the recruiter is independent or self-employed, that's a car. And then you get a call saying "My company made me a counteroffer, so I'm staying." You just 'lost' a car. I've had it happen, and it burns. Even if a counteroffer makes all the candidate's wishes come true, it's still never in the recruiter's best interest for it to be accepted. 
Recruiters get paid when they get you a job with their client – not when their efforts result in a better job at your current company. This is part of the reason the recruiter/candidate relationship has flaws, as the interests of the two parties are not always aligned. It's no wonder some recruiters invest significant time in talking about counteroffers. How do recruiters prevent counteroffer acceptance? The best way to prevent a counteroffer from being accepted is to prevent one from even being presented, and recruiters may take a two-pronged approach to protect their fee and the interests of their client. Here is a peek into the methods. (NOTE: These details are provided to benefit job seekers, and not as any endorsement.) 1. Find out why a candidate is considering new jobs. "Why are we talking?" This question sounds innocuous and usually comes across as an ice breaker, yet the answer proves valuable if a counteroffer is presented. 2. Get the candidate's opinion on counteroffers early. "Have you ever accepted a counteroffer?" may come up in your first conversation with a recruiter. Candidates rarely admit that they would consider a counteroffer. If the recruiter can get that opinion in an email, expect to see that email again when a counteroffer is made. 3. Tell stories and provide 'statistics'. Many recruiters cite a National Employment Association study that says 50-89% (depending on where you find the statistic) of people who accept counteroffers will not be with the company after six months. I was unable to find any reliable source for that study (date, sample size). Of the 34,000 Google hits on "National Employment Association", over half contain the word counter or counteroffer. Seems odd, no? It appears the organization somehow merged or morphed into a couple of others (NAPC or NAPS), but the statistic lives on. I've seen anecdotes suggesting this stat has been used since the '70s. 
Others mention a Business Week figure of 90%, but all I was able to find was an article from 2004 that cites "some studies" and puts the number over 50%. One may get the impression that recruiting firms are using statistics from other recruiting firms, as these studies seem difficult to verify. Anyone got a source? To get decent statistics, you need consistent methods of gathering data. Even the definition of what qualifies as a counteroffer is up for debate, so any stats are questionable. Even without numbers, anecdotes work just as well. 4. Fear, shame, worthlessness. The statistic is delivered as "Most who accept counteroffers are gone in six months, either by choice or (dramatic pause) … termination." People have a strong fear of being fired. Candidates will be told that they will never be promoted or given raises, will be social outcasts, and will not be given interesting work. These are all possibilities, but obviously not guarantees, and someone who habitually forgets to buy the donuts could suffer a similar fate. The recruiter may also remind you that your employer never really valued you in the first place. They might say "Sure, you got a raise – but you had to hold a gun to their head to get it. How much do they really care about you?" 5. Resignation. If a recruiter has ever offered to provide you with a resignation template, or even to tender your resignation on your behalf (it's rare, but it happens), it may have been a genuine act of kindness to save you time. A recruiter's resignation template may include language intended to prevent a counteroffer, and some are direct: "Let's not waste time discussing what it would take for me to stay, as I have already passed on my firm commitment to a new employer." If the candidate emphasizes that he/she does not want a counteroffer, resigning with a letter that says so is appropriate. 6. Transitioning. 
Once a candidate accepts an offer, a recruiter wants the candidate's loyalty and interest to shift to the new employer. A lunch may be scheduled before the start date to discuss projects. If there are skills to be learned, the candidate might choose to get a head start. Interactions between employers and new hires have obvious benefits beyond preventing a counteroffer acceptance, but that will be on the recruiter's mind. I remember discussions in one recruiting office about which day of the week was most common for counteroffers to be presented, and suggestions that the recruiter get together with the candidate on that day. When It Happens When a candidate accepts a counteroffer, it usually sets off recruiter panic mode. The recruiter will remind the candidate of some things: why they were looking in the first place, the opinions they expressed about counteroffers, the dangers, their firm commitment to the new company, etc. As a last resort the recruiter might make a personal plea and mention the fee, or even their own family. Some people will say a lot of things for 30K. I don't have a problem with some of the things recruiters say as warnings against accepting a counteroffer, as much of the script contains real possibilities. Accepters might not get promotions, raises, or interesting work. We can question the morality of using fear to get someone to do what you want them to do. If the recruiter is honest and believes that the new job is best for the candidate's career, it's easier to justify what is said. I do have a problem with the statistics, and with the recruiting industry's willingness to throw around questionable numbers. Secrecy Statistics about counteroffers are impossible to measure when you consider the interests and incentives of the parties involved. Companies that counteroffer departing employees are best served to keep that fact private. 
Employees may pursue offer letters just for the sake of a raise, and outsiders may question whether the company pays market rates. Likewise, those who accept counteroffers may be concerned about word getting out, as it may genuinely impact attitudes towards the employee. Employees who accept counteroffers are likely asked to keep quiet about what happened, and it's usually in their best interest to do so. Based on the secrecy incentives on both sides, one might assume that the prevalence of counteroffers is higher than reported, and that the success/failure rate of counteroffers is difficult to assess. Conclusion Regurgitating outdated or questionable statistics as fact hurts the credibility of the recruiting industry. Counteroffers are all very different and impossible to classify or evaluate without knowing countless other details about the candidate's situation and the organizations involved. When listening to recruiters' opinions (including this article), it's vital to keep the source in mind and make important career decisions based on all the information available. Ask yourself whether the counteroffer will solve the issues you had with your employer, and if so, whether your means of achieving these results (resignation) will make it difficult to stay. ¹ There is one extremely rare instance where a recruiter might benefit from a counteroffer. Say the recruiter places a candidate at Company A, and during the recruiter's guarantee period (usually 90 days) the candidate accepts an offer from Company B, and then gets a counteroffer from Company A. In this case it benefits the recruiter for the candidate to accept the counteroffer from Company A, so the recruiter will not have to refund the fee.   Reference: Counteroffers, Secrecy, and Fear from our JCG partner Dave Fecak at the Job Tips For Geeks blog. ...

Android Weather app using Yahoo weather provider and AutoCompleteTextView (Part 1)

This post is the first of a series where we will explore the Yahoo Weather API. Our goal is to build an app that shows the current weather conditions using the Yahoo weather provider. We already talked about another weather provider, OpenWeatherMap, in two other posts. You can have a look here and here if you are interested. In this first post, we want to analyze how we can retrieve city information using the Yahoo API. We assume you already have a Yahoo developer account; if not, you can create one using this link. It is important that you have your appid: it is totally free, but it is necessary to use the Yahoo API. While exploring the Yahoo API, we have the chance to describe some interesting Android UI components like AutoCompleteTextView, as well as XML parsing in Android. Our goal for now is an Android app that shows the list of cities matching a partial name entered by the user, as shown below:Yahoo Woeid The first step to get weather information is retrieving the woeid. This is a spatial ID that Yahoo uses to identify a city/region. We have to find a way to get this woeid from the city name entered by the user. From the UI perspective, we want the user to insert just the name of the city, or a partial name, and we have to find a way to get a list of matching cities with their corresponding woeids. We can use the API shown below to get a list of cities matching a pattern: http://where.yahooapis.com/v1/places.q(city_name_pattern);count=MAX_RESULT_SIZE?appid=your_app_id If you use this API in your browser, you will get an XML document with information about the cities that match city_name_pattern. Android Yahoo XML Data Parser It is time to create an XML parser so that we can extract information from the data we get when using the API shown above. 
The first step is creating a data model that holds the city information; in our case it is quite simple: public class CityResult {private String woeid; private String cityName; private String country;public CityResult() {}public CityResult(String woeid, String cityName, String country) { this.woeid = woeid; this.cityName = cityName; this.country = country; }// get and set methods@Override public String toString() { return cityName + "," + country; } } Now we create a parser class that we call YahooClient. This class is in charge of retrieving the XML data and parsing it. It has a simple static method that accepts the pattern we use to get the city list. At the beginning, we open an HTTP connection and pass the stream to the XML parser: yahooHttpConn= (HttpURLConnection) (new URL(query)).openConnection(); yahooHttpConn.connect(); XmlPullParser parser = XmlPullParserFactory.newInstance().newPullParser(); parser.setInput(new InputStreamReader(yahooHttpConn.getInputStream())); Then we start parsing the data, looking for the tags we are interested in. Considering our data model, for now we are just interested in the woeid, city name and country. There is other information in the XML, but we don't want to extract it yet. int event = parser.getEventType();CityResult cty = null; String tagName = null; String currentTag = null;// We start parsing the XML while (event != XmlPullParser.END_DOCUMENT) { tagName = parser.getName();if (event == XmlPullParser.START_TAG) { if (tagName.equals("place")) { // place Tag Found so we create a new CityResult cty = new CityResult(); Log.d("Swa", "New City found"); } currentTag = tagName; Log.d("Swa", "Tag ["+tagName+"]"); } else if (event == XmlPullParser.TEXT) { // We found some text. 
let's see the tagName to know the tag related to the text if ("woeid".equals(currentTag)) cty.setWoeid(parser.getText()); else if ("name".equals(currentTag)) cty.setCityName(parser.getText()); else if ("country".equals(currentTag)) cty.setCountry(parser.getText());// We don't want to analyze other tags at the moment } else if (event == XmlPullParser.END_TAG) { if ("place".equals(tagName)) result.add(cty); }event = parser.next(); } The code above is very simple: at line 1 we get the first XML event, and we keep traversing the XML document until we reach the end. At the end of the method, we have a list of cities with their woeids, which is the information we were looking for. AutoCompleteTextView and ArrayAdapter with Filter Once we know how to get the data from the XML, we have to show the user the list of items we get from the Yahoo API. There are several ways to reach this goal; we will use AutoCompleteTextView. This component is defined in the Android docs as "an editable text view that shows completion suggestions automatically while the user is typing. The list of suggestions is displayed in a drop down menu from which the user can choose an item to replace the content of the edit box with." It satisfies our needs! Using this component is trivial; a little more complex is setting up the array adapter and filtering the results. Usually this component is used with a static item list; in our case we have to retrieve it from a remote server. 
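Before wiring up the adapter, it may help to see the parsing loop above end to end in plain Java. The fragments in the article target Android's XmlPullParser; this is a rough desktop-Java equivalent using the standard StAX API. The sample XML and tag names mirror the assumed Yahoo response, and this sketch is an illustration, not the article's actual YahooClient:

```java
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class PlacesParserDemo {

    // Same shape as the article's CityResult, trimmed for the sketch.
    static class CityResult {
        String woeid, cityName, country;
        @Override public String toString() { return cityName + "," + country; }
    }

    static List<CityResult> parse(String xml) throws Exception {
        List<CityResult> result = new ArrayList<>();
        XMLStreamReader parser = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        CityResult city = null;
        String currentTag = null;
        while (parser.hasNext()) {
            int event = parser.next();
            if (event == XMLStreamConstants.START_ELEMENT) {
                currentTag = parser.getLocalName();
                if ("place".equals(currentTag)) {
                    city = new CityResult(); // a new <place> starts a new city
                }
            } else if (event == XMLStreamConstants.CHARACTERS && city != null) {
                String text = parser.getText().trim();
                if (text.isEmpty()) continue;
                // The same three tags the article extracts.
                if ("woeid".equals(currentTag)) city.woeid = text;
                else if ("name".equals(currentTag)) city.cityName = text;
                else if ("country".equals(currentTag)) city.country = text;
            } else if (event == XMLStreamConstants.END_ELEMENT
                    && "place".equals(parser.getLocalName())) {
                result.add(city); // </place> closes the current city
            }
        }
        return result;
    }

    public static void main(String[] args) throws Exception {
        String sample = "<places>"
                + "<place><woeid>44418</woeid><name>London</name>"
                + "<country>United Kingdom</country></place></places>";
        List<CityResult> cities = parse(sample);
        System.out.println(cities);
    }
}
```

The Android version follows the same event loop; only the parser API differs (XmlPullParser events versus StAX stream constants).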
First we implement a custom adapter that extends the ArrayAdapter and this is very simple: private class CityAdapter extends ArrayAdapter<CityResult> {private Context ctx; private List<CityResult> cityList = new ArrayList<CityResult>();public CityAdapter(Context ctx, List<CityResult> cityList) { super(ctx, R.layout.cityresult_layout, cityList); this.cityList = cityList; this.ctx = ctx; }@Override public CityResult getItem(int position) { if (cityList != null) return cityList.get(position);return null; }@Override public int getCount() { if (cityList != null) return cityList.size();return 0; }@Override public View getView(int position, View convertView, ViewGroup parent) { View result = convertView;if (result == null) { LayoutInflater inf = (LayoutInflater) ctx.getSystemService(Context.LAYOUT_INFLATER_SERVICE); result = inf.inflate(R.layout.cityresult_layout, parent, false);}TextView tv = (TextView) result.findViewById(R.id.txtCityName); tv.setText(cityList.get(position).getCityName() + "," + cityList.get(position).getCountry());return result; }@Override public long getItemId(int position) { if (cityList != null) return cityList.get(position).hashCode();return 0; } ... } Much more interesting is how we can retrieve the data from a remote server. If we implement the Filterable interface in our custom adapter we can get the result we are looking for. So we have: private class CityAdapter extends ArrayAdapter<CityResult> implements Filterable { .... 
@Override public Filter getFilter() { Filter cityFilter = new Filter() {@Override protected FilterResults performFiltering(CharSequence constraint) { FilterResults results = new FilterResults(); if (constraint == null || constraint.length() < 2) return results;List<CityResult> cityResultList = YahooClient.getCityList(constraint.toString()); results.values = cityResultList; results.count = cityResultList.size(); return results; }@Override protected void publishResults(CharSequence constraint, FilterResults results) { cityList = (List) results.values; notifyDataSetChanged(); } };return cityFilter; } .. } At line 4 we implement our Filter, which has two methods we have to implement. In the performFiltering method we make the HTTP call and retrieve the data (line 12). Apparently we could have an ANR problem, as we know very well that we shouldn't make an HTTP call on the main thread. But if you read the documentation of performFiltering, you will find that this method runs in a separate thread, so we won't have any problem. Finally, we can set the adapter on our UI component and handle the event when the user clicks on an item: AutoCompleteTextView edt = (AutoCompleteTextView) rootView.findViewById(R.id.edtCity); CityAdapter adpt = new CityAdapter(this.getActivity(), null); edt.setAdapter(adpt); edt.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { // We handle the onclick event and select the city chosen by the user } }); In the next post we will retrieve weather information using the woeid, so stay tuned! Source code available soon.  Reference: Android Weather app using Yahoo weather provider and AutoCompleteTextView (Part 1) from our JCG partner Francesco Azzola at the Surviving w/ Android blog. ...
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
