
What's New Here?


Java Bootstrap: Dropwizard vs. Spring Boot

How do you get a production-ready Java application off the ground in the shortest time possible? I'm not a morning person, so sometimes it takes a while for the "all systems are go" cue to kick in. This used to be true for Java applications until not too long ago, but unlike the invention of the snooze function on the alarm clock, the solution we're going to discuss here actually makes much better sense. With modern open source frameworks like Dropwizard, Spring Boot, Groovy's Grails and Scala's Play! you're able to build a production-ready application from scratch in a matter of minutes. Even if you're not a morning person. And even if you're not fond of wizard hats. In this post we'll discuss the similarities and differences of the lightweight Java-based frameworks, with a focus on Dropwizard and Spring Boot.

The Trade-off: Freedom of choice vs. The need for speed

Whichever framework you go with, you're sacrificing some freedom of choice, since both Dropwizard and Spring Boot are highly opinionated and strongly believe in convention over configuration. How strongly? You'll discover just that in the side-by-side comparison we've done, examining the different flavors of 3rd party libraries that each of them adds to the mix. Most if not all core features a production-grade application requires come out of the box or are available as integrations. The upside of this sacrifice is speed, although it's fun at times to fiddle around with new libraries and customize your own perfect environment. When you need to get something off the ground quickly and start rolling, you're better off delegating those decisions and getting rid of the complexity that comes with them. It's not exactly a blue pill vs. red pill scenario: further down the road, when you're up and running, you will most likely be able to customize and tweak things if needed. Now just direct your favorite build tool, be it Gradle or Maven, to Dropwizard or Spring Boot and you're good to go. Let's dig in and discover which aspects of each framework will leave you locked in and where you can be more flexible. Spoiler alert: we faced a similar dilemma here at Takipi and decided to go with Dropwizard for an on-premise flavor of Takipi for enterprises. But what once could be seen as the default (and only) choice with Dropwizard led us to break some prejudice we had about Spring Boot and exhausting XML configurations.

Dropwizard vs. Spring Boot: Who's got your backend?

A production-grade application relies on many components, and each framework has made its choices for us. All the weapons of choice for getting a RESTful web application off the ground are laid out in this table, with Dropwizard in the left corner with a wizard hat and Spring Boot in the right corner with the green shorts. The core out-of-the-box libraries and add-ons are separated by color, with Spring's internal dependencies marked in white. Ok, so now that we have a better view of the land, let's see what this actually tells us. I'd also recommend taking a closer look at each of the frameworks, since everything is open source and available for your viewing pleasure right on GitHub: here are the Dropwizard source files, and here's Spring Boot.

Spring Dependencies

Like it says on the tin, Spring Boot is focused on Spring applications.
So if you'd like to get into the Spring ecosystem, or are already familiar with it and need to set up a quick application, this would probably be the way to go. The REST support and the DevOps features we'll talk about soon, like metrics and health checks, are based on Spring Framework's core, while Dropwizard builds its REST support on Jersey. It's pretty much the only aspect where Spring Boot leaves you locked in, although it's more flexible on other fronts.

HTTP Server

Here we can see just how Spring Boot can be more flexible. Dropwizard takes the convention-over-configuration approach a bit further than Spring Boot and is completely based on Jetty, while Spring Boot takes the embeddable version of Tomcat by default but leaves you a way out if you prefer Jetty or even RedHat's Undertow.

Logging

This is another example of the same convention-over-configuration issue: Dropwizard switched from log4j to Logback in v0.4. I'm guessing that with log4j2's recent GA release, this might be subject to change. On Spring Boot's front, we get to choose between Logback, log4j and log4j2 if we require logging. By the way, if you're using Logback, you should definitely check out this benchmark we ran to compare the performance of different logging methods.

Dependency Injection

A main difference between the two frameworks is dependency injection support. As some of you know, Spring's core comes with its built-in dependency injection support, while with Dropwizard this doesn't come out of the box and you'll have to choose one of the community integrations that support it. A popular choice would be going with Google's Guice and using one of the community-led integrations.

Testing – Fest vs. Hamcrest

Both frameworks have a special module for testing, dropwizard-testing and spring-boot-starter-test, including JUnit and Mockito dependencies. Spring Boot naturally uses Spring Test as well, and the main difference here comes in the shape of matcher objects, which check whether different objects match the same pattern. Dropwizard favors FEST matchers (which are no longer developed) and Spring Boot goes with Hamcrest.

Debugging in Production

Unlike built-in solutions for testing during the development stage, once your application is deployed in production there's no guarantee that everything will go as planned. Especially when you deploy code fast. With Takipi in the mix, you're able to know which errors pose the highest risk, prioritize them, and receive actionable information on how to fix them.

No Dev Without Ops

To earn the title of a production-grade application, each framework's core features include support for metrics, health checks and tasks. In a nutshell, metrics allow you to keep track of stats like memory usage and how long it takes to execute areas of your code. Health checks are a way of testing on the go, answering questions like: is this socket still open? Is the database connection alive? And with tasks support you can schedule maintenance operations or periodic tasks. The Dropwizard metrics library gained some popularity on its own, and you can add it to any project, or even use it with Spring Boot's metrics, to gain insight into what your code does in production. One cool feature is reporting to services like Graphite or Ganglia, with some 20+ available integrations. Health checks also come with Dropwizard metrics, and tasks are implemented as part of the framework. On the Spring Boot front, the framework uses Spring's core features to support its Ops angle.
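For a taste of what that metrics support looks like in code, here is a minimal sketch against the Dropwizard Metrics API (com.codahale.metrics, 3.x era); the metric name and reporting period are arbitrary choices of mine, not anything prescribed by either framework:

import com.codahale.metrics.ConsoleReporter;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;

import java.util.concurrent.TimeUnit;

public class MetricsSketch {

    public static void main(String[] args) throws InterruptedException {
        MetricRegistry registry = new MetricRegistry();

        // Dump all registered metrics to stdout every 10 seconds
        ConsoleReporter reporter = ConsoleReporter.forRegistry(registry)
                .convertRatesTo(TimeUnit.SECONDS)
                .convertDurationsTo(TimeUnit.MILLISECONDS)
                .build();
        reporter.start(10, TimeUnit.SECONDS);

        Timer requests = registry.timer("requests");
        for (int i = 0; i < 100; i++) {
            // Each Context measures one timed block; stop() records it
            Timer.Context context = requests.time();
            try {
                Thread.sleep(20); // simulated work
            } finally {
                context.stop();
            }
        }

        Thread.sleep(11_000); // give the reporter a chance to flush once
    }
}

Swapping ConsoleReporter for a Graphite or Ganglia reporter is where those 20+ integrations come in.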
A note on going container-less

The key driver that enabled the creation of Dropwizard, followed by Spring Boot a few years later, is the container-less Java HTTP server. Unlike with a standalone container, you can simply add an HTTP server like any other library dependency in your app. It's straightforward, easy to update, and you don't have to deal with any WAR files whatsoever. XML configurations are kept to a minimum. As for the deployment end of the story, both Dropwizard and Spring Boot make use of fat JARs to pack all JARs and their dependencies into one file, making it easier to deploy using a quick one-liner.

Community and Release Cycle

Dropwizard was initially released by Coda Hale in late 2011, back in his days at Yammer. Since then it has passed through some 20 versions and currently stands at 0.7.1, enjoying great community support as the go-to guide for modern Java applications. On the downside, new releases are slowing down after they used to come out every couple of months. In the upcoming 0.8 version we're expecting to see mostly 3rd party version updates and minor fixes. Dropwizard currently supports Java 7 and above; to use it on Java 8 you can check out this partial update to enjoy some of its benefits and new features (or if you just don't like joda-time for some reason). Today you'll see mostly commits from Jochen Schalanda, with over 160 individual contributors and tens of community-supported integrations, like dropwizard-extra from Datasift. Spring support is also included among the available Dropwizard integrations. One more thing you should definitely check out is the official user group right here. With the Pivotal-backed Spring Boot joining the game with a 1.0 in 2014, there are over 40 official integrations (Starter POMs) to pretty much any 3rd party library you can think of. This includes anything from logging to social API integrations. One new Spring Boot project worth mentioning here is JHipster, a Yeoman generator for Spring Boot and Angular. On the bottom line, you could say that Dropwizard has a larger community around it while Spring Boot enjoys better official and structured support, together with Spring's existing user base.

Conclusion

If you're looking to get into the Spring ecosystem, then choosing Spring Boot is probably a no-brainer. More than a way to bootstrap a RESTful Java application, it also acts as a gateway to Spring with integrations to tens of services. Maybe the real question here is whether you should start looking into (or go back to) Spring – which could be a whole other subject to discuss. Otherwise, Dropwizard would suit your needs best. A second issue here is how dependent you are on dependency injection. If Guice is your choice, then going with Dropwizard and using one of the community integrations would be an easy fix, rather than doing dependency injection the Spring way. And last but not least, looking at the side-by-side comparison: which framework makes the choices you would make if you were to build an application from scratch yourself? Keep in mind the default picks, since spending even more time configuring this bootstrap kind of betrays its cause. I hope you've found this comparison useful, and I'd be happy to hear your comments about it and the factors that made you choose one over the other.

Reference: Java Bootstrap: Dropwizard vs. Spring Boot from our JCG partner Alex Zhitnitsky at the Takipi blog.
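To make the "matter of minutes" claim concrete, here is roughly what a minimal Spring Boot web application looks like – a sketch closely following the canonical getting-started shape, assuming the spring-boot-starter-web dependency is on the classpath:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
@EnableAutoConfiguration
public class HelloApplication {

    @RequestMapping("/hello")
    @ResponseBody
    String hello() {
        return "Hello from the embedded container!";
    }

    public static void main(String[] args) {
        // Boots the embedded servlet container and registers this
        // class as both configuration and controller
        SpringApplication.run(HelloApplication.class, args);
    }
}

Package it as a fat JAR and java -jar is the whole deployment story – which is exactly the one-liner the paragraph above is talking about. A Dropwizard application is comparably small, just structured around an Application subclass and a Jersey resource instead.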

Create your own AOP in Java

Introduction

As you know, AOP is one of the best features provided by the Spring framework, offering the utmost flexibility when addressing cross-cutting concerns. Have you ever thought about how AOP works in Spring? It is sometimes asked in senior-level technical interviews, and the question becomes even more significant when it is restricted to core Java. Recently a friend of mine went to an interview and faced an embarrassing question about how to implement AOP in core Java alone, without using Spring and related libraries. In this article I will give you an outline of how to create your own AOP in core Java, of course with certain limitations. This is not a comparative study of Spring AOP versus Java AOP. However, you can achieve AOP in Java to a certain extent using the proper design patterns. ...
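The excerpt cuts off before the implementation, but the classic core-Java building block for this kind of AOP is the JDK dynamic proxy (java.lang.reflect.Proxy). Here is a minimal sketch of a logging "aspect" wrapped around an interface – the names are illustrative, not taken from the article:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

interface GreetingService {
    String greet(String name);
}

public class AopSketch {

    public static void main(String[] args) {
        GreetingService target = name -> "Hello, " + name;

        // The InvocationHandler plays the role of an "around" advice:
        // it runs before and after every call on the proxied interface
        InvocationHandler loggingAdvice = (proxy, method, methodArgs) -> {
            System.out.println("before " + method.getName());
            Object result = method.invoke(target, methodArgs);
            System.out.println("after " + method.getName());
            return result;
        };

        GreetingService proxied = (GreetingService) Proxy.newProxyInstance(
                GreetingService.class.getClassLoader(),
                new Class<?>[] { GreetingService.class },
                loggingAdvice);

        System.out.println(proxied.greet("world"));
    }
}

The obvious limitation, hinted at above, is that dynamic proxies only work against interfaces – one of the reasons Spring AOP also leans on bytecode tools like CGLIB for class-based proxying.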

How to use SQL PIVOT to Compare Two Tables in Your Database

This can happen ever so easily. You adapt a table by adding a new column:

ALTER TABLE payments ADD code NUMBER(3);

You go on, implementing your business logic – absolutely no problem. But then, later on (perhaps in production), some batch job fails because it makes some strong assumptions about data types. Namely, it assumes that the two tables payments and payments_archive are of the same row type:

CREATE TABLE payments (
  id NUMBER(18) NOT NULL,
  account_id NUMBER(18) NOT NULL,
  value_date DATE,
  amount NUMBER(25, 2) NOT NULL
);

CREATE TABLE payments_archive (
  id NUMBER(18) NOT NULL,
  account_id NUMBER(18) NOT NULL,
  value_date DATE,
  amount NUMBER(25, 2) NOT NULL
);

Being of the same row type, you can simply move a row from one table to the other, e.g. using a query like this one:

INSERT INTO payments_archive
SELECT * FROM payments
WHERE value_date < SYSDATE - 30;

(Not that using the above syntax is a good idea in general – it's actually a bad idea – but you get the point.) What you're getting now is this:

ORA-00913: too many values

The fix is obvious, but probably the poor soul who has to fix this is not you, but someone else who has to figure out, among possibly hundreds of columns, which ones don't match. Here's how (in Oracle): use PIVOT to compare two tables! You could of course skip PIVOT and simply select all columns from either table from the dictionary views:

SELECT table_name, column_name
FROM all_tab_cols
WHERE table_name LIKE 'PAYMENTS%'

This will produce the following result:

TABLE_NAME         COLUMN_NAME
------------------ ---------------
PAYMENTS           ID
PAYMENTS           ACCOUNT_ID
PAYMENTS           VALUE_DATE
PAYMENTS           AMOUNT
PAYMENTS           CODE
PAYMENTS_ARCHIVE   ID
PAYMENTS_ARCHIVE   ACCOUNT_ID
PAYMENTS_ARCHIVE   VALUE_DATE
PAYMENTS_ARCHIVE   AMOUNT

Not very readable. You could of course use set operations and apply INTERSECT and MINUS (EXCEPT) to filter out matching values. But much better:

SELECT *
FROM (
  SELECT table_name, column_name
  FROM all_tab_cols
  WHERE table_name LIKE 'PAYMENTS%'
)
PIVOT (
  COUNT(*) AS cnt
  FOR (table_name) IN (
    'PAYMENTS' AS payments,
    'PAYMENTS_ARCHIVE' AS payments_archive
  )
) t;

And the above now produces:

COLUMN_NAME  PAYMENTS_CNT PAYMENTS_ARCHIVE_CNT
------------ ------------ --------------------
CODE                    1                    0
ACCOUNT_ID              1                    1
ID                      1                    1
VALUE_DATE              1                    1
AMOUNT                  1                    1

It is now very easy to identify the column that is missing from the PAYMENTS_ARCHIVE table. As you can see, the result from the original query produced one row per column AND per table. We took that result and pivoted it "FOR" the table name, such that we now only get one row per column.

How to read PIVOT? It's easy. Comments are inline:

SELECT *
-- This is the table that we're pivoting. Note that
-- we select only the minimum to prevent side-effects
FROM (
  SELECT table_name, column_name
  FROM all_tab_cols
  WHERE table_name LIKE 'PAYMENTS%'
)
-- PIVOT is a keyword that is applied to the above
-- table. It generates a new table, similar to JOIN
PIVOT (
  -- This is the aggregated value that we want to
  -- produce for each pivoted value
  COUNT(*) AS available
  -- This is the source of the values that we want to
  -- pivot
  FOR (table_name)
  -- These are the values that we accept as pivot
  -- columns. The column names are produced from
  -- these values concatenated with the corresponding
  -- aggregate function name
  IN (
    'PAYMENTS' AS payments,
    'PAYMENTS_ARCHIVE' AS payments_archive
  )
) t;

That's it. Not so hard, was it?
The nice thing about this syntax is that we can generate as many additional columns as we want, very easily:

SELECT *
FROM (
  SELECT table_name, column_name,
    cast(data_type as varchar(6)) data_type
  FROM all_tab_cols
  WHERE table_name LIKE 'PAYMENTS%'
)
PIVOT (
  COUNT(*) AS cnt,
  MAX(data_type) AS type -- new function here
  FOR (table_name) IN (
    'PAYMENTS' AS p,
    'PAYMENTS_ARCHIVE' AS a
  )
) t;

… producing (after additional erroneous DDL) …

COLUMN_NAME      P_CNT P_TYPE      A_CNT A_TYPE
----------- ---------- ------ ---------- ------
CODE                 1 NUMBER          0
ACCOUNT_ID           1 NUMBER          1 NUMBER
ID                   1 NUMBER          1 NUMBER
VALUE_DATE           1 DATE            1 TIMESTAMP
AMOUNT               1 NUMBER          1 NUMBER

This way, we can discover even more flaws between the different row types of the tables. In the above example, we've used MAX(), because we have to provide an aggregation function, even if each pivoted column corresponds to exactly one row in our example – but that doesn't have to be the case.

What if I'm not using Oracle?

SQL Server also supports PIVOT, but other databases don't. You can always emulate PIVOT using GROUP BY and CASE. The following statement is equivalent to the previous one:

SELECT
  t.column_name,
  count(CASE table_name WHEN 'PAYMENTS' THEN 1 END) p_cnt,
  max  (CASE table_name WHEN 'PAYMENTS' THEN data_type END) p_type,
  count(CASE table_name WHEN 'PAYMENTS_ARCHIVE' THEN 1 END) a_cnt,
  max  (CASE table_name WHEN 'PAYMENTS_ARCHIVE' THEN data_type END) a_type
FROM (
  SELECT table_name, column_name, data_type
  FROM all_tab_cols
  WHERE table_name LIKE 'PAYMENTS%'
) t
GROUP BY t.column_name;

This query will now produce the same result on all the other databases as well. Isn't that… ? Yes, it is! The above usage of aggregate functions in combination with CASE can be shortened even more, using the SQL standard FILTER clause, which we've blogged about recently. So, in PostgreSQL, you could write the following query:

SELECT
  t.column_name,
  count(table_name) FILTER (WHERE table_name = 'payments') p_cnt,
  max(data_type)    FILTER (WHERE table_name = 'payments') p_type,
  count(table_name) FILTER (WHERE table_name = 'payments_archive') a_cnt,
  max(data_type)    FILTER (WHERE table_name = 'payments_archive') a_type
FROM (
  SELECT table_name, column_name, data_type
  FROM information_schema.columns
  WHERE table_name LIKE 'payments%'
) t
GROUP BY t.column_name;

Reference: How to use SQL PIVOT to Compare Two Tables in Your Database from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.

Interface Evolution With Default Methods – Part I: Methods

A couple of weeks back we took a detailed look into default methods – a feature introduced in Java 8 which allows interface methods to have an implementation, i.e. a method body, and thus define behavior in an interface. This feature was introduced to enable interface evolution. In the context of the JDK this meant adding new methods to interfaces without breaking all the code out there. But while Java itself is extremely committed to keeping backwards compatibility, the same is not necessarily true for other projects. If those are willing, they can evolve their interfaces at the cost of having clients change their code. Before Java 8 this often involved client-side compile errors, so changes were avoided or clients had to migrate in one go. With default methods, interface evolution can become an error-free process where clients have time between versions to update their code step by step. This greatly increases the feasibility of evolving interfaces and makes it a regular library development tool. Let's have a look at how this is possible for adding, replacing and removing interface methods. A future post will look into ways to replace whole interfaces.

Overview

The post first defines some terminology before covering ways to add, replace and remove interface methods. It is written from the perspective of a developer who changes an interface in her library. I felt that this topic does not need examples, so I didn't write any. If you disagree and would like to see something, leave a comment and – time permitting – I will write some.

Terminology

Interfaces have implementations and callers. Both can exist within the library, in which case they are called internal, or in client code, where they are called external. This adds up to four different categories of using an interface. Depending on how the interface is to be evolved and which uses exist, different patterns have to be applied. Of course, if neither external implementations nor external callers exist, none of this is necessary, so the rest of the article assumes that at least one of those cases exists.

Interface Evolution – Methods

So let's see how we can add, replace or remove interface methods without breaking client code. This is generally possible by following this process:

New Version: A new version of the library is released where the interface definition is transitional and combines the old as well as the new, desired outline. Default methods ensure that all external implementations and calls are still valid and no compile errors arise on an update.

Transition: Then the client has time to move from the old to the new outline. Again, the default methods ensure that adapted external implementations and calls are valid and the changes are possible without compile errors.

New Version: In a new version, the library removes residues of the old outline. Given that the client used her time wisely and made the necessary changes, releasing the new version will not cause compile errors.

This process enables clients to update their code smoothly and on their own schedule, which makes interface evolution much more feasible than it used to be. When following the detailed steps below, make sure to check when internal and external implementations are updated and when internal and external callers are allowed to use the involved method(s). Make sure to follow this procedure in your own code and properly document it for your clients so they know when to do what. The Javadoc tags @Deprecated and @apiNote are a good way to do that.
It is not generally necessary to perform the steps within the transition in that order. If it is, this is explicitly pointed out. Tests are included in these steps for the case that you provide your customers with tests which they can run on their interface implementations.

Add

This process is only necessary if external interface implementations exist. Since the method is new, it is of course not yet called, so that case can be ignored. It makes sense to distinguish whether a reasonable default implementation can be provided or not.

Reasonable Default Implementation Exists

New Version:
- define tests for the new method
- add the method with the default implementation (which passes the tests)
- internal callers can use the method
- internal implementations can override the method where necessary

Transition:
- external callers can use the method
- external implementations can override the method where necessary

Nothing more needs to be done and there is no new version involved. This is what happened with the many new default methods which were added in Java 8.

Reasonable Default Implementation Does Not Exist

New Version:
- define tests for the new method; these must accept UnsupportedOperationExceptions
- add the method:
  - include a default implementation which throws an UnsupportedOperationException (this passes the tests)
  - an @apiNote comment documents that the default implementation will eventually be removed
- override the method in all internal implementations

Transition (the following steps must happen in that order):
- external implementations must override the method
- external callers can use the method

New Version:
- tests no longer accept UnsupportedOperationExceptions
- make the method abstract:
  - remove the default implementation
  - remove the @apiNote comment
- internal callers can use the method

The barely conformant default implementation allows external implementations to update gradually. Note that all implementations are updated before the new method is actually called either internally or externally. Hence no UnsupportedOperationException should ever occur.

Replace

In this scenario a method is replaced by another. This includes the case where a method changes its signature (e.g. its name or number of parameters), in which case the new version can be seen as replacing the old. Applying this pattern is necessary when external implementations or external callers exist. It only works if both methods are functionally equivalent. Otherwise it is a case of adding one method and removing another.

New Version:
- define tests for the new method
- add the new method:
  - include a default implementation which calls the old method
  - an @apiNote comment documents that the default implementation will eventually be removed
- deprecate the old method:
  - include a default implementation which calls the new method (the circular calls are intended; if a default implementation existed, it can remain)
  - an @apiNote comment documents that the default implementation will eventually be removed
  - a @Deprecated comment documents that the new method is to be used
- internal implementations override the new instead of the old method
- internal callers use the new instead of the old method

Transition:
- external implementations override the new instead of the old method
- external callers use the new instead of the old method

New Version:
- make the new method abstract:
  - remove the default implementation
  - remove the @apiNote comment
- remove the old method

While the circular calls look funny, they ensure that it does not matter which variant of the methods is implemented.
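As a minimal sketch of what this transitional stage could look like in code (the interface and method names are mine, not from the original post):

public interface Library {

    /**
     * @deprecated use {@link #newMethod()} instead
     * @apiNote the default implementation only exists for the transition
     */
    @Deprecated
    default String oldMethod() {
        // delegates to the new variant; the circular call is intended
        return newMethod();
    }

    /**
     * @apiNote the default implementation will eventually be removed
     */
    default String newMethod() {
        // delegates to the old variant so existing implementations keep working
        return oldMethod();
    }
}

An existing implementation that only overrides oldMethod() keeps compiling and working, and so does a new one that only overrides newMethod().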
But since both variants have default implementations, the compiler will not produce an error if neither is implemented. Unfortunately this would produce an infinite loop, so make sure to point this out to clients. If you provide them with tests for their implementations, or they wrote their own, they will immediately recognize this though.

Remove

When removing a method, different patterns can be applied depending on whether external implementations exist or not.

External Implementations Exist

New Version:
- tests for the method must accept UnsupportedOperationExceptions
- deprecate the method:
  - include a default implementation which throws an UnsupportedOperationException (this passes the updated tests)
  - a @Deprecated comment documents that the method will eventually be removed
  - an @apiNote comment documents that the default implementation only exists to phase out the method
- internal callers stop using the method

Transition (the following steps must happen in that order):
- external callers stop using the method
- external implementations of the method are removed

New Version:
- remove the method

Note that internal and external implementations are only removed after no more calls to the method exist. Hence no UnsupportedOperationException should ever occur.

External Implementations Do Not Exist

In this case a regular deprecation suffices. This case is only listed for the sake of completeness.

New Version:
- deprecate the method with @Deprecated
- internal callers stop using the method

Transition:
- external callers stop calling the method

New Version:
- remove the method

Reflection

We have seen how interface evolution is possible by adding, replacing and removing methods: a new interface version combines the old and new outline, the client moves from the former to the latter, and a final version removes residues of the old outline. Default implementations of the involved methods ensure that both the old and the new version of the client's code compile and behave properly.

Reference: Interface Evolution With Default Methods – Part I: Methods from our JCG partner Nicolai Parlog at the CodeFx blog.

Using Java 8 Lambda expressions in Java 7 or older

I think nobody denies the usefulness of Lambda expressions, introduced by Java 8. However, many projects are stuck with Java 7 or even older versions. Upgrading can be time consuming and costly, and if third party components are incompatible with Java 8, upgrading might not be possible at all. Besides that, the whole Android platform is stuck on Java 6 and 7. Nevertheless, there is still hope for Lambda expressions! Retrolambda provides a backport of Lambda expressions for Java 5, 6 and 7.

From the Retrolambda documentation:

Retrolambda lets you run Java 8 code with lambda expressions and method references on Java 7 or lower. It does this by transforming your Java 8 compiled bytecode so that it can run on a Java 7 runtime. After the transformation they are just a bunch of normal .class files, without any additional runtime dependencies.

To get Retrolambda running, you can use the Maven or Gradle plugin. If you want to use Lambda expressions on Android, you only have to add the following lines to your Gradle build files:

<project>/build.gradle:

buildscript {
  dependencies {
    classpath 'me.tatarka:gradle-retrolambda:2.4.0'
  }
}

<project>/app/build.gradle:

apply plugin: 'com.android.application'

// Apply the Retrolambda plugin after the Android plugin
apply plugin: 'retrolambda'

android {
  compileOptions {
    // change compatibility to Java 8 to get Java 8 IDE support
    sourceCompatibility JavaVersion.VERSION_1_8
    targetCompatibility JavaVersion.VERSION_1_8
  }
}

Reference: Using Java 8 Lambda expressions in Java 7 or older from our JCG partner Michael Scharhag at the mscharhag, Programming and Stuff blog.
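For context, here is a small, hypothetical example of the kind of Java 8 source this lets you compile and then run on a Java 7 runtime. Note that Retrolambda transforms language features, not the new Java 8 APIs, so the sketch sticks to interfaces that already existed before Java 8:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class LambdaDemo {

    public static void main(String[] args) {
        List<String> names = new ArrayList<String>();
        Collections.addAll(names, "Charlie", "alice", "Bob");

        // A lambda implementing the long-existing Comparator interface;
        // Retrolambda rewrites the lambda into ordinary Java 7 bytecode
        Collections.sort(names, (a, b) -> a.compareToIgnoreCase(b));

        for (String name : names) {
            System.out.println(name); // alice, Bob, Charlie
        }
    }
}

Something like names.stream() or names.forEach(...) would still fail on Java 7, since java.util.stream and the new default methods live in the runtime library, which is not backported.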

Code For the User, Not for Yourself

First, no matter what the methodology is, we all write software for our users (a.k.a. customers, project sponsors, end users, or clients). Second, no matter what the methodology is, we write incrementally, releasing features and bug fixes one by one. Maybe I'm saying something absolutely obvious here, but it's important to remember that each new version should first of all satisfy the needs of the user, not of us programmers. In other words, the way we decompose a big task into smaller pieces should be user-targeted, and that's why you always work top down. Let's see what I mean through a practical example.

Say I'm contracted by a friend of mine to create a word-counting command line tool very similar to wc. He promised to pay me $200 for this work, and I promised him I'd deliver the product in two increments — an alpha and a beta version. I promised him I'd release the alpha version on Saturday and the beta version on Sunday. He is going to pay me $100 after the first release and the rest after the second release. I'll write in C, and he will pay in cash. The tool is very primitive, and it only took me a few minutes to write. Take a look at it:

#include <stdio.h>
#include <unistd.h>

int main() {
  char ch;
  int count = 0;
  while (1) {
    if (read(STDIN_FILENO, &ch, 1) <= 0) {
      break;
    }
    if (ch == ' ') {
      ++count;
    }
  }
  if (count > 0) {
    ++count;
  }
  printf("%d\n", count);
  return 0;
}

But let's be professional and not forget about build automation and unit testing. Here is a simple Makefile that does them both:

all: wc test

wc: wc.c
	gcc -o wc wc.c

test: wc
	echo '' | ./wc | grep '0'
	echo 'Hello, world! How are you?' | ./wc | grep '5'

Now I run make from a command line and get this output:

$ make
echo '' | ./wc | grep '0'
0
echo 'Hello, world! How are you?' | ./wc | grep '5'
5

All clean! I'm ready to get my $200. Wait, the deal was to deliver two versions and get cash in two installments. Let's back up a little and think — how can we break this small tool into two parts? On first thought, let's release the tool itself first and build automation and testing next. Is that a good idea? Can we deliver any software without running it first with a test? How can I be sure that it works if I don't ship tests together with it? What will my friend think about me releasing anything without tests? This would be a total embarrassment. Okay, let's release the Makefile first and wc.c next. But what will my friend do with a couple of tests and no product in hand? This first release would be absolutely pointless, and I wouldn't get my $100. Now we're getting to the point of this article. What I'm trying to say is that every new increment must add some value to the product as it is perceived by the customer, not by us programmers. The Makefile is definitely a valuable artifact, but it provides no value to my friend. He doesn't need it, but I need it. Here is what I'm going to do. I'll release a skeleton of the tool, backed by the tests but with an absolutely dummy implementation. Look at it:

#include <stdio.h>

int main() {
  printf("5\n");
  return 0;
}

And I will modify the Makefile accordingly, disabling the first test to make sure the build passes. Does my tool work? Yes, it does. Does it count words? Yes, it does for some inputs. Does it have value to my friend? Obviously! He can run it from the command line, and he can pass a file as an input. He will always get the number "5" as a result of counting, though. That's a bummer, but it's an alpha version. He doesn't expect it to work perfectly.
However, it works, it is backed by tests, and it is properly packaged. What I just did is a top-down approach to design. First of all, I created something that provides value to my customer. I made sure it also satisfies my technical objectives, like proper unit test coverage and build automation. But the most important goal for me was to make sure my friend received something … and paid me.

Reference: Code For the User, Not for Yourself from our JCG partner Yegor Bugayenko at the About Programming blog.

Utility Classes Have Nothing to Do With Functional Programming

I was recently accused of being against functional programming because I call utility classes an anti-pattern. That's absolutely wrong! Well, I do consider them a terrible anti-pattern, but they have nothing to do with functional programming. I believe there are two basic reasons why. First, functional programming is declarative, while utility class methods are imperative. Second, functional programming is based on lambda calculus, where a function can be assigned to a variable. Utility class methods are not functions in this sense. I'll decode these statements in a minute. In Java, there are basically two valid alternatives to these ugly utility classes aggressively promoted by Guava, Apache Commons, and others. The first one is the use of traditional classes, and the second one is Java 8 lambdas. Now let's see why utility classes are not even close to functional programming and where this misconception is coming from.

Here is a typical example of a utility class, Math, from Java 1.0:

public class Math {
  public static double abs(double a);
  // a few dozen other methods of the same style
}

Here is how you would use it when you want to calculate the absolute value of a floating point number:

double x = Math.abs(3.1415926d);

What's wrong with it? We need a function, and we get it from class Math. The class has many useful functions inside it that can be used for many typical mathematical operations, like calculating maximum, minimum, sine, cosine, etc. It is a very popular concept; just look at any commercial or open source product. These utility classes have been used everywhere since Java was invented (this Math class was introduced in Java's first version). Well, technically there is nothing wrong. The code will work. But it is not object-oriented programming. Instead, it is imperative and procedural. Do we care? Well, it's up to you to decide. Let's see what the difference is. There are basically two different approaches: declarative and imperative. Imperative programming is focused on describing how a program operates in terms of statements that change a program's state. We just saw an example of imperative programming above. Here is another (this is pure imperative/procedural programming that has nothing to do with OOP):

public class MyMath {
  public double f(double a, double b) {
    double max = Math.max(a, b);
    double x = Math.abs(max);
    return x;
  }
}

Declarative programming focuses on what the program should accomplish without prescribing how to do it in terms of sequences of actions to be taken. This is how the same code would look in Lisp, a functional programming language:

(defun f (a b) (abs (max a b)))

What's the catch? Just a difference in syntax? Not really. There are many definitions of the difference between imperative and declarative styles, but I will try to give my own. There are basically three roles interacting in the scenario with this f function/method: a buyer, a packager of the result, and a consumer of the result. Let's say I call this function like this:

public void foo() {
  double x = this.calc(5, -7);
  System.out.println("max+abs equals to " + x);
}

private double calc(double a, double b) {
  double x = Math.f(a, b);
  return x;
}

Here, method calc() is a buyer, method Math.f() is a packager of the result, and method foo() is a consumer. No matter which programming style is used, there are always these three guys participating in the process: the buyer, the packager, and the consumer. Imagine you're a buyer and want to purchase a gift for your (girl|boy)friend.
The first option is to visit a shop, pay $50, let them package that perfume for you, and then deliver it to the friend (and get a kiss in return). This is an imperative style. The second option is to visit a shop, pay $50, and get a gift card. You then present this card to the friend (and get a kiss in return). When he or she decides to convert it to perfume, he or she will visit the shop and get it. This is a declarative style. See the difference? In the first case, which is imperative, you force the packager (a beauty shop) to find that perfume in stock, package it, and present it to you as a ready-to-be-used product. In the second scenario, which is declarative, you're just getting a promise from the shop that eventually, when it's necessary, the staff will find the perfume in stock, package it, and provide it to those who need it. If your friend never visits the shop with that gift card, the perfume will remain in stock. Moreover, your friend can use that gift card as a product itself, never visiting the shop. He or she may instead present it to somebody else as a gift or just exchange it for another card or product. The gift card itself becomes a product! So the difference is what the consumer is getting — either a product ready to be used (imperative) or a voucher for the product, which can later be converted into a real product (declarative). Utility classes, like Math from the JDK or StringUtils from Apache Commons, return products ready to be used immediately, while functions in Lisp and other functional languages return "vouchers". For example, if you call the max function in Lisp, the actual maximum between two numbers will only be calculated when you actually start using it:

(let ((x (max 1 5)))
  (print "X equals to " x))

Until this print actually starts to output characters to the screen, the function max won't be called. This x is a "voucher" returned to you when you attempted to "buy" a maximum between 1 and 5. Note, however, that nesting Java static functions one into another doesn't make them declarative. The code is still imperative, because its execution delivers the result here and now:

public class MyMath {
  public double f(double a, double b) {
    return Math.abs(Math.max(a, b));
  }
}

"Okay," you may say, "I got it, but why is declarative style better than imperative? What's the big deal?" I'm getting to it. Let me first show the difference between functions in functional programming and static methods in OOP. As mentioned above, this is the second big difference between utility classes and functional programming. In any functional programming language, you can do this:

(defun foo (x) (x 5))

Then, later, you can call that x:

(defun bar (x) (+ x 1)) ; defining function bar
(print (foo bar))       ; passing bar as an argument to foo

Static methods in Java are not functions in terms of functional programming. You can't do anything like this with a static method. You can't pass a static method as an argument to another method. Basically, static methods are procedures or, simply put, Java statements grouped under a unique name. The only way to access them is to call a procedure and pass all necessary arguments to it. The procedure will calculate something and return a result that is immediately ready for usage. And now we're getting to the final question I can hear you asking: "Okay, utility classes are not functional programming, but they look like functional programming, they work very fast, and they are very easy to use. Why not use them?
Why aim for perfection when 20 years of Java history proves that utility classes are the main instrument of each Java developer?" Besides OOP fundamentalism, which I'm very often accused of, there are a few very practical reasons (BTW, I am an OOP fundamentalist):

Testability. Calls to static methods in utility classes are hard-coded dependencies that can never be broken for testing purposes. If your class is calling FileUtils.readFile(), I will never be able to test it without using a real file on disk.

Efficiency. Utility classes, due to their imperative nature, are much less efficient than their declarative alternatives. They simply do all calculations right here and now, taking processor resources even when it's not yet necessary. Instead of returning a promise to break a string down into chunks, StringUtils.split() breaks it down right now. And it breaks it down into all possible chunks, even if only the first one is required by the "buyer".

Readability. Utility classes tend to be huge (try to read the source code of StringUtils or FileUtils from Apache Commons). The entire idea of separation of concerns, which makes OOP so beautiful, is absent in utility classes. They just put all possible procedures into one huge .java file, which becomes absolutely unmaintainable when it surpasses a dozen static methods.

To conclude, let me reiterate: utility classes have nothing to do with functional programming. They are simply bags of static methods, which are imperative procedures. Try to stay as far away from them as possible and use solid, cohesive objects, no matter how many of them you have to declare and how small they are.

Reference: Utility Classes Have Nothing to Do With Functional Programming from our JCG partner Yegor Bugayenko at the About Programming blog.
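To illustrate the testability point in Java terms, here is a small sketch of my own (the class and interface names are illustrative, not from the original post): the first version is welded to the file system, while the second can be handed a fake in a unit test.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

// Hard-coded dependency: impossible to unit-test without a real file on disk
class ReportStatic {
  String firstLine(String path) throws IOException {
    List<String> lines = Files.readAllLines(Paths.get(path));
    return lines.isEmpty() ? "" : lines.get(0);
  }
}

// Object alternative: the source of lines is a small, replaceable object
interface Lines {
  List<String> read() throws IOException;
}

class Report {
  private final Lines lines;

  Report(Lines lines) {
    this.lines = lines;
  }

  String firstLine() throws IOException {
    List<String> all = lines.read();
    return all.isEmpty() ? "" : all.get(0);
  }
}

// In a test, no disk access is needed:
// Report report = new Report(() -> Arrays.asList("stub line"));
// assert "stub line".equals(report.firstLine());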

Interpret Page Fault Metrics

A page fault occurs when a program requests an address on a page that is not in the current set of memory-resident pages. What happens when a page fault occurs is that the thread that experienced the page fault is put into a Wait state while the operating system finds the specific page on disk and restores it to physical memory. It is important to distinguish between minor/soft and major/hard page faults:

Minor – occurs when the page is resident at an alternate location in memory. It may happen because the page is no longer part of the working set but has not yet been moved to disk, or because it was resident in memory as the result of a prefetch operation.

Major – occurs when the page is located neither in physical memory nor in the memory-mapped files created by the process.

Why bother?

Poor latency – You may ignore minor faults; however, major faults can be detrimental to your application's performance in the presence of insufficient physical memory and excessive hard faults, and as such need to be fixed immediately.

Poor CPU utilization – as a direct result of thrashing.

What next?

Increase physical memory – this might be the easy one to start with, although if you already own a large real estate, chances are you need to go back to the design room, as this might just delay the problem.

Reduce overall memory usage – think right data types, de-duplication, effective (de)serialization.

Improve memory locality – think about your choice of algorithm based on data access patterns to reduce page faults.

Reference: Interpret Page Fault Metrics from our JCG partner Nitin Tripathi at the ZERO blog.
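If you want to watch these counters for your own process, one low-tech option on Linux is to read /proc/self/stat, where (per the proc man page) fields 10 and 12 hold the cumulative minor and major fault counts. A rough, Linux-only sketch in Java — treat the field positions as an assumption to verify against your kernel's documentation:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class PageFaults {

    public static void main(String[] args) throws IOException {
        // Linux only: /proc/self/stat is one line of space-separated fields
        String stat = new String(Files.readAllBytes(Paths.get("/proc/self/stat")));

        // The process name (field 2) is parenthesized and may contain spaces,
        // so parse from the last ')' onwards; field 3 (state) comes first
        String[] fields = stat.substring(stat.lastIndexOf(')') + 2).split(" ");

        // Fields 10 (minflt) and 12 (majflt); after dropping fields 1-3
        // they land at indices 7 and 9 of the remaining token array
        System.out.println("minor faults: " + fields[7]);
        System.out.println("major faults: " + fields[9]);
    }
}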

How to Extract a Date Part in SQL

The Modern SQL Twitter account (by Markus Winand) published a hint about how to extract a date part in SQL:

"The right way to get a part of a date/time is: EXTRACT(YEAR FROM CURRENT_DATE) = 2015" — Modern SQL (@ModernSQL) February 24, 2015

Is it true? Yes it is, in the SQL standard and in a variety of standards-compliant databases. But let's check what jOOQ does when you run the following program on all 18 currently supported RDBMS:

import static org.jooq.impl.DSL.currentDate;
import static org.jooq.impl.DSL.extract;
import static org.jooq.impl.DSL.using;

import java.util.stream.Stream;

import org.jooq.DatePart;
import org.jooq.SQLDialect;

public class Extract {
    public static void main(String[] args) {
        // Get all distinct SQLDialect families
        Stream
            .of(SQLDialect.values())
            .map(SQLDialect::family)
            .distinct()
            .forEach(family -> {
                System.out.println();
                System.out.println(family);

                // Get all supported date parts
                Stream
                    .of(DatePart.values())

                    // For each family / part, get the
                    // EXTRACT() function
                    .map(part -> extract(currentDate(), part))
                    .forEach(expr -> {
                        System.out.println(
                            using(family).render(expr)
                        );
                    });
            });
    }
}

The output is:

Open Source databases

DEFAULT
extract(year from current_date())
extract(month from current_date())
extract(day from current_date())
extract(hour from current_date())
extract(minute from current_date())
extract(second from current_date())

CUBRID
extract(year from current_date())
extract(month from current_date())
extract(day from current_date())
extract(hour from current_date())
extract(minute from current_date())
extract(second from current_date())

DERBY
year(current_date)
month(current_date)
day(current_date)
hour(current_date)
minute(current_date)
second(current_date)

FIREBIRD
extract(year from current_date)
extract(month from current_date)
extract(day from current_date)
extract(hour from current_date)
extract(minute from current_date)
extract(second from current_date)

H2
extract(year from current_date())
extract(month from current_date())
extract(day from current_date())
extract(hour from current_date())
extract(minute from current_date())
extract(second from current_date())

HSQLDB
extract(year from current_date)
extract(month from current_date)
extract(day from current_date)
extract(hour from current_date)
extract(minute from current_date)
extract(second from current_date)

MARIADB
extract(year from current_date())
extract(month from current_date())
extract(day from current_date())
extract(hour from current_date())
extract(minute from current_date())
extract(second from current_date())

MYSQL
extract(year from current_date())
extract(month from current_date())
extract(day from current_date())
extract(hour from current_date())
extract(minute from current_date())
extract(second from current_date())

POSTGRES
extract(year from current_date)
extract(month from current_date)
extract(day from current_date)
extract(hour from current_date)
extract(minute from current_date)
extract(second from current_date)

SQLITE
strftime('%Y', current_date)
strftime('%m', current_date)
strftime('%d', current_date)
strftime('%H', current_date)
strftime('%M', current_date)
strftime('%S', current_date)

Commercial databases

ACCESS
datepart('yyyy', date())
datepart('m', date())
datepart('d', date())
datepart('h', date())
datepart('n', date())
datepart('s', date())

ASE
datepart(yy, current_date())
datepart(mm, current_date())
datepart(dd, current_date())
datepart(hh, current_date())
datepart(mi, current_date())
datepart(ss, current_date())

DB2
year(current_date)
month(current_date)
day(current_date)
hour(current_date)
minute(current_date)
second(current_date)

HANA
extract(year from current_date)
extract(month from current_date)
extract(day from current_date)
extract(hour from current_date)
extract(minute from current_date)
extract(second from current_date)

INFORMIX
year(current year to day)
month(current year to day)
day(current year to day)
current year to day::datetime hour to hour::char(2)::int
current year to day::datetime minute to minute::char(2)::int
current year to day::datetime second to second::char(2)::int

INGRES
extract(year from current_date)
extract(month from current_date)
extract(day from current_date)
extract(hour from current_date)
extract(minute from current_date)
extract(second from current_date)

ORACLE (in jOOQ 3.5)
to_char(trunc(sysdate), 'YYYY')
to_char(trunc(sysdate), 'MM')
to_char(trunc(sysdate), 'DD')
to_char(trunc(sysdate), 'HH24')
to_char(trunc(sysdate), 'MI')
to_char(trunc(sysdate), 'SS')

ORACLE (in jOOQ 3.6)
extract(year from current_date)
extract(month from current_date)
extract(day from current_date)
extract(hour from cast(current_date as timestamp))
extract(minute from cast(current_date as timestamp))
extract(second from cast(current_date as timestamp))

SQLSERVER
datepart(yy, convert(date, current_timestamp))
datepart(mm, convert(date, current_timestamp))
datepart(dd, convert(date, current_timestamp))
datepart(hh, convert(date, current_timestamp))
datepart(mi, convert(date, current_timestamp))
datepart(ss, convert(date, current_timestamp))

SYBASE
datepart(yy, current date)
datepart(mm, current date)
datepart(dd, current date)
datepart(hh, current date)
datepart(mi, current date)
datepart(ss, current date)

Yes. The standard… If only it were implemented thoroughly…

Reference: How to Extract a Date Part in SQL from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.

GGTS: Clean up Grails 2.0 output

Have you ever noticed in Groovy/Grails Tool Suite (GGTS) that console output from a running Grails application just isn't displayed when it is exactly the same as the previous output? This can often be seen with println statements for debug purposes, e.g. in a Controller, which you would expect to print a line to the console every time, but simply don't.

class TestController {
  def index() {
    println "index called"
  }
}

When http://localhost:8080/test/test/index is invoked repeatedly in the browser, you just keep seeing only the first occurrence:

....
index called

When the same message is repeatedly sent to the console, a certain convenience feature of GGTS swallows some output – if it looks the same. It has to do with the ANSI codes – introduced in Grails 2.0 – that make some output to the console coloured or re-appear on the same line. Kris de Volder gives a nice example in JIRA issue STS-3499 of how multiple lines such as

Resolving Dependencies.
Resolving Dependencies..
Resolving Dependencies...
Resolving Dependencies....

are supposed to 'rewrite over themselves' on ANSI-supported consoles, so you'd only see

Resolving Dependencies...<increasing periods>

on the same line. Output in the GGTS non-ANSI-enabled console is stripped of these codes – which would result in additional output, which some people find unpleasant. So GGTS uses a workaround – enabled by default – that strips the beginning of the output which matches previous output and only prints the remainder. So if you were wondering why

class BootStrap {
  def init = { servletContext ->
    ['A', 'B', 'B'].each {
      println it
    }
  }
}

would only print

|Running Grails application
A
B
|Server running. Browse to http://localhost:8080/test

instead of

|Running Grails application
A
B
B
|Server running. Browse to http://localhost:8080/test

you now know this is not a bug :-) You have to disable the option 'Clean Grails 2.0 output' in the GGTS preferences under Groovy > Grails > Grails Launch to prevent this swallowing behaviour. Now your output appears in GGTS when you want it to appear :-)

Reference: GGTS: Clean up Grails 2.0 output from our JCG partner Ted Vinke at the Ted Vinke's Blog blog.