
Simple Java SSH Client

An execution of a shell command via SSH can be done in Java, in just a few lines, using jcabi-ssh:

```java
String hello = new Shell.Plain(
  new SSH(
    "ssh.example.com", 22,
    "yegor", "-----BEGIN RSA PRIVATE KEY-----..."
  )
).exec("echo 'Hello, world!'");
```

jcabi-ssh is a convenient wrapper of JSch, a well-known pure Java implementation of SSH2. Here is a more complex scenario, where I upload a file via SSH and then read back its grepped content:

```java
Shell shell = new SSH(
  "ssh.example.com", 22,
  "yegor", "-----BEGIN RSA PRIVATE KEY-----..."
);
File file = new File("/tmp/data.txt");
new Shell.Safe(shell).exec(
  "cat > d.txt && grep 'some text' d.txt",
  new FileInputStream(file),
  Logger.stream(Level.INFO, this),
  Logger.stream(Level.WARNING, this)
);
```

Class SSH, which implements interface Shell, has only one method, exec. This method accepts four arguments:

```java
interface Shell {
  int exec(
    String cmd,
    InputStream stdin,
    OutputStream stdout,
    OutputStream stderr
  );
}
```

I think it's obvious what these arguments are about. There are also a few convenient decorators that make it easier to operate with simple commands.

Shell.Safe

Shell.Safe decorates an instance of Shell and throws an exception if the exec exit code is not equal to zero. This may be very useful when you want to make sure that your command executed successfully, but don't want to duplicate if/throw in many places of your code.

```java
Shell ssh = new Shell.Safe(
  new SSH(
    "ssh.example.com", 22,
    "yegor", "-----BEGIN RSA PRIVATE KEY-----..."
  )
);
```

Shell.Verbose

Shell.Verbose decorates an instance of Shell and copies stdout and stderr to the slf4j logging facility (using jcabi-log). Of course, you can combine decorators, for example:

```java
Shell ssh = new Shell.Verbose(
  new Shell.Safe(
    new SSH(
      "ssh.example.com", 22,
      "yegor", "-----BEGIN RSA PRIVATE KEY-----..."
    )
  )
);
```

Shell.Plain

Shell.Plain is a wrapper of Shell that introduces a new exec method with only one argument, a command to execute. It also doesn't return an exit code, but stdout instead. This should be very convenient when you want to execute a simple command and just get its output (I'm combining it with Shell.Safe for safety):

```java
String login = new Shell.Plain(new Shell.Safe(ssh)).exec("whoami");
```

Download

You need a single dependency, jcabi-ssh.jar, in your Maven project (get its latest version in Maven Central):

```xml
<dependency>
  <groupId>com.jcabi</groupId>
  <artifactId>jcabi-ssh</artifactId>
</dependency>
```

The project is in GitHub. If you have any problems, just submit an issue. I'll try to help.

Related Posts
You may also find these posts interesting: Fluent JDBC Decorator, How to Retry Java Method Call on Exception, Cache Java Method Results, How to Read MANIFEST.MF Files, Java Method Logging with AOP and Annotations.

Reference: Simple Java SSH Client from our JCG partner Yegor Bugayenko at the About Programming blog.
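The bare Shell contract shown above can also be used directly when you want the exit code and both output streams in memory. The following is a minimal, hedged sketch of that usage (host, user and key are placeholders, and the com.jcabi.ssh package layout is assumed from the library's documentation):

```java
import com.jcabi.ssh.SSH;
import com.jcabi.ssh.Shell;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class ExitCodeExample {
  public static void main(String[] args) throws IOException {
    // placeholder host and credentials, not real values
    Shell shell = new SSH(
      "ssh.example.com", 22,
      "yegor", "-----BEGIN RSA PRIVATE KEY-----..."
    );
    ByteArrayOutputStream stdout = new ByteArrayOutputStream();
    ByteArrayOutputStream stderr = new ByteArrayOutputStream();
    // exec() returns the exit code of the remote command
    int exit = shell.exec(
      "ls -al /tmp",
      new ByteArrayInputStream(new byte[0]), // nothing to send to stdin
      stdout,
      stderr
    );
    if (exit != 0) {
      // without Shell.Safe we have to check the exit code ourselves
      System.err.println("command failed: " + stderr.toString("UTF-8"));
    }
    System.out.println(stdout.toString("UTF-8"));
  }
}
```

Compared to Shell.Plain, this form is more verbose, but it gives full control over stdin, stdout, stderr and the exit code.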

Getting Started with Gradle: Creating a Binary Distribution

After we have created a useful application, the odds are that we want to share it with other people. One way to do this is to create a binary distribution that can be downloaded from our website. This blog post describes how we can create a binary distribution that fulfils the following requirements:

- Our binary distribution must not use the so-called "fat jar" approach. In other words, the dependencies of our application must not be packaged into the same jar file as our application.
- Our binary distribution must contain startup scripts for *nix and Windows operating systems.
- The root directory of our binary distribution must contain the license of our application.

Let's get started.

Additional reading:

- Getting Started with Gradle: Introduction helps you to install Gradle, describes the basic concepts of a Gradle build, and describes how you can add functionality to your build by using Gradle plugins.
- Getting Started with Gradle: Our First Java Project describes how you can create a Java project by using Gradle and package your application to an executable jar file.
- Getting Started with Gradle: Dependency Management describes how you can manage the dependencies of your Gradle project.

Creating a Binary Distribution

The application plugin is a Gradle plugin that allows us to run our application, install it, and create a binary distribution that doesn't use the "fat jar" approach. We can create a binary distribution by making the following changes to the build.gradle file of the example application that we created during the previous part of my Getting Started with Gradle tutorial:

- Remove the configuration of the jar task.
- Apply the application plugin to our project.
- Configure the main class of our application by setting the value of the mainClassName property.

After we have made these changes to our build.gradle file, it looks as follows (the relevant parts are highlighted):

```groovy
apply plugin: 'application'
apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    compile 'log4j:log4j:1.2.17'
    testCompile 'junit:junit:4.11'
}

mainClassName = 'net.petrikainulainen.gradle.HelloWorld'
```

The application plugin adds five tasks to our project:

- The run task starts the application.
- The startScripts task creates startup scripts in the build/scripts directory. This task creates startup scripts for Windows and *nix operating systems.
- The installApp task installs the application into the build/install/[project name] directory.
- The distZip task creates the binary distribution and packages it into a zip file that is found in the build/distributions directory.
- The distTar task creates the binary distribution and packages it into a tar file that is found in the build/distributions directory.

We can create a binary distribution by running one of the following commands in the root directory of our project: gradle distZip or gradle distTar. If we create a binary distribution that is packaged into a zip file, we see the following output:

```
> gradle distZip
:compileJava
:processResources
:classes
:jar
:startScripts
:distZip

BUILD SUCCESSFUL

Total time: 4.679 secs
```

If we unpackage the binary distribution created by the application plugin, we get the following directory structure:

- The bin directory contains the startup scripts.
- The lib directory contains the jar file of our application and its dependencies.

You can get more information about the application plugin by reading Chapter 45. The Application Plugin of the Gradle User's Guide. We can now create a binary distribution that fulfils almost all of our requirements.
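For reference, the class that the mainClassName property points to can be as simple as the following sketch (the actual HelloWorld class of the tutorial's example project may look different, for example it may use log4j and a separate message service):

```java
package net.petrikainulainen.gradle;

/**
 * Minimal sketch of the main class referenced by the mainClassName property.
 */
public class HelloWorld {

    public static void main(String[] args) {
        System.out.println("Hello World!");
    }
}
```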
However, we still need to add the license of our application to the root directory of our binary distribution. Let's move on and find out how we can do it.

Adding the License File of Our Application to the Binary Distribution

We can add the license of our application to our binary distribution by following these steps:

- Create a task that copies the license file from the root directory of our project to the build directory.
- Add the license file to the root directory of the created binary distribution.

Let's move on and take a closer look at these steps.

Copying the License File to the Build Directory

The name of the file that contains the license of our application is LICENSE, and it is found in the root directory of our project. We can copy the license file to the build directory by following these steps:

- Create a new Copy task called copyLicense.
- Configure the source file by using the from() method of the CopySpec interface. Pass the string 'LICENSE' as a method parameter.
- Configure the target directory by using the into() method of the CopySpec interface. Pass the value of the $buildDir property as a method parameter.

After we have followed these steps, our build.gradle file looks as follows (the relevant part is highlighted):

```groovy
apply plugin: 'application'
apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    compile 'log4j:log4j:1.2.17'
    testCompile 'junit:junit:4.11'
}

mainClassName = 'net.petrikainulainen.gradle.HelloWorld'

task copyLicense(type: Copy) {
    from "LICENSE"
    into "$buildDir"
}
```

Additional Information:

- The API documentation of the Copy task
- Section 16.6 Copying files of the Gradle User's Guide

We have now created a task that copies the LICENSE file from the root directory of our project to the build directory. However, when we run the command gradle distZip in the root directory of our project, we see the following output:

```
> gradle distZip
:compileJava
:processResources
:classes
:jar
:startScripts
:distZip

BUILD SUCCESSFUL

Total time: 4.679 secs
```

In other words, our new task is not invoked, and this naturally means that the license file is not included in our binary distribution. Let's fix this problem.

Adding the License File to the Binary Distribution

We can add the license file to the created binary distribution by following these steps:

- Transform the copyLicense task from a Copy task to a "regular" Gradle task by removing the string '(type: Copy)' from its declaration.
- Modify the implementation of the copyLicense task by following these steps:
  - Configure the output of the copyLicense task. Create a new File object that points to the license file found in the build directory and set it as the value of the outputs.file property.
  - Copy the license file from the root directory of our project to the build directory.

The application plugin sets a CopySpec property called applicationDistribution on our project. We can use it to include the license file in the created binary distribution. We can do this by following these steps:

- Configure the location of the license file by using the from() method of the CopySpec interface and pass the output of the copyLicense task as a method parameter.
- Configure the target directory by using the into() method of the CopySpec interface and pass an empty String as a method parameter.

After we have followed these steps, our build.gradle file looks as follows (the relevant part is highlighted):

```groovy
apply plugin: 'application'
apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    compile 'log4j:log4j:1.2.17'
    testCompile 'junit:junit:4.11'
}

mainClassName = 'net.petrikainulainen.gradle.HelloWorld'

task copyLicense {
    outputs.file new File("$buildDir/LICENSE")
    doLast {
        copy {
            from "LICENSE"
            into "$buildDir"
        }
    }
}

applicationDistribution.from(copyLicense) {
    into ""
}
```

Additional Reading:

- The API documentation of the doLast() action of the Task
- Section 45.5 Including other resources in the distribution of the Gradle User's Guide
- The Groovydoc of the ApplicationPluginConvention class

When we run the command gradle distZip in the root directory of our project, we see the following output:

```
> gradle distZip
:copyLicense
:compileJava
:processResources
:classes
:jar
:startScripts
:distZip

BUILD SUCCESSFUL

Total time: 5.594 secs
```

As we can see, the copyLicense task is now invoked, and if we unpackage our binary distribution, we notice that the LICENSE file is found in its root directory. Let's move on and summarize what we have learned from this blog post.

Summary

This blog post taught us three things:

- We learned that we can create a binary distribution by using the application plugin.
- We learned how we can copy a file from the source directory to the target directory by using the Copy task.
- We learned how we can add files to the binary distribution that is created by the application plugin.

If you want to play around with the example application of this blog post, you can get it from Github.

Reference: Getting Started with Gradle: Creating a Binary Distribution from our JCG partner Petri Kainulainen at the Petri Kainulainen blog.

Stateless Session for multi-tenant application using Spring Security

Once upon a time, I published an article explaining the principle behind building a Stateless Session. Coincidentally, we are working on the same task again, but this time for a multi-tenant application. This time, instead of building the authentication mechanism ourselves, we integrate our solution into the Spring Security framework. This article will explain our approach and implementation.

Business Requirement

We need to build an authentication mechanism for a SaaS application. Each customer accesses the application through a dedicated sub-domain. Because the application will be deployed on the cloud, it is pretty obvious that Stateless Session is the preferred choice, because it allows us to deploy additional instances without hassle.

In the project glossary, each customer is one site and each application is one app. For example, a site may be Microsoft or Google; an app may be Gmail, GooglePlus or Google Drive. The sub-domain that a user uses to access the application includes both app and site. For example, it may look like microsoft.mail.somedomain.com or google.map.somedomain.com.

Once a user logs in to one app, they can access any other app, as long as it is for the same site. The session will time out after a certain period of inactivity.

Background

Stateless Session

A stateless application with a timeout is nothing new. The Play framework has been stateless from its first release in 2007. We also switched to Stateless Session many years ago. The benefit is pretty clear: your load balancer does not need stickiness and is therefore easier to configure. As the session is on the browser, we can simply bring in new servers to boost capacity immediately.

However, the disadvantage is that your session cannot be very big or very confidential anymore. Compared to a stateful application, where the session is stored on the server, a stateless application stores the session in an HTTP cookie, which cannot grow beyond 4KB. Moreover, as it is a cookie, it is recommended that developers store only text or digits in the session rather than complicated data structures. The session is stored in the browser and transferred to the server on every single request. Therefore, we should keep the session as small as possible and avoid placing any confidential data in it. To put it short, a stateless session forces developers to change the way the application uses the session: it should hold the user identity rather than act as a convenient store.

Security Framework

The idea behind a security framework is pretty simple: it helps to identify the principal that is executing code, checks whether it has permission to execute certain services, and throws exceptions if it does not. In terms of implementation, the security framework integrates with your services in an AOP-style architecture. Every check is done by the framework before the method call. The mechanism for implementing the permission check may be a filter or a proxy.

Normally, a security framework stores principal information in thread storage (ThreadLocal in Java). That is why it can give developers static-method access to the principal at any time. I think this is something developers should know well; otherwise, they may implement permission checks or retrieve the principal in background jobs that run in separate threads. In that situation, the security framework will obviously not be able to find the principal.

Single Sign On

Single Sign On is mostly implemented using an authentication server. It is independent of the mechanism used to implement the session (stateless or stateful). Each application still maintains its own session.
On first access to an application, the application contacts the authentication server to authenticate the user and then creates its own session.

Food for Thought

Framework or build from scratch

As stateless session is the standard, the biggest concern for us was whether or not to use a security framework. If we use one, then Spring Security is the cheapest and fastest solution, because we already use the Spring Framework in our application.

On the benefit side, any security framework gives us a quick and declarative way to declare access rules. However, these will not be business-logic-aware access rules. For example, we can define that only an Agent can access the products, but we cannot define that one agent can only access the products that belong to him. In this situation, we have two choices: building our own business-logic permission check from scratch, or building two layers of permission checks, one purely role based and one business-logic aware. After comparing the two approaches, we chose the latter because it is cheaper and faster to build.

Our application will function similarly to any other Spring Security application. That means the user will be redirected to the login page when accessing protected content without a session. If the session exists, the user will get status code 403. If the user accesses protected content with a valid role but unauthorized records, he will get 401 instead.

Authentication

The next concern is how to integrate our authentication and authorization mechanism with Spring Security. A standard Spring Security application may process a request as below:

The diagram is simplified but still gives us a rough idea of how things work. If the request is a login or logout, the top two filters update the server-side session. After that, another filter checks the access permission for the request. If the permission check succeeds, another filter stores the user session in thread storage. After that, the controller executes code with a properly set up environment.

For us, we prefer to create our own authentication mechanism, because the credentials need to contain the website domain. For example, we may have Joe from Xerox and Joe from WDS accessing the SaaS application. As Spring Security takes control of preparing the authentication token and authentication provider, we find it cheaper to implement login and logout ourselves at the controller level rather than spending effort on customizing Spring Security.

As we implement stateless session, there are two pieces of work we need to implement here. First, we need to construct the session from the cookie before any authorization check. We also need to update the session timestamp so that the session is refreshed every time the browser sends a request to the server.

Because of the earlier decision to do authentication in the controller, we face a challenge here. We should not refresh the session before the controller executes, because we do authentication there. However, some controller methods are attached to a view resolver that writes to the output stream immediately, so we have no chance to refresh the cookie after the controller has executed. Finally, we chose a slightly compromised solution using HandlerInterceptorAdapter. This handler interceptor allows us to do extra processing before and after each controller method. We refresh the cookie after the controller method if the method is for authentication, and before the controller method for any other purpose. The new diagram should look like this:

Cookie

To be meaningful, the user should have only one session cookie.
As the session timestamp changes after each request, we need to update the session cookie on every single response. Per the HTTP protocol, this can only be done if the cookies match by name, path and domain.

When we got this business requirement, we preferred to try a new way of implementing SSO by sharing the session cookie. If every application is under the same parent domain and understands the same session cookie, effectively we have a global session. Therefore, there is no need for an authentication server any more. To achieve that vision, we must set the cookie domain to the parent domain of all applications.

Performance

Theoretically, a stateless session should be slower. Assuming that the server implementation stores the session table in memory, passing in the JSESSIONID cookie only triggers a one-time read of the object from the session table and an optional one-time write to update the last access time (for calculating session timeout). In contrast, for a stateless session, we need to calculate the hash to validate the session cookie, load the principal from the database, assign a new timestamp and hash again.

However, with today's server performance, hashing should not add much delay to the server response time. The bigger concern is querying data from the database, and for this we can speed things up by using a cache. In the best-case scenario, a stateless session can perform close to a stateful one if no DB call is made: instead of loading from a session table maintained by the container, the session is loaded from an internal cache maintained by the application. In the worst-case scenario, requests are routed to many different servers and the principal object is stored in many instances. This adds the effort of loading the principal into the cache once per server. While the cost may be high, it occurs only once in a while. If we apply sticky routing to the load balancer, we should be able to achieve best-case performance. With this, we can see the stateless session cookie as a mechanism similar to JSESSIONID, but with the fallback ability to reconstruct the session object.

Implementation

I have published a sample of this implementation to the https://github.com/tuanngda/sgdev-blog repository. Kindly check the stateless-session project. The project requires a MySQL database to work, so kindly set up a schema following build.properties or modify the properties file to fit your schema. The project includes a Maven configuration to start up a Tomcat server at port 8686; therefore, you can simply type mvn cargo:run to start up the server. Here is the project hierarchy:

I packed both the Tomcat 7 server and the database configuration so that it works without any other installation except MySQL. The Tomcat configuration file TOMCAT_HOME/conf/context.xml contains the DataSource declaration and the project properties file.

Now, let's look closer at the implementation.

Session

We need two session objects: one represents the session cookie, and one represents the session object that we build internally in the Spring Security framework:

```java
public class SessionCookieData {
  private int userId;
  private String appId;
  private int siteId;
  private Date timeStamp;
}
```

and

```java
public class UserSession {
  private User user;
  private Site site;

  public SessionCookieData generateSessionCookieData(){
    return new SessionCookieData(user.getId(), user.getAppId(), site.getId());
  }
}
```

With this combo, we have the objects to store the session in the cookie and in memory. The next step is to implement a method that allows us to build the session object from the cookie data.
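Before looking at that, a quick aside on the hashing mentioned in the Performance section above: the cookie value has to be validated somehow when it comes back from the browser. One common approach (this sketch is an assumption for illustration; the sample project may protect the cookie differently) is to HMAC-sign the serialized SessionCookieData and verify the signature on every request:

```java
import java.nio.charset.Charset;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import javax.xml.bind.DatatypeConverter;

/**
 * Hypothetical helper that signs and verifies the session cookie value
 * with HMAC-SHA256. The "value|signature" layout and the class name are
 * illustrative only.
 */
public class CookieSigner {

    private final SecretKeySpec key;

    public CookieSigner(byte[] secret) {
        this.key = new SecretKeySpec(secret, "HmacSHA256");
    }

    /** Appends an HMAC signature to the plain cookie value. */
    public String sign(String plainValue) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(this.key);
        byte[] digest = mac.doFinal(plainValue.getBytes(Charset.forName("UTF-8")));
        return plainValue + "|" + DatatypeConverter.printBase64Binary(digest);
    }

    /** Returns the plain value if the signature matches, or null otherwise. */
    public String verify(String signedValue) throws Exception {
        int separator = signedValue.lastIndexOf('|');
        if (separator < 0) {
            return null;
        }
        String plainValue = signedValue.substring(0, separator);
        return sign(plainValue).equals(signedValue) ? plainValue : null;
    }
}
```

Back to the main flow: the service below rebuilds the UserSession from the cookie data.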
public interface UserSessionService { public UserSession getUserSession(SessionCookieData sessionData); } Now, one more service to retrieve and generate cookie from cookie data. public class SessionCookieService {public Cookie generateSessionCookie(SessionCookieData cookieData, String domain);public SessionCookieData getSessionCookieData(Cookie sessionCookie);public Cookie generateSignCookie(Cookie sessionCookie); } Up to this point, We have the service that help us to do the conversion Cookie –> SessionCookieData –> UserSession and Session –> SessionCookieData –> Cookie Now, we should have enough material to integrate stateless session with Spring Security framework Integrate with Spring security At first, we need to add a filter to construct session from cookie. Because this should happen before permission check, it is better to use AbstractPreAuthenticatedProcessingFilter @Component(value="cookieSessionFilter") public class CookieSessionFilter extends AbstractPreAuthenticatedProcessingFilter { ... @Override protected Object getPreAuthenticatedPrincipal(HttpServletRequest request) { SecurityContext securityContext = extractSecurityContext(request); if (securityContext.getAuthentication()!=null  && securityContext.getAuthentication().isAuthenticated()){ UserAuthentication userAuthentication = (UserAuthentication) securityContext.getAuthentication(); UserSession session = (UserSession) userAuthentication.getDetails(); SecurityContextHolder.setContext(securityContext); return session; } return new UserSession(); } ... } The filter above construct principal object from session cookie. The filter also create a PreAuthenticatedAuthenticationToken that will be used later for authentication. It is obviously that Spring will not understand this Principal. Therefore, we need to provide our own AuthenticationProvider that manage to authenticate user based on this principal. public class UserAuthenticationProvider implements AuthenticationProvider { @Override public Authentication authenticate(Authentication authentication) throws AuthenticationException { PreAuthenticatedAuthenticationToken token = (PreAuthenticatedAuthenticationToken) authentication;UserSession session = (UserSession)token.getPrincipal();if (session != null && session.getUser() != null){ SecurityContext securityContext = SecurityContextHolder.getContext(); securityContext.setAuthentication(new UserAuthentication(session)); return new UserAuthentication(session); }throw new BadCredentialsException("Unknown user name or password"); } } This is Spring way. User is authenticated if we manage to provide a valid Authentication object. Practically, we let user login by session cookie for every single request. However, there are times that we need to alter user session and we can do it as usual in controller method. We simply overwrite the SecurityContext, which is setup earlier in filter. It also stores the UserSession to SecurityContextHolder, which helps to setup environment. Because it is a pre-authentication filter, it should work well for most of requests, except authentication. 
We should update the SecurityContext in authentication method manually: public ModelAndView login(String login, String password, String siteCode) throws IOException{ if(StringUtils.isEmpty(login) || StringUtils.isEmpty(password)){ throw new HttpServerErrorException(HttpStatus.BAD_REQUEST, "Missing login and password"); } User user = authService.login(siteCode, login, password); if(user!=null){ SecurityContext securityContext = SecurityContextHolder.getContext(); UserSession userSession = new UserSession(); userSession.setSite(user.getSite()); userSession.setUser(user); securityContext.setAuthentication(new UserAuthentication(userSession)); }else{ throw new HttpServerErrorException(HttpStatus.UNAUTHORIZED, "Invalid login or password"); } return new ModelAndView(new MappingJackson2JsonView()); } Refresh Session Up to now, you may notice that we have never mentioned the writing of cookie. Provided that we have a valid Authentication object and our SecurityContext contain the UserSession, it is important that we need to send this information to browser. Before the HttpServletResponse is generated, we must attach the session cookie to it. This cookie with similar domain and path will replace the older session that browser is keeping. As discussed above, refreshing session is better to be done after controller method because we implement authentication here. However, the challenge is caused by ViewResolver of Spring MVC. Sometimes it write to OutputStream so soon that any attempt to add cookie to response will be useless. Finally, we come up with a compromise solution that refresh session before controller methods for normal requests and after controller methods for authentication requests. To know whether requests is for authentication, we place an annotation at the authentication methods. @Override public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception { if (handler instanceof HandlerMethod){ HandlerMethod handlerMethod = (HandlerMethod) handler; SessionUpdate sessionUpdateAnnotation = handlerMethod.getMethod().getAnnotation(SessionUpdate.class); if (sessionUpdateAnnotation == null){ SecurityContext context = SecurityContextHolder.getContext(); if (context.getAuthentication() instanceof UserAuthentication){ UserAuthentication userAuthentication = (UserAuthentication)context.getAuthentication(); UserSession session = (UserSession) userAuthentication.getDetails(); persistSessionCookie(response, session); } } } return true; }@Override public void postHandle(HttpServletRequest request, HttpServletResponse response, Object handler, ModelAndView modelAndView) throws Exception { if (handler instanceof HandlerMethod){ HandlerMethod handlerMethod = (HandlerMethod) handler; SessionUpdate sessionUpdateAnnotation = handlerMethod.getMethod().getAnnotation(SessionUpdate.class); if (sessionUpdateAnnotation != null){ SecurityContext context = SecurityContextHolder.getContext(); if (context.getAuthentication() instanceof UserAuthentication){ UserAuthentication userAuthentication = (UserAuthentication)context.getAuthentication(); UserSession session = (UserSession) userAuthentication.getDetails(); persistSessionCookie(response, session); } } } } Conclusion The solution works well for us but we do not have the confident that this is the best practices possible. However, it is simple and does not cost us much effort to implement (around 3 days include testing). 
Kindly give feedback if you have any better ideas for building stateless sessions with Spring.

Reference: Stateless Session for multi-tenant application using Spring Security from our JCG partner Nguyen Anh Tuan at the Developers Corner blog.

Java Method Logging with AOP and Annotations

Sometimes, I want to log (through slf4j and log4j) every execution of a method, seeing what arguments it receives, what it returns and how much time every execution takes. This is how I'm doing it, with help of AspectJ, jcabi-aspects and Java annotations:

```java
public class Foo {
  @Loggable
  public int power(int x, int p) {
    return (int) Math.pow(x, p); // cast added so the example compiles: Math.pow() returns a double
  }
}
```

This is what I see in the log4j output:

```
[INFO] com.example.Foo #power(2, 10): 1024 in 12μs
[INFO] com.example.Foo #power(3, 3): 27 in 4μs
```

Nice, isn't it? Now, let's see how it works.

Annotation with Runtime Retention

Annotations are a technique introduced in Java 5. An annotation is a meta-programming instrument that doesn't change the way code works, but gives marks to certain elements (methods, classes or variables). In other words, annotations are just markers attached to the code that can be seen and read. Some annotations are designed to be seen at compile time only — they don't exist in .class files after compilation. Others remain visible after compilation and can be accessed at runtime. For example, @Override is of the first type (its retention type is SOURCE), while @Test from JUnit is of the second type (retention type is RUNTIME). @Loggable — the one I'm using in the script above — is an annotation of the second type, from jcabi-aspects. It stays with the bytecode in the .class file after compilation.

Again, it is important to understand that even though method power() is annotated and compiled, it doesn't send anything to slf4j so far. It just contains a marker saying "please, log my execution".

Aspect Oriented Programming (AOP)

AOP is a useful technique that enables adding executable blocks to the source code without explicitly changing it. In our example, we don't want to log method execution inside the class. Instead, we want some other class to intercept every call to method power(), measure its execution time and send this information to slf4j. We want that interceptor to understand our @Loggable annotation and log every call to that specific method power(). And, of course, the same interceptor should be used for other methods where we'll place the same annotation in the future.

This case perfectly fits the original intent of AOP — to avoid re-implementation of some common behavior in multiple classes. Logging is a supplementary feature to our main functionality, and we don't want to pollute our code with multiple logging instructions. Instead, we want logging to happen behind the scenes. In terms of AOP, our solution can be explained as creating an aspect that cross-cuts the code at certain join points and applies an around advice that implements the desired functionality.

AspectJ

Let's see what these magic words mean. But, first, let's see how jcabi-aspects implements them using AspectJ (it's a simplified example; the full code you can find in MethodLogger.java):

```java
@Aspect
public class MethodLogger {
  @Around("execution(* *(..)) && @annotation(Loggable)")
  public Object around(ProceedingJoinPoint point) {
    long start = System.currentTimeMillis();
    Object result = point.proceed();
    Logger.info(
      "#%s(%s): %s in %[msec]s",
      MethodSignature.class.cast(point.getSignature()).getMethod().getName(),
      point.getArgs(),
      result,
      System.currentTimeMillis() - start
    );
    return result;
  }
}
```

This is an aspect with a single around advice, around(), inside. The aspect is annotated with @Aspect and the advice is annotated with @Around. As discussed above, these annotations are just markers in .class files; they don't do anything except provide some meta-information to those who are interested at runtime.
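For context, a runtime-retention marker annotation like the @Loggable that the pointcut above refers to is, at its core, only a couple of lines long. This is a minimal sketch; the real com.jcabi.aspects.Loggable offers additional attributes (logging level, precision, and so on) and can also be placed on types:

```java
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
 * Stripped-down, hypothetical version of a @Loggable-style marker annotation.
 */
@Documented
@Retention(RetentionPolicy.RUNTIME) // survives compilation and is visible to the weaver
@Target(ElementType.METHOD)         // may be placed on methods
public @interface Loggable {
}
```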
The @Around annotation has one parameter, which — in this case — says that the advice should be applied to a method if:

- its visibility modifier is * (public, protected or private);
- its name is * (any name);
- its arguments are .. (any arguments); and
- it is annotated with @Loggable.

When a call to an annotated method is to be intercepted, method around() executes before the actual method. When a call to method power() is to be intercepted, method around() receives an instance of class ProceedingJoinPoint and must return an object, which will be used as the result of method power(). In order to call the original method, power(), the advice has to call proceed() on the join point object.

We compile this aspect and make it available on the classpath together with our main file Foo.class. So far so good, but we need to take one last step in order to put our aspect into action — we should apply our advice.

Binary Aspect Weaving

Aspect weaving is the name of the advice-applying process. The aspect weaver modifies the original code by injecting calls to aspects. AspectJ does exactly that. We give it two binary Java classes, Foo.class and MethodLogger.class; it gives back three — a modified Foo.class, Foo$AjcClosure1.class and an unmodified MethodLogger.class.

In order to understand which advices should be applied to which methods, the AspectJ weaver uses annotations from the .class files. Also, it uses reflection to browse all classes on the classpath. It analyzes which methods satisfy the conditions from the @Around annotation. Of course, it finds our method power().

So, there are two steps. First, we compile our .java files using javac and get two files. Then, AspectJ weaves/modifies them and creates its own extra class. Our Foo class looks something like this after weaving:

```java
public class Foo {
  private final MethodLogger logger;
  @Loggable
  public int power(int x, int p) {
    return this.logger.around(point);
  }
  private int power_aroundBody(int x, int p) {
    return (int) Math.pow(x, p);
  }
}
```

The AspectJ weaver moves our original functionality to a new method, power_aroundBody(), and redirects all power() calls to the aspect class MethodLogger. Instead of one method power() in class Foo, we now have four classes working together. From now on, this is what happens behind the scenes on every call to power():

The original functionality of method power() is indicated by the small green lifeline on the diagram. As you see, the aspect weaving process connects classes and aspects together, transferring calls between them through join points. Without weaving, both classes and aspects are just compiled Java binaries with attached annotations.

jcabi-aspects

jcabi-aspects is a JAR library that contains the Loggable annotation and the MethodLogger aspect (by the way, there are many more aspects and annotations). You don't need to write your own aspect for method logging.
Just add a few dependencies to your classpath and configure jcabi-maven-plugin for aspect weaving (get their latest versions in Maven Central):

```xml
<project>
  <dependencies>
    <dependency>
      <groupId>com.jcabi</groupId>
      <artifactId>jcabi-aspects</artifactId>
    </dependency>
    <dependency>
      <groupId>org.aspectj</groupId>
      <artifactId>aspectjrt</artifactId>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>com.jcabi</groupId>
        <artifactId>jcabi-maven-plugin</artifactId>
        <executions>
          <execution>
            <goals>
              <goal>ajc</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
```

Since this weaving procedure takes a lot of configuration effort, I created a convenient Maven plugin with an ajc goal, which does the entire aspect-weaving job. You can use AspectJ directly, but I recommend that you use jcabi-maven-plugin.

That's it. Now you can use the @com.jcabi.aspects.Loggable annotation and your methods will be logged through slf4j. If something doesn't work as explained, don't hesitate to submit a Github issue.

Related Posts
You may also find these posts interesting: How to Retry Java Method Call on Exception, Cache Java Method Results, Get Rid of Java Static Loggers, Limit Java Method Execution Time, Simple Java SSH Client.

Reference: Java Method Logging with AOP and Annotations from our JCG partner Yegor Bugayenko at the About Programming blog.

AngularJS: Introducing modules, controllers, services

In my previous post AngularJS Tutorial: Getting Started with AngularJS we have seen how to setup an application using SpringBoot + AngularJS + WebJars. But it’s a kind of quick start tutorial where I haven’t explained much about AngularJS modules, controllers and services. Also it is a single screen (only one route) application. In this part-2 tutorial, we will take a look at what are Angular modules, controllers and services and how to configure and use them. Also we will look into how to use ngRoute to build multi-screen application. If we take a look at the code that we developed in previous post, especially in controllers.js, we clubbed the client side controller logic and business logic(of-course we don’t have any biz logic here !) in our Controllers which is not good. As java developers we get used to have dozen layers and we love making things complex and complain Java is complex. But here in AngularJS things looks simpler, let’s make things little bit complex. I am just kidding ! Even if you put all your logic in single place as we did in controllers.js, it will work and acceptable for simple applications. But if you are going to develop large enterprise application (who said enterprise applications should be large…hmm..ok..continue..) then things quickly become messy. And believe me working with a messy large JavaScript codebase is lot more painful than messy large Java codebase. So it is a good idea to separate the business logic from controller logic. In AngularJS we can organize application logic into modules and make them work together using dependency injection. Lets see how to create a module in AngularJS. var myModule = angular.module('moduleName',['dependency1','dependency2']); This is how we can create a module by using angular.module() function by passing the module name and specifying a list of dependencies if there are any. Once we define a module we can get handle of the module as follows: var myModule = angular.module('moduleName'); Observe that there is no second argument here which means we are getting the reference of a predefined angular module. If you include the second argument, which is an array, then it means you are defining the new module. Once we define a new module we can create controllers in that module as follows: module.controller('ControllerName',['dependency1','dependency2', function(dependency1, dependency2){ //logic }]); For example, lets see how we to create TodoController. var myApp = angular.module('myApp',['ngRoute']); myApp.controller('TodoController',['$scope','$http',function($scope,$http){ //logic }]); Here we are creating TodoController and providing $scope and $http as dependencies which are built-in angularjs services. We can also create the same controller as follows: myApp.controller('TodoController',function($scope,$http){ //logic });Observe that we are directly passing a function as a second argument instead of an array which has an array of dependencies followed by a function which takes the same dependencies as arguments and it works exactly same as array based declaration. But why do we need to do more typing when both do the same thing?? AngularJS injects the dependencies by name, that means when you define $http as a dependency then AngularJS looks for a registered service with name ‘$http‘. But majority of the real world applications use JavaScript code minification tools to reduce the size. Those tools may rename your variables to short variable names. 
For example: myApp.controller('TodoController',function($scope,$http){ //logic }); The preceding code might be minified into: myApp.controller('TodoController',function($s,$h){ //logic }); Then AngularJS tries to look for registered services with names $s and $h instead of $scope and $http and eventually it will fail. To overcome this issue we define the names of services as string literals in array and specify the same names as function arguments. With this even after JavaScript minifies the function argument names, string literals remains same and AngularJS picks right services to inject. That means you can write the controller as follows: myApp.controller('TodoController',['$scope','$http',function($s,$h){ //here $s represents $scope and $h represents $http services }]); So always prefer to use array based dependencies approach. Ok, now we know how to create controllers. Lets see how we can add some functionality to our controllers. myApp.controller('TodoController',['$scope','$http',function($scope,$http){ var todoCtrl = this; todoCtrl.todos = []; todoCtrl.loadTodos = function(){ $http.get('/todos.json').success(function(data){ todoCtrl.todos = data; }).error(function(){ alert('Error in loading Todos'); }); }; todoCtrl.loadTodos(); }]); Here in our TodoController we defined a variable todos which initially holds an empty array and we defined loadTodos() function which loads todos from RESTful services using $http.get() and once response received we are setting the todos array to our todos variable. Simple and straight forward. Why can’t we directly assign the response of $http.get() to our todos variable like todoCtrl.todos = $http.get(‘/todos.json’);?? Because $http.get(‘/todos.json’) returns a promise, not actual response data. So you have to get data from success handler function. Also note that if you want to perform any logic after receiving data from $http.get() you should put your logic inside success handler function only. For example if you are deleting a Todo item and then reload the todos you should NOT do as follows: $http.delete('/todos.json/1').success(function(data){ //hurray, deleted }).error(function(){ alert('Error in deleting Todo'); }); todoCtrl.loadTodos(); Here you might assume that after delete is done it will loadTodos() and the deleted Todo item won’t show up, but that won’t work like that. You should do it as follows: $http.delete('/todos.json/1').success(function(data){ //hurray, deleted todoCtrl.loadTodos(); }).error(function(){ alert('Error in deleting Todo'); }); Lets move on to how to create AngularJS services. Creating services is also similar to controllers but AngularJS provides multiple ways for creating services. 
There are 3 ways to create AngularJS services:Using module.factory() Using module.service() Using module.provider()Using module.factory() We can create a service using module.factory() as follows: angular.module('myApp') .factory('UserService', ['$http',function($http) { var service = { user: {}, login: function(email, pwd) { $http.get('/auth',{ username: email, password: pwd}).success(function(data){ service.user = data; }); }, register: function(newuser) { return $http.post('/users', newuser); } }; return service; }]); Using module.service() We can create a service using module.service() as follows: angular.module('myApp') .service('UserService', ['$http',function($http) { var service = this; this.user = {}; this.login = function(email, pwd) { $http.get('/auth',{ username: email, password: pwd}).success(function(data){ service.user = data; }); }; this.register = function(newuser) { return $http.post('/users', newuser); }; }]); Using module.provider() We can create a service using module.provider() as follows: angular.module('myApp') .provider('UserService', function() { return { this.$get = function($http) { var service = this; this.user = {}; this.login = function(email, pwd) { $http.get('/auth',{ username: email, password: pwd}).success(function(data){ service.user = data; }); }; this.register = function(newuser) { return $http.post('/users', newuser); }; } } }); You can find good documentation on which method is appropriate in which scenario at http://www.ng-newsletter.com/advent2013/#!/day/1. Let us create a TodoService in our services.js file as follows: var myApp = angular.module('myApp');myApp.factory('TodoService', function($http){ return { loadTodos : function(){ return $http.get('todos'); }, createTodo: function(todo){ return $http.post('todos',todo); }, deleteTodo: function(id){ return $http.delete('todos/'+id); } } }); Now inject our TodoService into our TodoController as follows: myApp.controller('TodoController', [ '$scope', 'TodoService', function ($scope, TodoService) { $scope.newTodo = {}; $scope.loadTodos = function(){ TodoService.loadTodos(). success(function(data, status, headers, config) { $scope.todos = data; }) .error(function(data, status, headers, config) { alert('Error loading Todos'); }); }; $scope.addTodo = function(){ TodoService.createTodo($scope.newTodo). success(function(data, status, headers, config) { $scope.newTodo = {}; $scope.loadTodos(); }) .error(function(data, status, headers, config) { alert('Error saving Todo'); }); }; $scope.deleteTodo = function(todo){ TodoService.deleteTodo(todo.id). success(function(data, status, headers, config) { $scope.loadTodos(); }) .error(function(data, status, headers, config) { alert('Error deleting Todo'); }); }; $scope.loadTodos(); }]); Now we have separated our controller logic and business logic using AngularJS controllers and services and make them work together using Dependency Injection. In the beginning of the post I said we will be developing a multi-screen application demonstrating ngRoute functionality. In addition to Todos, let us add PhoneBook feature to our application where we can maintain list of contacts. First, let us build the back-end functionality for PhoneBook REST services. Create Person JPA entity, its Spring Data JPA repository and Controller. 
@Entity public class Person implements Serializable { private static final long serialVersionUID = 1L; @Id @GeneratedValue(strategy=GenerationType.AUTO) private Integer id; private String email; private String password; private String firstname; private String lastname; @Temporal(TemporalType.DATE) private Date dob; //setters and getters }public interface PersonRepository extends JpaRepository<Person, Integer>{}@RestController @RequestMapping("/contacts") public class ContactController { @Autowired private PersonRepository personRepository; @RequestMapping("") public List<Person> persons() { return personRepository.findAll(); } } Now let us create AngularJS service and controller for Contacts. Observe that we will be using module.service() approach this time. myApp.service('ContactService', ['$http',function($http){ this.getContacts = function(){ var promise = $http.get('contacts') .then(function(response){ return response.data; },function(response){ alert('error'); }); return promise; } } }]);myApp.controller('ContactController', [ '$scope', 'ContactService', function ($scope, ContactService) { ContactService.getContacts().then(function(data) { $scope.contacts = data; }); } ]); Now we need to configure our application routes in app.js file. var myApp = angular.module('myApp',['ngRoute']);myApp.config(['$routeProvider','$locationProvider', function($routeProvider, $locationProvider) { $routeProvider .when('/home', { templateUrl: 'templates/home.html', controller: 'HomeController' }) .when('/contacts', { templateUrl: 'templates/contacts.html', controller: 'ContactController' }) .when('/todos', { templateUrl: 'templates/todos.html', controller: 'TodoController' }) .otherwise({ redirectTo: 'home' }); }]); Here we have configured our application routes on $routeProvider inside myApp.config() function. When url matches with any of the routes then corresponding template content will be rendered in <div ng-view></div> div in our index.html. If the url doesn’t match with any of the configured urls then it will be routed to ‘home‘ as specified in otherwise() configuration. Our templates/home.html won’t have anything for now and templates/todos.html file will be same as home.html in previous post. The new templates/contacts.html will just have a table listing contacts as follows: <table class="table table-striped table-bordered table-hover"> <thead> <tr> <th>Name</th> <th>Email</th> </tr> </thead> <tbody> <tr ng-repeat="contact in contacts"> <td>{{contact.firstname + ' '+ (contact.lastname || '')}}</td> <td>{{contact.email}}</td> </tr> </tbody> </table> Now let us create navigation links to Todos, Contacts pages in our index.html page <body>. <div class="container"> <div class="row"> <div class="col-md-3 sidebar"> <div class="list-group"> <a href="#home" class="list-group-item"> <i class="fa fa-home fa-lg"></i> Home </a> <a href="#contacts" class="list-group-item"> <i class="fa fa-user fa-lg"></i> Contacts </a> <a href="#todos" class="list-group-item"> <i class="fa fa-indent fa-lg"></i> ToDos </a> </div> </div> <div class="col-md-9 col-md-offset-3"> <div ng-view></div> </div> </div> </div> By now we have a multi-screen application and we understood how to use modules, controllers and services. You can find the code for this article at https://github.com/sivaprasadreddy/angularjs-samples/tree/master/angularjs-series/angularjs-part2 Our next article would be on how to use $resource instead of $http to consume REST services. 
We will also look at updating our application to use the more powerful ui-router module instead of ngRoute. Stay tuned!

Reference: AngularJS: Introducing modules, controllers, services from our JCG partner Siva Reddy at the My Experiments on Technology blog.
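As a complementary backend sketch for the article above: the 'todos' endpoints that the AngularJS TodoService calls ($http.get('todos'), $http.post('todos', todo) and $http.delete('todos/' + id)) could be served by a Spring MVC controller along these lines. Todo and TodoRepository are assumed here (a JPA entity plus a Spring Data repository); the controller in the actual sample repository may differ:

```java
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

/**
 * Hypothetical REST endpoints backing the AngularJS TodoService.
 */
@RestController
@RequestMapping("/todos")
public class TodoController {

    @Autowired
    private TodoRepository todoRepository; // assumed Spring Data JPA repository

    @RequestMapping(method = RequestMethod.GET)
    public List<Todo> findAll() {
        return todoRepository.findAll();
    }

    @RequestMapping(method = RequestMethod.POST)
    public Todo create(@RequestBody Todo todo) {
        return todoRepository.save(todo);
    }

    @RequestMapping(value = "/{id}", method = RequestMethod.DELETE)
    public void delete(@PathVariable Integer id) {
        todoRepository.delete(id);
    }
}
```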

Spring Batch Tutorial with Spring Boot and Java Configuration

I've been working on migrating some batch jobs for Podcastpedia.org to Spring Batch. Before, these jobs were developed in my own kind of way, and I thought it was high time to use a more "standardized" approach. Because I had never used Spring with Java configuration before, I thought this was a good opportunity to learn about it, by configuring the Spring Batch jobs in Java. And since I am all into trying new things with Spring, why not also throw Spring Boot into the boat…

Note: Before you begin with this tutorial I recommend you first read Spring's Getting started – Creating a Batch Service, because the structure and the code presented here build on that original.

1. What I'll build

So, as mentioned, in this post I will present Spring Batch in the context of configuring it and developing some batch jobs with it for Podcastpedia.org. Here's a short description of the two jobs that are currently part of the Podcastpedia-batch project:

- addNewPodcastJob
  - reads podcast metadata (feed url, identifier, categories etc.) from a flat file,
  - transforms the data (parses it and prepares episodes to be inserted with the Apache Http Client), and
  - in the last step, inserts it into the Podcastpedia database and informs the submitter via email about it.
- notifyEmailSubscribersJob – people can subscribe to their favorite podcasts on Podcastpedia.org via email. For those who did, it is checked on a regular basis (DAILY, WEEKLY, MONTHLY) whether new episodes are available, and if they are, the subscribers are informed via email about them; the job reads from the database, expands the read data via JPA, re-groups it and notifies the subscribers via email.

Source code: The source code for this tutorial is available on GitHub – Podcastpedia-batch.

Note: Before you start I also highly recommend you read the Domain Language of Batch, so that terms like "Jobs", "Steps" or "ItemReaders" don't sound strange to you.

2. What you'll need

- A favorite text editor or IDE
- JDK 1.7 or later
- Maven 3.0+

3. Set up the project

The project is built with Maven. It uses Spring Boot, which makes it easy to create stand-alone Spring based applications that you can "just run". You can learn more about Spring Boot by visiting the project's website.

3.1.
Maven build file Because it uses Spring Boot it will have the spring-boot-starter-parent as its parent, and a couple of other spring-boot-starters that will get for us some libraries required in the project: pom.xml of the podcastpedia-batch project <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion><groupId>org.podcastpedia.batch</groupId> <artifactId>podcastpedia-batch</artifactId> <version>0.1.0</version> <properties> <sprinb.boot.version>1.1.6.RELEASE</sprinb.boot.version> <java.version>1.7</java.version> </properties> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>1.1.6.RELEASE</version> </parent> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-batch</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-jpa</artifactId> </dependency> <dependency> <groupId>org.apache.httpcomponents</groupId> <artifactId>httpclient</artifactId> <version>4.3.5</version> </dependency> <dependency> <groupId>org.apache.httpcomponents</groupId> <artifactId>httpcore</artifactId> <version>4.3.2</version> </dependency> <!-- velocity --> <dependency> <groupId>org.apache.velocity</groupId> <artifactId>velocity</artifactId> <version>1.7</version> </dependency> <dependency> <groupId>org.apache.velocity</groupId> <artifactId>velocity-tools</artifactId> <version>2.0</version> <exclusions> <exclusion> <groupId>org.apache.struts</groupId> <artifactId>struts-core</artifactId> </exclusion> </exclusions> </dependency> <!-- Project rome rss, atom --> <dependency> <groupId>rome</groupId> <artifactId>rome</artifactId> <version>1.0</version> </dependency> <!-- option this fetcher thing --> <dependency> <groupId>rome</groupId> <artifactId>rome-fetcher</artifactId> <version>1.0</version> </dependency> <dependency> <groupId>org.jdom</groupId> <artifactId>jdom</artifactId> <version>1.1</version> </dependency> <!-- PID 1 --> <dependency> <groupId>xerces</groupId> <artifactId>xercesImpl</artifactId> <version>2.9.1</version> </dependency> <!-- MySQL JDBC connector --> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>5.1.31</version> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-freemarker</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-remote-shell</artifactId> <exclusions> <exclusion> <groupId>javax.mail</groupId> <artifactId>mail</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>javax.mail</groupId> <artifactId>mail</artifactId> <version>1.4.7</version> </dependency> <dependency> <groupId>javax.inject</groupId> <artifactId>javax.inject</artifactId> <version>1</version> </dependency> <dependency> <groupId>org.twitter4j</groupId> <artifactId>twitter4j-core</artifactId> <version>[4.0,)</version> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> </dependency> </dependencies><build> <plugins> <plugin> <artifactId>maven-compiler-plugin</artifactId> </plugin> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> 
</plugin> </plugins> </build> </project> Note: One big advantage of using the spring-boot-starter-parent as the project’s parent is that you only have to upgrade the version of the parent and it will get the “latest” libraries for you. When I started the project spring boot was in version 1.1.3.RELEASE and by the time of finishing to write this post is already at 1.1.6.RELEASE. 3.2. Project directory structure I structured the project in the following way: Project directory structure └── src └── main └── java └── org └── podcastpedia └── batch └── common └── jobs └── addpodcast └── notifysubscribers Note:the org.podcastpedia.batch.jobs package contains sub-packages having specific classes to particular jobs.  the org.podcastpedia.batch.jobs.common package contains classes used by all the jobs, like for example the JPA entities that both the current jobs require.4. Create a batch Job configuration I will start by presenting the Java configuration class for the first batch job: Batch Job configuration package org.podcastpedia.batch.jobs.addpodcast;import org.podcastpedia.batch.common.configuration.DatabaseAccessConfiguration; import org.podcastpedia.batch.common.listeners.LogProcessListener; import org.podcastpedia.batch.common.listeners.ProtocolListener; import org.podcastpedia.batch.jobs.addpodcast.model.SuggestedPodcast; import org.springframework.batch.core.Job; import org.springframework.batch.core.Step; import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing; import org.springframework.batch.core.configuration.annotation.JobBuilderFactory; import org.springframework.batch.core.configuration.annotation.StepBuilderFactory; import org.springframework.batch.item.ItemProcessor; import org.springframework.batch.item.ItemReader; import org.springframework.batch.item.ItemWriter; import org.springframework.batch.item.file.FlatFileItemReader; import org.springframework.batch.item.file.LineMapper; import org.springframework.batch.item.file.mapping.BeanWrapperFieldSetMapper; import org.springframework.batch.item.file.mapping.DefaultLineMapper; import org.springframework.batch.item.file.transform.DelimitedLineTokenizer; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.context.annotation.Import; import org.springframework.core.io.ClassPathResource;import com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException;@Configuration @EnableBatchProcessing @Import({DatabaseAccessConfiguration.class, ServicesConfiguration.class}) public class AddPodcastJobConfiguration {@Autowired private JobBuilderFactory jobs; @Autowired private StepBuilderFactory stepBuilderFactory; // tag::jobstep[] @Bean public Job addNewPodcastJob(){ return jobs.get("addNewPodcastJob") .listener(protocolListener()) .start(step()) .build(); } @Bean public Step step(){ return stepBuilderFactory.get("step") .<SuggestedPodcast,SuggestedPodcast>chunk(1) //important to be one in this case to commit after every line read .reader(reader()) .processor(processor()) .writer(writer()) .listener(logProcessListener()) .faultTolerant() .skipLimit(10) //default is set to 0 .skip(MySQLIntegrityConstraintViolationException.class) .build(); } // end::jobstep[] // tag::readerwriterprocessor[] @Bean public ItemReader<SuggestedPodcast> reader(){ FlatFileItemReader<SuggestedPodcast> reader = new FlatFileItemReader<SuggestedPodcast>(); 
reader.setLinesToSkip(1);//first line is title definition reader.setResource(new ClassPathResource("suggested-podcasts.in")); reader.setLineMapper(lineMapper()); return reader; }@Bean public LineMapper<SuggestedPodcast> lineMapper() { DefaultLineMapper<SuggestedPodcast> lineMapper = new DefaultLineMapper<SuggestedPodcast>(); DelimitedLineTokenizer lineTokenizer = new DelimitedLineTokenizer(); lineTokenizer.setDelimiter(";"); lineTokenizer.setStrict(false); lineTokenizer.setNames(new String[]{"FEED_URL", "IDENTIFIER_ON_PODCASTPEDIA", "CATEGORIES", "LANGUAGE", "MEDIA_TYPE", "UPDATE_FREQUENCY", "KEYWORDS", "FB_PAGE", "TWITTER_PAGE", "GPLUS_PAGE", "NAME_SUBMITTER", "EMAIL_SUBMITTER"}); BeanWrapperFieldSetMapper<SuggestedPodcast> fieldSetMapper = new BeanWrapperFieldSetMapper<SuggestedPodcast>(); fieldSetMapper.setTargetType(SuggestedPodcast.class); lineMapper.setLineTokenizer(lineTokenizer); lineMapper.setFieldSetMapper(suggestedPodcastFieldSetMapper()); return lineMapper; }@Bean public SuggestedPodcastFieldSetMapper suggestedPodcastFieldSetMapper() { return new SuggestedPodcastFieldSetMapper(); }/** configure the processor related stuff */ @Bean public ItemProcessor<SuggestedPodcast, SuggestedPodcast> processor() { return new SuggestedPodcastItemProcessor(); } @Bean public ItemWriter<SuggestedPodcast> writer() { return new Writer(); } // end::readerwriterprocessor[] @Bean public ProtocolListener protocolListener(){ return new ProtocolListener(); } @Bean public LogProcessListener logProcessListener(){ return new LogProcessListener(); }} The @EnableBatchProcessing annotation adds many critical beans that support jobs and saves us configuration work. For example, you will also be able to @Autowire some useful beans into your context:a JobRepository (bean name “jobRepository”) a JobLauncher (bean name “jobLauncher”) a JobRegistry (bean name “jobRegistry”) a PlatformTransactionManager (bean name “transactionManager”) a JobBuilderFactory (bean name “jobBuilders”) as a convenience to prevent you from having to inject the job repository into every job, as in the examples above a StepBuilderFactory (bean name “stepBuilders”) as a convenience to prevent you from having to inject the job repository and transaction manager into every step. The first part focuses on the actual job configuration: Batch Job and Step configuration @Bean public Job addNewPodcastJob(){ return jobs.get("addNewPodcastJob") .listener(protocolListener()) .start(step()) .build(); }@Bean public Step step(){ return stepBuilderFactory.get("step") .<SuggestedPodcast,SuggestedPodcast>chunk(1) //important to be one in this case to commit after every line read .reader(reader()) .processor(processor()) .writer(writer()) .listener(logProcessListener()) .faultTolerant() .skipLimit(10) //default is set to 0 .skip(MySQLIntegrityConstraintViolationException.class) .build(); } The first method defines a job and the second one defines a single step. As you’ve read in The Domain Language of Batch, jobs are built from steps, where each step can involve a reader, a processor, and a writer. In the step definition, you define how much data to write at a time (in our case 1 record at a time). Next you specify the reader, processor and writer. 5. Spring Batch processing units Most of the batch processing can be described as reading data, doing some transformation on it and then writing the result out. This somewhat mirrors the Extract, Transform, Load (ETL) process, in case you are familiar with it.
Spring Batch provides three key interfaces to help perform bulk reading and writing: ItemReader, ItemProcessor and ItemWriter. 5.1. Readers ItemReader is an abstraction providing the means to retrieve data from many different types of input: flat files, XML files, databases, JMS, etc., one item at a time. See Appendix A. List of ItemReaders and ItemWriters for a complete list of available item readers. In the Podcastpedia batch jobs I use the following specialized ItemReaders: 5.1.1. FlatFileItemReader which, as the name implies, reads lines of data from a flat file that typically describe records with fields of data defined by fixed positions in the file or delimited by some special character (e.g. comma). This type of ItemReader is used in the first batch job, addNewPodcastJob. The input file used is named suggested-podcasts.in, resides in the classpath (src/main/resources) and looks something like the following: Input file for FlatFileItemReader FEED_URL; IDENTIFIER_ON_PODCASTPEDIA; CATEGORIES; LANGUAGE; MEDIA_TYPE; UPDATE_FREQUENCY; KEYWORDS; FB_PAGE; TWITTER_PAGE; GPLUS_PAGE; NAME_SUBMITTER; EMAIL_SUBMITTER http://www.5minutebiographies.com/feed/; 5minutebiographies; people_society, history; en; Audio; WEEKLY; biography, biographies, short biography, short biographies, 5 minute biographies, five minute biographies, 5 minute biography, five minute biography; https://www.facebook.com/5minutebiographies; https://twitter.com/5MinuteBios; ; Adrian Matei; adrianmatei@gmail.com http://notanotherpodcast.libsyn.com/rss; NotAnotherPodcast; entertainment; en; Audio; WEEKLY; Comedy, Sports, Cinema, Movies, Pop Culture, Food, Games; https://www.facebook.com/notanotherpodcastusa; https://twitter.com/NAPodcastUSA; https://plus.google.com/u/0/103089891373760354121/posts; Adrian Matei; adrianmatei@gmail.com As you can see, the first line defines the names of the “columns”, and the following lines contain the actual data (delimited by “;”), which needs to be translated into the domain objects relevant in this context. Let’s now see how to configure the FlatFileItemReader: FlatFileItemReader example @Bean public ItemReader<SuggestedPodcast> reader(){ FlatFileItemReader<SuggestedPodcast> reader = new FlatFileItemReader<SuggestedPodcast>(); reader.setLinesToSkip(1);//first line is title definition reader.setResource(new ClassPathResource("suggested-podcasts.in")); reader.setLineMapper(lineMapper()); return reader; } You can specify, among other things, the input resource, the number of lines to skip, and a line mapper. 5.1.1.1. LineMapper The LineMapper is an interface for mapping lines (strings) to domain objects, typically used to map lines read from a file to domain objects on a per line basis.
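For reference, the LineMapper contract itself boils down to a single callback method; the sketch below reflects how the interface is defined in Spring Batch at the time of writing (double-check the exact signature against the version you use):

public interface LineMapper<T> {
    // called once per input line; the line number is handy for error reporting
    T mapLine(String line, int lineNumber) throws Exception;
}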
For the Podcastpedia job I used the DefaultLineMapper, which is a two-phase implementation consisting of tokenization of the line into a FieldSet, followed by mapping it to an item: LineMapper default implementation example @Bean public LineMapper<SuggestedPodcast> lineMapper() { DefaultLineMapper<SuggestedPodcast> lineMapper = new DefaultLineMapper<SuggestedPodcast>(); DelimitedLineTokenizer lineTokenizer = new DelimitedLineTokenizer(); lineTokenizer.setDelimiter(";"); lineTokenizer.setStrict(false); lineTokenizer.setNames(new String[]{"FEED_URL", "IDENTIFIER_ON_PODCASTPEDIA", "CATEGORIES", "LANGUAGE", "MEDIA_TYPE", "UPDATE_FREQUENCY", "KEYWORDS", "FB_PAGE", "TWITTER_PAGE", "GPLUS_PAGE", "NAME_SUBMITTER", "EMAIL_SUBMITTER"}); BeanWrapperFieldSetMapper<SuggestedPodcast> fieldSetMapper = new BeanWrapperFieldSetMapper<SuggestedPodcast>(); fieldSetMapper.setTargetType(SuggestedPodcast.class); lineMapper.setLineTokenizer(lineTokenizer); lineMapper.setFieldSetMapper(suggestedPodcastFieldSetMapper()); return lineMapper; }the DelimitedLineTokenizer splits the input String via the “;” delimiter. if you set the strict flag to false then lines with fewer tokens will be tolerated and padded with empty columns, and lines with more tokens will simply be truncated. the column names from the first line are set via lineTokenizer.setNames(...), and the field set mapper is set via lineMapper.setFieldSetMapper(...). Note: The FieldSet is an “interface used by flat file input sources to encapsulate concerns of converting an array of Strings to Java native types. A bit like the role played by ResultSet in JDBC, clients will know the name or position of strongly typed fields that they want to extract.” 5.1.1.2. FieldSetMapper The FieldSetMapper is an interface that is used to map data obtained from a FieldSet into an object. Here’s my implementation, which maps the FieldSet to the SuggestedPodcast domain object that will then be passed on to the processor: FieldSetMapper implementation public class SuggestedPodcastFieldSetMapper implements FieldSetMapper<SuggestedPodcast> {@Override public SuggestedPodcast mapFieldSet(FieldSet fieldSet) throws BindException { SuggestedPodcast suggestedPodcast = new SuggestedPodcast(); suggestedPodcast.setCategories(fieldSet.readString("CATEGORIES")); suggestedPodcast.setEmail(fieldSet.readString("EMAIL_SUBMITTER")); suggestedPodcast.setName(fieldSet.readString("NAME_SUBMITTER")); suggestedPodcast.setTags(fieldSet.readString("KEYWORDS")); //some of the attributes we can map directly into the Podcast entity that we'll insert later into the database Podcast podcast = new Podcast(); podcast.setUrl(fieldSet.readString("FEED_URL")); podcast.setIdentifier(fieldSet.readString("IDENTIFIER_ON_PODCASTPEDIA")); podcast.setLanguageCode(LanguageCode.valueOf(fieldSet.readString("LANGUAGE"))); podcast.setMediaType(MediaType.valueOf(fieldSet.readString("MEDIA_TYPE"))); podcast.setUpdateFrequency(UpdateFrequency.valueOf(fieldSet.readString("UPDATE_FREQUENCY"))); podcast.setFbPage(fieldSet.readString("FB_PAGE")); podcast.setTwitterPage(fieldSet.readString("TWITTER_PAGE")); podcast.setGplusPage(fieldSet.readString("GPLUS_PAGE")); suggestedPodcast.setPodcast(podcast);return suggestedPodcast; } } 5.1.2. JdbcCursorItemReader In the second job, notifyEmailSubscribersJob, in the reader I only read email subscribers from a single database table, but further on, in the processor, a more detailed read (via JPA) is executed to retrieve all the new episodes of the podcasts the user subscribed to. This is a common pattern employed in the batch world.
Follow this link for more Common Batch Patterns. For the initial read, I chose the JdbcCursorItemReader, which is a simple reader implementation that opens a JDBC cursor and continually retrieves the next row in the ResultSet: JdbcCursorItemReader example @Bean public ItemReader<User> notifySubscribersReader(){ JdbcCursorItemReader<User> reader = new JdbcCursorItemReader<User>(); String sql = "select * from users where is_email_subscriber is not null"; reader.setSql(sql); reader.setDataSource(dataSource); reader.setRowMapper(rowMapper());return reader; } Note that I had to set the SQL, the data source to read from and a RowMapper. 5.1.2.1. RowMapper The RowMapper is an interface used by JdbcTemplate for mapping rows of a ResultSet on a per-row basis. My implementation of this interface, UserRowMapper, performs the actual work of mapping each row to a result object, but I don’t need to worry about exception handling: RowMapper implementation public class UserRowMapper implements RowMapper<User> {@Override public User mapRow(ResultSet rs, int rowNum) throws SQLException { User user = new User(); user.setEmail(rs.getString("email")); return user; }} 5.2. Writers ItemWriter is an abstraction that represents the output of a Step, one batch or chunk of items at a time. Generally, an item writer has no knowledge of the input it will receive next, only the item that was passed in its current invocation. The writers for the two jobs presented are quite simple. They just use external services to send email notifications and post tweets on Podcastpedia’s account. Here is the implementation of the ItemWriter for the first job – addNewPodcast: Writer implementation of ItemWriter package org.podcastpedia.batch.jobs.addpodcast;import java.util.Date; import java.util.List;import javax.inject.Inject; import javax.persistence.EntityManager;import org.podcastpedia.batch.common.entities.Podcast; import org.podcastpedia.batch.jobs.addpodcast.model.SuggestedPodcast; import org.podcastpedia.batch.jobs.addpodcast.service.EmailNotificationService; import org.podcastpedia.batch.jobs.addpodcast.service.SocialMediaService; import org.springframework.batch.item.ItemWriter; import org.springframework.beans.factory.annotation.Autowired;public class Writer implements ItemWriter<SuggestedPodcast>{@Autowired private EntityManager entityManager; @Inject private EmailNotificationService emailNotificationService; @Inject private SocialMediaService socialMediaService; @Override public void write(List<?
extends SuggestedPodcast> items) throws Exception {if(items.get(0) != null){ SuggestedPodcast suggestedPodcast = items.get(0); //first insert the data in the database Podcast podcast = suggestedPodcast.getPodcast(); podcast.setInsertionDate(new Date()); entityManager.persist(podcast); entityManager.flush(); //notify the submitter about the insertion and post a tweet about it String url = buildUrlOnPodcastpedia(podcast); emailNotificationService.sendPodcastAdditionConfirmation( suggestedPodcast.getName(), suggestedPodcast.getEmail(), url); if(podcast.getTwitterPage() != null){ socialMediaService.postOnTwitterAboutNewPodcast(podcast, url); } }}private String buildUrlOnPodcastpedia(Podcast podcast) { StringBuffer urlOnPodcastpedia = new StringBuffer( "http://www.podcastpedia.org"); if (podcast.getIdentifier() != null) { urlOnPodcastpedia.append("/" + podcast.getIdentifier()); } else { urlOnPodcastpedia.append("/podcasts/"); urlOnPodcastpedia.append(String.valueOf(podcast.getPodcastId())); urlOnPodcastpedia.append("/" + podcast.getTitleInUrl()); } String url = urlOnPodcastpedia.toString(); return url; }} As you can see, there’s nothing special here, except that the write method has to be overridden. This is where the injected external services EmailNotificationService and SocialMediaService are used to inform the podcast submitter via email about the addition to the podcast directory and, if a Twitter page was submitted, to post a tweet on Podcastpedia’s wall. You can find a detailed explanation of how to send email via Velocity and how to post on Twitter from Java in the following posts:How to compose html emails in Java with Spring and Velocity How to post to Twitter from Java with Twitter4J in 10 minutes 5.3. Processors ItemProcessor is an abstraction that represents the business processing of an item. While the ItemReader reads one item and the ItemWriter writes them, the ItemProcessor provides access to transform the item or apply other business processing. When using your own processors you have to implement the ItemProcessor<I,O> interface, whose only method, O process(I item) throws Exception, returns a potentially modified or new item for continued processing. If the returned result is null, it is assumed that processing of the item should not continue.
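As a minimal illustration of that null-means-filter contract (a sketch only – this class name and its filtering rule are made up for the example and are not part of the Podcastpedia code base), a processor that wants to drop an item simply returns null for it:

import org.springframework.batch.item.ItemProcessor;

public class SkipUsersWithoutEmailProcessor implements ItemProcessor<User, User> {

    @Override
    public User process(User item) throws Exception {
        // returning null tells Spring Batch to filter this item out of the chunk
        if (item.getEmail() == null || item.getEmail().isEmpty()) {
            return null;
        }
        // otherwise the (possibly transformed) item is passed on to the writer
        return item;
    }
}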
The processor of the first job requires a bit more logic, because I have to set the etag and last-modified header attributes, the feed attributes, and the episodes, categories and keywords of the podcast: ItemProcessor implementation for the job addNewPodcast public class SuggestedPodcastItemProcessor implements ItemProcessor<SuggestedPodcast, SuggestedPodcast> {private static final int TIMEOUT = 10;@Autowired ReadDao readDao; @Autowired PodcastAndEpisodeAttributesService podcastAndEpisodeAttributesService; @Autowired private PoolingHttpClientConnectionManager poolingHttpClientConnectionManager; @Autowired private SyndFeedService syndFeedService;/** * Method used to build the categories, tags and episodes of the podcast */ @Override public SuggestedPodcast process(SuggestedPodcast item) throws Exception { if(isPodcastAlreadyInTheDirectory(item.getPodcast().getUrl())) { return null; } String[] categories = item.getCategories().trim().split("\\s*,\\s*");item.getPodcast().setAvailability(org.apache.http.HttpStatus.SC_OK); //set etag and last modified attributes for the podcast setHeaderFieldAttributes(item.getPodcast()); //set the other attributes of the podcast from the feed podcastAndEpisodeAttributesService.setPodcastFeedAttributes(item.getPodcast()); //set the categories List<Category> categoriesByNames = readDao.findCategoriesByNames(categories); item.getPodcast().setCategories(categoriesByNames); //set the tags setTagsForPodcast(item); //build the episodes setEpisodesForPodcast(item.getPodcast()); return item; } ...... } The processor of the second job uses the ‘Driving Query’ approach, where I expand the data retrieved by the reader with another “JPA read” and group the items by podcast, together with their episodes, so that they look nice in the emails that are sent out to subscribers: ItemProcessor implementation of the second job – notifySubscribers @Scope("step") public class NotifySubscribersItemProcessor implements ItemProcessor<User, User> {@Autowired EntityManager em; @Value("#{jobParameters[updateFrequency]}") String updateFrequency; @Override public User process(User item) throws Exception { String sqlInnerJoinEpisodes = "select e from User u JOIN u.podcasts p JOIN p.episodes e WHERE u.email=?1 AND p.updateFrequency=?2 AND" + " e.isNew IS NOT NULL AND e.availability=200 ORDER BY e.podcast.podcastId ASC, e.publicationDate ASC"; TypedQuery<Episode> queryInnerJoinepisodes = em.createQuery(sqlInnerJoinEpisodes, Episode.class); queryInnerJoinepisodes.setParameter(1, item.getEmail()); queryInnerJoinepisodes.setParameter(2, UpdateFrequency.valueOf(updateFrequency)); List<Episode> newEpisodes = queryInnerJoinepisodes.getResultList(); return regroupPodcastsWithEpisodes(item, newEpisodes); } ....... } Note: If you’d like to find out more about how to use the Apache Http Client to get the etag and last-modified headers, you can have a look at my post – How to use the new Apache Http Client to make a HEAD request.
6. Execute the batch application Batch processing can be embedded in web applications and WAR files, but I initially chose the simpler approach of creating a standalone application that can be started from the Java main() method: Batch processing Java main() method package org.podcastpedia.batch; //imports ...;@ComponentScan @EnableAutoConfiguration public class Application {private static final String NEW_EPISODES_NOTIFICATION_JOB = "newEpisodesNotificationJob"; private static final String ADD_NEW_PODCAST_JOB = "addNewPodcastJob";public static void main(String[] args) throws BeansException, JobExecutionAlreadyRunningException, JobRestartException, JobInstanceAlreadyCompleteException, JobParametersInvalidException, InterruptedException { Log log = LogFactory.getLog(Application.class); SpringApplication app = new SpringApplication(Application.class); app.setWebEnvironment(false); ConfigurableApplicationContext ctx= app.run(args); JobLauncher jobLauncher = ctx.getBean(JobLauncher.class); if(ADD_NEW_PODCAST_JOB.equals(args[0])){ //addNewPodcastJob Job addNewPodcastJob = ctx.getBean(ADD_NEW_PODCAST_JOB, Job.class); JobParameters jobParameters = new JobParametersBuilder() .addDate("date", new Date()) .toJobParameters(); JobExecution jobExecution = jobLauncher.run(addNewPodcastJob, jobParameters); BatchStatus batchStatus = jobExecution.getStatus(); while(batchStatus.isRunning()){ log.info("*********** Still running.... **************"); Thread.sleep(1000); } log.info(String.format("*********** Exit status: %s", jobExecution.getExitStatus().getExitCode())); JobInstance jobInstance = jobExecution.getJobInstance(); log.info(String.format("********* Name of the job %s", jobInstance.getJobName())); log.info(String.format("*********** job instance Id: %d", jobInstance.getId())); System.exit(0); } else if(NEW_EPISODES_NOTIFICATION_JOB.equals(args[0])){ JobParameters jobParameters = new JobParametersBuilder() .addDate("date", new Date()) .addString("updateFrequency", args[1]) .toJobParameters(); jobLauncher.run(ctx.getBean(NEW_EPISODES_NOTIFICATION_JOB, Job.class), jobParameters); } else { throw new IllegalArgumentException("Please provide a valid Job name as first application parameter"); } System.exit(0); } } The best explanation of the SpringApplication, @ComponentScan and @EnableAutoConfiguration magic comes from the source – Getting Started – Creating a Batch Service: “The main() method defers to the SpringApplication helper class, providing Application.class as an argument to its run() method. This tells Spring to read the annotation metadata from Application and to manage it as a component in the Spring application context. The @ComponentScan annotation tells Spring to search recursively through the org.podcastpedia.batch package and its children for classes marked directly or indirectly with Spring’s @Component annotation. This directive ensures that Spring finds and registers BatchConfiguration, because it is marked with @Configuration, which in turn is a kind of @Component annotation. The @EnableAutoConfiguration annotation switches on reasonable default behaviors based on the content of your classpath. For example, it looks for any class that implements the CommandLineRunner interface and invokes its run() method.” Execution construction steps: the JobLauncher, which is a simple interface for controlling jobs, is retrieved from the ApplicationContext. Remember this is automatically made available via the @EnableBatchProcessing annotation.
next, based on the first parameter of the application (args[0]), I retrieve the corresponding Job from the ApplicationContext. then the JobParameters are prepared, where I use the current date – .addDate("date", new Date()) – so that the job executions are always unique. once everything is in place, the job can be executed: JobExecution jobExecution = jobLauncher.run(addNewPodcastJob, jobParameters); you can use the returned jobExecution to gain access to BatchStatus, exit code, or job name and id. Note: I highly recommend you read and understand the Meta-Data Schema for Spring Batch. It will also help you better understand the Spring Batch Domain objects. 6.1. Running the application on dev and prod environments To be able to run the Spring Batch / Spring Boot application on different environments I make use of the Spring Profiles capability. By default the application runs with development data (database). But if I want the job to use the production database I have to do the following: provide the environment argument -Dspring.profiles.active=prod have the production database properties configured in the application-prod.properties file in the classpath, right beside the default application.properties file. Summary In this tutorial we’ve learned how to configure a Spring Batch project with Spring Boot and Java configuration, how to use some of the most common readers in batch processing, how to configure some simple jobs, and how to start Spring Batch jobs from a main method.Reference: Spring Batch Tutorial with Spring Boot and Java Configuration from our JCG partner Adrian Matei at the Codingpedia.org blog....

Hibernate bytecode enhancement

Introduction Now that you know the basics of Hibernate dirty checking, we can dig into enhanced dirty checking mechanisms. While the default graph-traversal algorithm might be sufficient for most use-cases, there might be times when you need an optimized dirty checking algorithm and instrumentation is much more convinient than building your own custom strategy.         Using Ant Hibernate Tools Traditionally, The Hibernate Tools have been focused on Ant and Eclipse. Bytecode instrumentation has been possible since Hibernate 3, but it required an Ant task to run the CGLIB or Javassist bytecode enhancement routines. Maven supports running Ant tasks through the maven-antrun-plugin: <build> <plugins> <plugin> <artifactId>maven-antrun-plugin</artifactId> <executions> <execution> <id>Instrument domain classes</id> <configuration> <tasks> <taskdef name="instrument" classname="org.hibernate.tool.instrument.javassist.InstrumentTask"> <classpath> <path refid="maven.dependency.classpath"/> <path refid="maven.plugin.classpath"/> </classpath> </taskdef> <instrument verbose="true"> <fileset dir="${project.build.outputDirectory}"> <include name="**/flushing/*.class"/> </fileset> </instrument> </tasks> </configuration> <phase>process-classes</phase> <goals> <goal>run</goal> </goals> </execution> </executions> <dependencies> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-core</artifactId> <version>${hibernate.version}</version> </dependency> <dependency> <groupId>org.javassist</groupId> <artifactId>javassist</artifactId> <version>${javassist.version}</version> </dependency> </dependencies> </plugin> </plugins> </build> So for the following entity source class: @Entity public class EnhancedOrderLine {@Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id;private Long number;private String orderedBy;private Date orderedOn;public Long getId() { return id; }public Long getNumber() { return number; }public void setNumber(Long number) { this.number = number; }public String getOrderedBy() { return orderedBy; }public void setOrderedBy(String orderedBy) { this.orderedBy = orderedBy; }public Date getOrderedOn() { return orderedOn; }public void setOrderedOn(Date orderedOn) { this.orderedOn = orderedOn; } } During build-time the following class is generated: @Entity public class EnhancedOrderLine implements FieldHandled {@Id @GeneratedValue(strategy=GenerationType.AUTO) private Long id; private Long number; private String orderedBy; private Date orderedOn; private transient FieldHandler $JAVASSIST_READ_WRITE_HANDLER;public Long getId() { return $javassist_read_id(); }public Long getNumber() { return $javassist_read_number(); }public void setNumber(Long number) { $javassist_write_number(number); }public String getOrderedBy() { return $javassist_read_orderedBy(); }public void setOrderedBy(String orderedBy) { $javassist_write_orderedBy(orderedBy); }public Date getOrderedOn() { return $javassist_read_orderedOn(); }public void setOrderedOn(Date orderedOn) { $javassist_write_orderedOn(orderedOn); }public FieldHandler getFieldHandler() { return this.$JAVASSIST_READ_WRITE_HANDLER; }public void setFieldHandler(FieldHandler paramFieldHandler) { this.$JAVASSIST_READ_WRITE_HANDLER = paramFieldHandler; }public Long $javassist_read_id() { if (getFieldHandler() == null) return this.id; }public void $javassist_write_id(Long paramLong) { if (getFieldHandler() == null) { this.id = paramLong; return; } this.id = ((Long)getFieldHandler().writeObject(this, "id", this.id, paramLong)); }public Long 
$javassist_read_number() { if (getFieldHandler() == null) return this.number; }public void $javassist_write_number(Long paramLong) { if (getFieldHandler() == null) { this.number = paramLong; return; } this.number = ((Long)getFieldHandler().writeObject(this, "number", this.number, paramLong)); }public String $javassist_read_orderedBy() { if (getFieldHandler() == null) return this.orderedBy; }public void $javassist_write_orderedBy(String paramString) { if (getFieldHandler() == null) { this.orderedBy = paramString; return; } this.orderedBy = ((String)getFieldHandler().writeObject(this, "orderedBy", this.orderedBy, paramString)); }public Date $javassist_read_orderedOn() { if (getFieldHandler() == null) return this.orderedOn; }public void $javassist_write_orderedOn(Date paramDate) { if (getFieldHandler() == null) { this.orderedOn = paramDate; return; } this.orderedOn = ((Date)getFieldHandler().writeObject(this, "orderedOn", this.orderedOn, paramDate)); } } Although the org.hibernate.bytecode.instrumentation.spi.AbstractFieldInterceptor manages to intercept dirty fields, this info is never really enquired during dirtiness tracking. The InstrumentTask bytecode enhancement can only tell whether an entity is dirty, lacking support for indicating which properties have been modified, therefore making the InstrumentTask more suitable for “No-proxy” LAZY fetching strategy. hibernate-enhance-maven-plugin Hibernate 4.2.8 added support for a dedicated Maven bytecode enhancement plugin. The Maven bytecode enhancement plugin is easy to configure: <build> <plugins> <plugin> <groupId>org.hibernate.orm.tooling</groupId> <artifactId>hibernate-enhance-maven-plugin</artifactId> <executions> <execution> <phase>compile</phase> <goals> <goal>enhance</goal> </goals> </execution> </executions> </plugin> </plugins> </build> During project build-time, the following class is being generated: @Entity public class EnhancedOrderLine implements ManagedEntity, PersistentAttributeInterceptable, SelfDirtinessTracker {@Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; private Long number; private String orderedBy; private Date orderedOn;@Transient private transient PersistentAttributeInterceptor $$_hibernate_attributeInterceptor;@Transient private transient Set $$_hibernate_tracker;@Transient private transient CollectionTracker $$_hibernate_collectionTracker;@Transient private transient EntityEntry $$_hibernate_entityEntryHolder;@Transient private transient ManagedEntity $$_hibernate_previousManagedEntity;@Transient private transient ManagedEntity $$_hibernate_nextManagedEntity;public Long getId() { return $$_hibernate_read_id(); }public Long getNumber() { return $$_hibernate_read_number(); }public void setNumber(Long number) { $$_hibernate_write_number(number); }public String getOrderedBy() { return $$_hibernate_read_orderedBy(); }public void setOrderedBy(String orderedBy) { $$_hibernate_write_orderedBy(orderedBy); }public Date getOrderedOn() { return $$_hibernate_read_orderedOn(); }public void setOrderedOn(Date orderedOn) { $$_hibernate_write_orderedOn(orderedOn); }public PersistentAttributeInterceptor $$_hibernate_getInterceptor() { return this.$$_hibernate_attributeInterceptor; }public void $$_hibernate_setInterceptor(PersistentAttributeInterceptor paramPersistentAttributeInterceptor) { this.$$_hibernate_attributeInterceptor = paramPersistentAttributeInterceptor; }public void $$_hibernate_trackChange(String paramString) { if (this.$$_hibernate_tracker == null) this.$$_hibernate_tracker = new HashSet(); if 
(!this.$$_hibernate_tracker.contains(paramString)) this.$$_hibernate_tracker.add(paramString); }private boolean $$_hibernate_areCollectionFieldsDirty() { return ($$_hibernate_getInterceptor() != null) && (this.$$_hibernate_collectionTracker != null); }private void $$_hibernate_getCollectionFieldDirtyNames(Set paramSet) { if (this.$$_hibernate_collectionTracker == null) return; }public boolean $$_hibernate_hasDirtyAttributes() { return ((this.$$_hibernate_tracker == null) || (this.$$_hibernate_tracker.isEmpty())) && ($$_hibernate_areCollectionFieldsDirty()); }private void $$_hibernate_clearDirtyCollectionNames() { if (this.$$_hibernate_collectionTracker == null) this.$$_hibernate_collectionTracker = new CollectionTracker(); }public void $$_hibernate_clearDirtyAttributes() { if (this.$$_hibernate_tracker != null) this.$$_hibernate_tracker.clear(); $$_hibernate_clearDirtyCollectionNames(); }public Set<String> $$_hibernate_getDirtyAttributes() { if (this.$$_hibernate_tracker == null) this.$$_hibernate_tracker = new HashSet(); $$_hibernate_getCollectionFieldDirtyNames(this.$$_hibernate_tracker); return this.$$_hibernate_tracker; }private Long $$_hibernate_read_id() { if ($$_hibernate_getInterceptor() != null) this.id = ((Long) $$_hibernate_getInterceptor().readObject(this, "id", this.id)); return this.id; }private void $$_hibernate_write_id(Long paramLong) { if (($$_hibernate_getInterceptor() == null) || ((this.id == null) || (this.id.equals(paramLong)))) break label39; $$_hibernate_trackChange("id"); label39: Long localLong = paramLong; if ($$_hibernate_getInterceptor() != null) localLong = (Long) $$_hibernate_getInterceptor().writeObject(this, "id", this.id, paramLong); this.id = localLong; }private Long $$_hibernate_read_number() { if ($$_hibernate_getInterceptor() != null) this.number = ((Long) $$_hibernate_getInterceptor().readObject(this, "number", this.number)); return this.number; }private void $$_hibernate_write_number(Long paramLong) { if (($$_hibernate_getInterceptor() == null) || ((this.number == null) || (this.number.equals(paramLong)))) break label39; $$_hibernate_trackChange("number"); label39: Long localLong = paramLong; if ($$_hibernate_getInterceptor() != null) localLong = (Long) $$_hibernate_getInterceptor().writeObject(this, "number", this.number, paramLong); this.number = localLong; }private String $$_hibernate_read_orderedBy() { if ($$_hibernate_getInterceptor() != null) this.orderedBy = ((String) $$_hibernate_getInterceptor().readObject(this, "orderedBy", this.orderedBy)); return this.orderedBy; }private void $$_hibernate_write_orderedBy(String paramString) { if (($$_hibernate_getInterceptor() == null) || ((this.orderedBy == null) || (this.orderedBy.equals(paramString)))) break label39; $$_hibernate_trackChange("orderedBy"); label39: String str = paramString; if ($$_hibernate_getInterceptor() != null) str = (String) $$_hibernate_getInterceptor().writeObject(this, "orderedBy", this.orderedBy, paramString); this.orderedBy = str; }private Date $$_hibernate_read_orderedOn() { if ($$_hibernate_getInterceptor() != null) this.orderedOn = ((Date) $$_hibernate_getInterceptor().readObject(this, "orderedOn", this.orderedOn)); return this.orderedOn; }private void $$_hibernate_write_orderedOn(Date paramDate) { if (($$_hibernate_getInterceptor() == null) || ((this.orderedOn == null) || (this.orderedOn.equals(paramDate)))) break label39; $$_hibernate_trackChange("orderedOn"); label39: Date localDate = paramDate; if ($$_hibernate_getInterceptor() != null) localDate = (Date) 
$$_hibernate_getInterceptor().writeObject(this, "orderedOn", this.orderedOn, paramDate); this.orderedOn = localDate; }public Object $$_hibernate_getEntityInstance() { return this; }public EntityEntry $$_hibernate_getEntityEntry() { return this.$$_hibernate_entityEntryHolder; }public void $$_hibernate_setEntityEntry(EntityEntry paramEntityEntry) { this.$$_hibernate_entityEntryHolder = paramEntityEntry; }public ManagedEntity $$_hibernate_getPreviousManagedEntity() { return this.$$_hibernate_previousManagedEntity; }public void $$_hibernate_setPreviousManagedEntity(ManagedEntity paramManagedEntity) { this.$$_hibernate_previousManagedEntity = paramManagedEntity; }public ManagedEntity $$_hibernate_getNextManagedEntity() { return this.$$_hibernate_nextManagedEntity; }public void $$_hibernate_setNextManagedEntity(ManagedEntity paramManagedEntity) { this.$$_hibernate_nextManagedEntity = paramManagedEntity; } } It’s easy to realize that the new bytecode enhancement logic is different than the one generated by the previous InstrumentTask. Like the custom dirty checking mechanism, the new bytecode enhancement version records what properties have changed, not just a simple dirty boolean flag. The enhancement logic marks dirty fields upon changing. This approach is much more efficient than having to compare all current property values against the load-time snapshot data. Are we there yet? Even if the entity class bytecode is being enhanced, somehow with Hibernate 4.3.6 there are still missing puzzle pieces. For instance, when calling setNumber(Long number) the following intercepting method gets executed: private void $$_hibernate_write_number(Long paramLong) { if (($$_hibernate_getInterceptor() == null) || ((this.number == null) || (this.number.equals(paramLong)))) break label39; $$_hibernate_trackChange("number"); label39: Long localLong = paramLong; if ($$_hibernate_getInterceptor() != null) localLong = (Long) $$_hibernate_getInterceptor().writeObject(this, "number", this.number, paramLong); this.number = localLong; } In my examples, $$_hibernate_getInterceptor() is always null, which bypasses the $$_hibernate_trackChange(“number”) call. Because of this, no dirty property is going to be recorded, forcing Hibernate to fall-back to the default deep-comparison dirty checking algorithm. So, even if Hibernate has made considerable progress in this particular area, the dirty checking enhancement still requires additional work to become readily available.Code available on GitHub.Reference: Hibernate bytecode enhancement from our JCG partner Vlad Mihalcea at the Vlad Mihalcea’s Blog blog....

Agile Myth #6: “Agile Means No Upfront Design”

This is my 7th post in my 13-part series, “Agile Myths and Misconceptions”. It’s based on the talk I gave at the first PSIA Softech Philippine Software Engineering Conference. I am striving to correct 12 common misconceptions about Agile Software Development. First of all, let me correct the notion that Agile has little to no concern about design. Many, if not most, of the signatories of the Agile Manifesto are thought leaders in design. The two people who called the seminal “Lightweight Process Summit” at the Snowbird resort in 2001, where the term “Agile Software Development” was coined and the Agile Manifesto was written, were Martin Fowler and Robert Martin. Martin Fowler is synonymous with Design Patterns. Robert Martin wrote one of the first books with “Agile” in the title – the book “Agile Software Development”. This is the book where he outlined the now famous “SOLID” principles, and most of the rest of the book dealt with Design Patterns, Refactoring, and Test-Driven Development. So now let’s discuss upfront design. Agile teams have been told that “Big Design Upfront” is bad, so some interpret that to mean that “No Design Upfront” must be good. The truth is somewhere in the middle – “Minimal Design Upfront”, supported by Spikes. What’s Wrong with Big Upfront Design? The main problem with “Big Upfront Design” is that after all the time and energy spent in creating a design, we almost always find out that much of the design is wrong only when the team starts coding, or even worse, when the team starts performance testing towards the end of the project! For the Java developers, do you remember the dark days of EJB2? All of us leapt like lemmings to adopt EJB into our projects, since it was the Sun Microsystems standard designed to make systems “scalable”. What we ended up with was project after project that could only support a fraction of the users they were meant to support. I know of one payroll project that was supposed to support thousands of users, but when it was tested with just ten users the system crawled. The whole project was canceled, after two years of development and huge losses for both client and vendor. How about Healthcare.gov? A lot of problems were blamed on the use of a very new columnar database that promised performance and scalability, but caused more problems than it solved. And when I myself was a developer, I remember staring at a UML diagram that my boss was forcing me to implement, but which was impossible to implement in code! The main problem with Big Upfront Design is that very little of it is validated. And by the time we find out that a design decision is wrong, so much code has already been written that changing the design becomes expensive, wasteful, and risky. Agile Design is Incremental & Evidence-Based So how is design done in Agile? First is the idea of “Just Enough Design” – the team makes just enough design decisions in order to get going with the project. However, as in all decisions in Agile, decisions need to be empirical or evidence-based. Design decisions therefore need to be validated before a large amount of code is invested in the design. One of the best ways to validate a design decision is through a Spike Solution. The team writes one or more small prototypes that implement the design, often taking actual use cases or features from the project. If performance is a concern of the design, the team may subject the Spike Solution to performance tests.
Other kinds of tests may also be applied, depending on what concerns the design is supposed to address (security, integrity, scalability, ease-of-use, etc.). So early on, the team finds out if a design approach is easy or hard to use, performs as expected, or is otherwise relevant to the problems the design is meant to solve. After some initial design decisions have been made, the rest of the design is done incrementally, with every Sprint. This is the concept of “Emergent Design”. With every Sprint, the team continues to do just enough design to implement the near-term work. Improvements or corrections to the design are discovered along the way and implemented, often through Refactoring. A Note on UML Tools While I’m advocating design, I’m not advocating going out and getting some UML software. I’ve tried a lot of tools in my years in software – UML tools start out handy for small designs, but as designs get more complex, and the need to collaborate on designs increases, the UML tools just tend to hold the team back rather than help the team forward. Arguably the best design tool for a team is a whiteboard. It radiates information to the entire team, it allows for impromptu discussions and collaboration, and its limited space prevents you from overcomplicating the design, so you get on with implementing the code. Don’t waste time detailing your design on some UML tool. Scribble just enough on a whiteboard for the team to get going. Finish your design in the code itself. Which Parts of the Design Are Upfront? So what specifically are the parts of the design done upfront? For all the projects I’ve observed, there are at least three aspects of design where some upfront work is done even before the first Sprint begins. These are Domain Model, Architecture, and User Interface. It’s pretty much impossible to get a team to work together efficiently unless at least some design in each of these three aspects has been decided on beforehand. Again, only just enough design for the team to get started is done. The rest of the design emerges with each Sprint. Domain Model The business logic of a system is the most important part, since it’s the very reason why the system exists. It’s also the part of the system that usually changes most often. A lot of teams just cram their business logic into procedural routines called “Transaction Scripts”. This is fine for simple systems, but for anything moderately complex, this results in a lot of messy, convoluted, duplicated, hard-to-understand code. And since business logic changes a lot, this kind of confusing code can be a source of bugs. In addition to that, code that’s difficult to understand slows the team down. It’s therefore important that the business logic is written in a way that’s organized, readable, and easy and safe to change. The recommended way to achieve this is through what’s called a “Rich Domain Model”, meaning the entities of a particular business domain are modeled as classes, and their interactions with one another are coded as method calls to one another. Designing domain models is a lengthy topic, and I probably lost some of you already, so let me just point you to a great starting point – Craig Larman’s “Applying UML & Patterns”, which is a step-by-step guide to analyzing a business domain to design a domain model. Supplement that with Len Silverston’s The Data Model Resource Book series, which is a catalog of industry-tested data models, which are starting points for your domain model designs.
I’d suggest that as a team starts with a project, it draws some simple, partial, low-detail UML Class Diagrams to agree on their understanding of the business domain. It would be good if the team can bring in Product Owners or Customers to validate their understanding. This is one reason to be biased towards low-detail diagrams over high-detail diagrams – low-detail diagrams tend to be more understandable and less intimidating to non-technical people. Architecture “Architecture” is just a fancy term for the part of the design that deals with the “non-functional” requirements, or in other words, requirements that are not business logic. Examples are performance, uptime, security, and cost. Often, architectural mistakes are only discovered towards the end, or worse, after the system is deployed. Architectural mistakes could manifest themselves as a slow system, or maybe a security breach! It could manifest itself as expensive recurring costs or the cost of scaling the system is high. Since these mistakes are found towards the end, it’s very expensive and risky to change these architectural decisions, since so much code has already been invested on top of the chosen architecture. Why are architectural decisions so often wrong? Architectural decisions are usually based on vendor documentation, vendor demos, or popularity. A vendor may present a demo or a proof-of-concept to sell you on a product, but remember the vendor is biased – he built the proof-of-concept to sell you on the product, not really to test if the product works for your particular project. There’s also such a thing as “Resume-Driven Development”. This is where developers choose a technology not because they think it’s what’s best for their project, but because experience in the technology will look good in their resume. They’ll use the technology for a while, put it in their resume, and look for a better job. You’re now stuck with a technology that may or may not be the best choice. Oh, have I seen this several times in organizations where their code base is a mess. Architectural decisions should be based on tests, done by the team, and specific to the problems to be solved.In Agile, we build simple prototypes, called “Spike Solutions”, to resolve technical questions. These Spikes can be subjected to performance tests and other evaluations of suitability. Certain user stories or scenarios can be selected and implemented as a full stack using the technologies in question, and then subjected to various tests and other evaluations. User Interface High-level themes and layouts for the user interface of a system should be decided early on, for the purposes of consistency in the user interface. This is evolved and detailed based on feedback from the customer, with each iteration. Wrap-Up Good design, especially object-oriented design, is core to Agile. As such, some upfront thought needs to be given design to set the team in the right direction. Agile just emphasizes simplicity and evidence in design decisions.Reference: Agile Myth #6: “Agile Means No Upfront Design” from our JCG partner Calen Legaspi at the Calen Legaspi blog....

Why NULL is Bad?

A simple example of NULL usage in Java: public Employee getByName(String name) { int id = database.find(name); if (id == 0) { return null; } return new Employee(id); } What is wrong with this method? It may return NULL instead of an object — that’s what is wrong. NULL is a terrible practice in an object-oriented paradigm and should be avoided at all costs. There have been a number of opinions about this published already, including Null References, The Billion Dollar Mistake presentation by Tony Hoare and the entire Object Thinking book by David West. Here, I’ll try to summarize all the arguments and show examples of how NULL usage can be avoided and replaced with proper object-oriented constructs. Basically, there are two possible alternatives to NULL. The first one is the Null Object design pattern (the best way is to make it a constant): public Employee getByName(String name) { int id = database.find(name); if (id == 0) { return Employee.NOBODY; } return new Employee(id); } The second possible alternative is to fail fast by throwing an Exception when you can’t return an object: public Employee getByName(String name) { int id = database.find(name); if (id == 0) { throw new EmployeeNotFoundException(name); } return new Employee(id); } Now, let’s see the arguments against NULL. Besides Tony Hoare’s presentation and David West’s book mentioned above, I read these publications before writing this post: Clean Code by Robert Martin, Code Complete by Steve McConnell, Say “No” to “Null” by John Sonmez, Is returning null bad design? discussion at StackOverflow. Ad-hoc Error Handling Every time you get an object as an input you must check whether it is NULL or a valid object reference. If you forget to check, a NullPointerException (NPE) may break execution at runtime. Thus, your logic becomes polluted with multiple checks and if/then/else forks: // this is a terrible design, don't reuse Employee employee = dept.getByName("Jeffrey"); if (employee == null) { System.out.println("can't find an employee"); System.exit(-1); } else { employee.transferTo(dept2); } This is how exceptional situations are supposed to be handled in C and other imperative procedural languages. OOP introduced exception handling primarily to get rid of these ad-hoc error handling blocks. In OOP, we let exceptions bubble up until they reach an application-wide error handler and our code becomes much cleaner and shorter: dept.getByName("Jeffrey").transferTo(dept2); Consider NULL references an inheritance from procedural programming, and use 1) Null Objects or 2) Exceptions instead. Ambiguous Semantics In order to explicitly convey its meaning, the function getByName() has to be named getByNameOrNullIfNotFound(). The same should happen with every function that returns an object or NULL. Otherwise, ambiguity is inevitable for a code reader. Thus, to keep the semantics unambiguous, you should give longer names to functions. To get rid of this ambiguity, always return a real object, a null object or throw an exception. Some may argue that we sometimes have to return NULL, for the sake of performance. For example, method get() of interface Map in Java returns NULL when there is no such item in the map: Employee employee = employees.get("Jeffrey"); if (employee == null) { throw new EmployeeNotFoundException(); } return employee; This code searches the map only once due to the usage of NULL in Map.
If we were to refactor Map so that its method get() throws an exception if nothing is found, our code would look like this: if (!employees.containsKey("Jeffrey")) { // first search throw new EmployeeNotFoundException(); } return employees.get("Jeffrey"); // second search Obviously, this method is twice as slow as the first one. What to do? The Map interface (no offense to its authors) has a design flaw. Its method get() should have been returning an Iterator so that our code would look like: Iterator found = Map.search("Jeffrey"); if (!found.hasNext()) { throw new EmployeeNotFoundException(); } return found.next(); BTW, that is exactly how the C++ STL map::find() method is designed. Computer Thinking vs. Object Thinking The statement if (employee == null) is understood by someone who knows that an object in Java is a pointer to a data structure and that NULL is a pointer to nothing (0x00000000, in Intel x86 processors). However, if you start thinking as an object, this statement makes much less sense. This is how our code looks from an object point of view: - Hello, is this the software department? - Yes. - Let me talk to your employee "Jeffrey" please. - Hold the line please... - Hello. - Are you NULL? The last question in this conversation sounds weird, doesn’t it? Instead, if they hang up the phone after our request to speak to Jeffrey, that causes a problem for us (Exception). At that point, we try to call again or inform our supervisor that we can’t reach Jeffrey and complete a bigger transaction. Alternatively, they may let us speak to another person, who is not Jeffrey, but who can help with most of our questions or refuse to help if we need something “Jeffrey specific” (Null Object). Slow Failing Instead of failing fast, the code above attempts to die slowly, killing others on its way. Instead of letting everyone know that something went wrong and that exception handling should start immediately, it is hiding this failure from its client. This argument is close to the “ad-hoc error handling” discussed above. It is a good practice to make your code as fragile as possible, letting it break when necessary. Make your methods extremely demanding as to the data they manipulate. Let them complain by throwing exceptions if the data provided is not sufficient or simply doesn’t fit with the main usage scenario of the method. Otherwise, return a Null Object that exposes some common behavior and throws exceptions on all other calls: public Employee getByName(String name) { int id = database.find(name); Employee employee; if (id == 0) { employee = new Employee() { @Override public String name() { return "anonymous"; } @Override public void transferTo(Department dept) { throw new AnonymousEmployeeException( "I can't be transferred, I'm anonymous" ); } }; } else { employee = new Employee(id); } return employee; } Mutable and Incomplete Objects In general, it is highly recommended to design objects with immutability in mind. This means that an object gets all necessary knowledge during its instantiation and never changes its state during its entire lifecycle. Very often, NULL values are used in lazy loading, to make objects incomplete and mutable. For example: public class Department { private Employee found = null; public synchronized Employee manager() { if (this.found == null) { this.found = new Employee("Jeffrey"); } return this.found; } } This technology, although widely used, is an anti-pattern in OOP.
Mostly because it makes an object responsible for performance problems of the computational platform, which is something an Employee object should not be aware of. Instead of managing a state and exposing its business-relevant behavior, an object has to take care of the caching of its own results — this is what lazy loading is about. Caching is not something an employee does in the office, does he? The solution? Don’t use lazy loading in such a primitive way, as in the example above. Instead, move this caching problem to another layer of your application. For example, in Java, you can use aspect-oriented programming aspects. For example, jcabi-aspects has @Cacheable annotation that caches the value returned by a method: import com.jcabi.aspects.Cacheable; public class Department { @Cacheable(forever = true) public Employee manager() { return new Employee("Jacky Brown"); } } I hope this analysis was convincing enough that you will stop NULL-ing your code! Related Posts You may also find these posts interesting:Typical Mistakes in Java Code OOP Alternative to Utility Classes Avoid String Concatenation Objects Should Be ImmutableReference: Why NULL is Bad? from our JCG partner Yegor Bugayenko at the About Programming blog....

OOP Alternative to Utility Classes

A utility class (aka helper class) is a “structure” that has only static methods and encapsulates no state. StringUtils, IOUtils and FileUtils from Apache Commons; Iterables and Iterators from Guava; and Files from JDK7 are perfect examples of utility classes. This design idea is very popular in the Java world (as well as in C#, Ruby, etc.) because utility classes provide common functionality used everywhere. Here, we want to follow the DRY principle and avoid duplication. Therefore, we place common code blocks into utility classes and reuse them when necessary: // This is a terrible design, don't reuse public class NumberUtils { public static int max(int a, int b) { return a > b ? a : b; } } Indeed, this is a very convenient technique!? Utility Classes Are Evil However, in an object-oriented world, utility classes are considered a very bad (some may even say “terrible”) practice. There have been many discussions of this subject; to name a few: Are Helper Classes Evil? by Nick Malik, Why helper, singletons and utility classes are mostly bad by Simon Hart, Avoiding Utility Classes by Marshal Ward, Kill That Util Class! by Dhaval Dalal, Helper Classes Are A Code Smell by Rob Bagby. Additionally, there are a few questions on StackExchange about utility classes: If a “Utilities” class is evil, where do I put my generic code?, Utility Classes are Evil. A dry summary of all their arguments is that utility classes are not proper objects; therefore, they don’t fit into the object-oriented world. They were inherited from procedural programming, mostly because most of us were used to a functional decomposition paradigm back then. Assuming you agree with the arguments and want to stop using utility classes, I’ll show by example how these creatures can be replaced with proper objects. Procedural Example Say, for instance, you want to read a text file, split it into lines, trim every line and then save the results in another file. This can be done with FileUtils from Apache Commons: void transform(File in, File out) { Collection<String> src = FileUtils.readLines(in, "UTF-8"); Collection<String> dest = new ArrayList<>(src.size()); for (String line : src) { dest.add(line.trim()); } FileUtils.writeLines(out, dest, "UTF-8"); } The above code may look clean; however, this is procedural programming, not object-oriented. We are manipulating data (bytes and bits) and explicitly instructing the computer from where to retrieve them and then where to put them on every single line of code. We’re defining a procedure of execution. Object-Oriented Alternative In an object-oriented paradigm, we should instantiate and compose objects, thus letting them manage data when and how they desire. Instead of calling supplementary static functions, we should create objects that are capable of exposing the behaviour we are seeking: public class Max implements Number { private final int a; private final int b; public Max(int x, int y) { this.a = x; this.b = y; } @Override public int intValue() { return this.a > this.b ? this.a : this.b; } } This procedural call: int max = NumberUtils.max(10, 5); Will become object-oriented: int max = new Max(10, 5).intValue(); Potato, potato?
Not really; just read on… Objects Instead of Data Structures This is how I would design the same file-transforming functionality as above but in an object-oriented manner: void transform(File in, File out) { Collection<String> src = new Trimmed( new FileLines(new UnicodeFile(in)) ); Collection<String> dest = new FileLines( new UnicodeFile(out) ); dest.addAll(src); } FileLines implements Collection<String> and encapsulates all file reading and writing operations. An instance of FileLines behaves exactly as a collection of strings and hides all I/O operations. When we iterate it — a file is being read. When we addAll() to it — a file is being written. Trimmed also implements Collection<String> and encapsulates a collection of strings (Decorator pattern). Every time the next line is retrieved, it gets trimmed. All classes taking participation in the snippet are rather small: Trimmed, FileLines, and UnicodeFile. Each of them is responsible for its own single feature, thus following perfectly the single responsibility principle. On our side, as users of the library, this may be not so important, but for their developers it is an imperative. It is much easier to develop, maintain and unit-test class FileLines rather than using a readLines() method in a 80+ methods and 3000 lines utility class FileUtils. Seriously, look at its source code. An object-oriented approach enables lazy execution. The in file is not read until its data is required. If we fail to open out due to some I/O error, the first file won’t even be touched. The whole show starts only after we call addAll(). All lines in the second snippet, except the last one, instantiate and compose smaller objects into bigger ones. This object composition is rather cheap for the CPU since it doesn’t cause any data transformations. Besides that, it is obvious that the second script runs in O(1) space, while the first one executes in O(n). This is the consequence of our procedural approach to data in the first script. In an object-oriented world, there is no data; there are only objects and their behavior! Related Posts You may also find these posts interesting:Why NULL is Bad? Avoid String Concatenation Objects Should Be Immutable Typical Mistakes in Java CodeReference: OOP Alternative to Utility Classes from our JCG partner Yegor Bugayenko at the About Programming blog....