Cost, Value & Investment: How Much Will This Project Cost? Part 1

I’ve said before that you cannot use capacity planning for the project portfolio. I also said that managers often want to know how much the project will cost. Why? Because businesses have to manage costs. No one can have runaway projects. That is fair. If you use an agile or incremental approach to your projects, you have options. You don’t have to have runaway projects. Here are two better questions:

How much do you want to invest before we stop?
How much value is this project or program worth to you?

You need to think about cost, value, and investment, not just cost, when you think about the project portfolio. If you think only about cost, you miss the potentially great projects and features. However, no business exists without managing costs. In fact, the cost question might be critical to your business. If you proceed without thinking about cost, you might doom your business. Why do you want to know about cost? Do you have a contract? Does the customer need to know? A cost-bound contract is a good reason. (If you have other reasons for needing to know cost, let me know. I am serious when I say you need to evaluate the project portfolio on value, not on cost.)

You Have a Cost-Bound Contract

I’ve talked before about providing date ranges or confidence ranges with estimates. It all depends on why you need to know. If you are trying to stay within your predicted cost-bound contract, you need a ranked backlog. If you are part of a program, everyone needs to see the roadmap. Everyone needs to see the product backlog burnup charts. You’d better have feature teams so you can create features. If you don’t, you’d better have small features. Why? You can manage the interdependencies among and between teams more easily with small features and with a detailed roadmap. The larger the program, the smaller you want the batch size to be. Otherwise, you will waste a lot of money very fast. (The teams will create components and get caught in integration hell. That wastes money.)

Your Customer Wants to Know How Much the Project Will Cost

Why does your customer want to know? Do you have payments based on interim deliverables? If the customer needs to know, you want to build trust by delivering value, so the customer trusts you over time. If the customer wants to contain his or her costs, you want to work by feature, delivering value. You want to share the roadmap, delivering value. You want to know what value the estimate has for your customer. You can provide an estimate for your customer, as long as you know why your customer needs it. Some of you think I’m being perverse, and that I’m not being helpful by not saying what you could do to provide an estimate. Okay, in Part 2, I will suggest how you could take an order-of-magnitude approach to estimating a program.

Reference: Cost, Value & Investment: How Much Will This Project Cost? Part 1 from our JCG partner Johanna Rothman at the Managing Product Development blog.

Job Search Trap: I Can Network Only By Computer

I gave a talk at a networking group recently about Manage Your Job Search. When the members checked in at the beginning, they gave themselves points for their activity the week before. They got only one point for applying for a job. They got 15 points for going on an informational interview, and 15 points for networking at an event. I loved it. When they went out to meet people, they got more points. Meeting people, in person, is key to a successful job search. Why? It’s all about the loose connection. Loose connections are how you will find people to introduce you to people who will help you meet people on your target list. Loose connections will say, “Oh, I heard about that developer job (or tester job or project manager job or engineering job or whatever job) in that company last week. Here is the name I know.” You can search job boards. It’s difficult, time-demanding, and the job descriptions are shopping lists/laundry lists of jargon, ridiculous numbers of years of technical skills, and something masquerading as cultural fit. What passes for job descriptions these days is a horror show. The descriptions are written for the ATS, not for the people. If you’re looking for a job, go meet people. I know this might be the hardest thing you’ve ever done, if you are a technical person. Even if you are extroverted, walking into a group of people where you don’t know anyone? Oh boy. Not the way you want to spend an evening, is it? I have many tips about networking for shy people in Manage Your Job Search. Here are three:

Find someone to go to a meeting with. That way you have a familiar face as a backup.

Decide to meet just two or three people. You do not have to meet everyone in the room. If you have an in-depth conversation with those two or three people, that might be enough. You can always meet more people. Start small.

If this is a dinner meeting, sit with people you don’t know. If you are with your friend, sit at opposite sides of the table. At a table of eight, space yourselves four apart. That way, you each get to talk to a different two or three people. If it’s not a dinner meeting, talk to someone for 5-10 minutes, enough to get to know a bit about them. If the conversation lags, you can say, “Thank you, I’ll refresh my soda now.” You can get more club soda or whatever. What, you thought you would drink an alcoholic drink while you were looking for a job? Start with a clear head. At the end of the evening, feel free to indulge. This way, you always have a way to end the conversation. Because what goes in must go out, too.

Don’t sit behind your computer and network that way. Go out and meet people. Your job search will be more productive and faster because of it.

Reference: Job Search Trap: I Can Network Only By Computer from our JCG partner Johanna Rothman at the Managing Product Development blog.

How To Waste Estimations

We like numbers because of their symbolic simplicity. In other words, we get them. They give us certainty, and therefore confidence. Which sounds more trustworthy: “It will take 5 months” or “It will take between 4 to 6 months”? The first one sounds more confident, and therefore we trust it more. After all, why don’t you give me a simple number, if you’re not really sure about what you’re doing? Of course, if you knew where the numbers came from, you might not be that confident.

Simple and Wrong

In our story, the 5-month estimate was not the first one given. The team deliberated and considered, and decided that in order to deliver a working, tested feature, they would need 8 months. “But our competitors can do it in 4 months,” shouted the manager. “Well, maybe they can. Or maybe they will do it in half the time, with half the quality,” said the team. “It doesn’t matter, because they will get the contract!” The manager got angrier. The team had nothing to say. They left the room for half an hour. Then the team lead came back and said very softly: “We’ll do it in 5 months.”

Wastimates

The project plan said 5 months. No one remembered the elaborate discussion that went down the toilet. Estimates are requested for making decisions. In our story, the decision was to overrule them. There isn’t really a way of knowing who was right. Many companies thrived by having a crappy v1 out, and then fixing things in v2. They might not have gotten to v2 if they had done v1 “properly”. Then again, it might be that they bled talent, because the developers were unhappy, or not willing to sacrifice quality. Who knows. The point is: the effort put into estimation should be small enough to provide the numbers to management. If the team had reached the “8 months” answer after half a day, rather than after dissecting the project dependencies up front for 2 weeks, they would have had more time to work on the actual project. By the way, they might not get the go-ahead. And then they would work 2 weeks more on other projects. That’s good too. Don’t waste time getting to good-enough estimates. Estimate and move on to real work.

PS: Based on a true story.

Reference: How To Waste Estimations from our JCG partner Gil Zilberfeld at the Geek Out of Water blog.

Java 9 – The Ultimate Feature List

This post will be updated with new features targeted at the upcoming Java 9 release (last updated: 9/9/2014). The OpenJDK development is picking up speed: after the Java 8 launch in March 2014, we’re expecting to enter a 2-year release cycle. Java 9 will reportedly be released in 2016, and an early list of JEPs (JDK Enhancement Proposals) that target the release has already been published. Moreover, some JSRs (Java Specification Requests) are already being worked on, and we’ve also added a hint of other features that might be included. The flagship features are the Jigsaw project, significant performance improvements, and long-awaited APIs including: Process API updates, JSON as part of java.util, and a money handling API. For those of you who want to be on the bleeding edge, JDK 9 early access builds are already available here. In this post we’ll keep updating around the main new features for Java 9 and what they’re all about. So stay tuned for additional updates!

Table of contents

[Accepted] Project Jigsaw – Modular Source Code
[Accepted] Process API Updates
[Accepted] Light-Weight JSON API
[Accepted] Money and Currency API
[Accepted] Improved Contended Locking
[Accepted] Segmented Code Cache
[Accepted] Smart Java Compilation – Phase Two
[Expected] HTTP 2 Client
[Expected] REPL in Java
Where do new features come from?

Accepted features

1. Project Jigsaw – Modular Source Code

Project Jigsaw’s goal is to make Java modular and break the JRE into interoperable components, one of the most hyped features for Java 9. This JEP is the first of four steps towards Jigsaw and will not change the actual structure of the JRE and JDK. The purpose of this step is to reorganize the JDK source code into modules, enhance the build system to compile modules, and enforce module boundaries at build time. The project was originally intended for Java 8 but was delayed and retargeted at Java 9.
Once it’s finished, it would allow creating a scaled-down runtime jar (rt.jar) customised to the components a project actually needs. The JDK 7 and JDK 8 rt.jars have about 20,000 classes that are part of the JDK even if many of them aren’t really being used in a specific environment (although a partial solution is included in the Java 8 compact profiles feature). The motivation behind this is to make Java easily scalable to small computing devices (Internet of Things), improve security and performance, and make it easier for developers to construct and maintain libraries. More about JEP 201.

2. Process API Updates

So far there has been only a limited ability to control and manage operating system processes with Java. For example, in order to do something as simple as get your process PID today, you would need to either access native code or use some sort of workaround. More than that, it would require a different implementation for each platform to guarantee you’re getting the right result. In Java 9, expect the code for retrieving Linux PIDs, which now looks like this:

public static void main(String[] args) throws Exception {
    Process proc = Runtime.getRuntime().exec(
        new String[]{ "/bin/sh", "-c", "echo $PPID" });
    if (proc.waitFor() == 0) {
        InputStream in = proc.getInputStream();
        int available = in.available();
        byte[] outputBytes = new byte[available];
        in.read(outputBytes);
        String pid = new String(outputBytes);
        System.out.println("Your pid is " + pid);
    }
}

to turn into something like this (that also supports all operating systems):

System.out.println("Your pid is " + Process.getCurrentPid());

The update will extend Java’s ability to interact with the operating system: new direct methods to handle PIDs, process names and states, the ability to enumerate JVMs and processes, and more. More about JEP 102.

3. Light-Weight JSON API

Currently there are alternatives available for handling JSON in Java. What’s unique about this API is that it would be part of the language, lightweight, and would use the new capabilities of Java 8. And it will be delivered right through java.util (unlike JSR 353, which uses an external package, or other alternatives). ** Code samples coming soon! More about JEP 198.

4. Money and Currency API

After the new Date and Time API introduced in Java 8, Java 9 brings with it a new and official API for representing, transporting, and performing comprehensive calculations with money and currency. To find out more about the project, you can visit JavaMoney on Github. Code and usage examples are already available right here. Here are a few highlights:

Money amt1 = Money.of(10.1234556123456789, "USD"); // Money is a BigDecimal
FastMoney amt2 = FastMoney.of(123456789, "USD");   // FastMoney is up to 5 decimal places
Money total = amt1.add(amt2);

The new money types: Money & FastMoney.

MonetaryAmountFormat germanFormat = MonetaryFormats.getAmountFormat(Locale.GERMANY);
System.out.println(germanFormat.format(monetaryAmount)); // 1.202,12 USD

Formatting money according to different countries. More about JSR 354.

5. Improved Contended Locking

Lock contention is a performance bottleneck for many multithreaded Java applications. The enhancement proposal looks into improving the performance of Java object monitors as measured by different benchmarks. One of these tests is Volano. It simulates a chat server with huge thread counts and client connections, many of them trying to access the same resources, simulating a heavy-duty real-world application. These kinds of stress tests push JVMs to the limit and try to determine the maximum throughput they can achieve, usually in terms of messages per second. The ambitious success metric for this JEP is a significant improvement over 22 different benchmarks.
If the effort succeeds, these performance improvements will roll out in Java 9. More about JEP 143.

6. Segmented Code Cache

Another performance improvement for Java 9 is coming from the JIT compiler angle. When certain areas of code are executed frequently, the VM compiles them to native code and stores them in the code cache. This update looks into segmenting the code cache into different areas of compiled code in order to improve the compiler’s performance. Instead of a single area, the code cache will be segmented into three by the code’s lifetime in the cache:

Code that will stay in the cache forever (JVM internal / non-method code)
Short lifetime (profiled code, specific to a certain set of conditions)
Potentially long lifetime (non-profiled code)

The segmentation will allow for several performance improvements. For example, the method sweeper will be able to skip non-method code and act faster. More about JEP 197.

7. Smart Java Compilation, Phase Two

The Smart Java Compilation tool, or sjavac, was first worked on around JEP 139 in order to improve JDK build speeds by having the javac compiler run on all cores. With JEP 199, it enters Phase Two, where it will be improved and generalized so that it can be used by default and build projects other than the JDK. More about JEP 199.

What else to expect?

8. HTTP 2 Client

HTTP 2.0 hasn’t been released as a standard yet, but it will be submitted for final review soon and is expected to be finalized before the release of Java 9. JEP 110 will define and implement a new HTTP client for Java that will replace HttpURLConnection, and also implement HTTP 2.0 and websockets. It hasn’t been published as an accepted JEP yet, but it’s targeting Java 9 and we expect it to be included. The official HTTP 2.0 RFC release date is currently set to February 2015, building on top of Google’s SPDY algorithm.
SPDY has already shown great speed improvements over HTTP 1.1, ranging from 11.81% to 47.7%, and its implementation already exists in most modern browsers. More about JEP 110.

9. Project Kulla – REPL in Java

Recently announced, a bit unlikely to hit Java 9, but it might make it on time with a targeted integration date set for April 2015. Today there’s no “native” Java way to REPL (Read-Eval-Print-Loop). Meaning, if you want to run a few lines of Java to check them out quickly on their own, you have to wrap it all in a separate project or method. There are REPL add-ons for popular IDEs and some other solutions like Java REPL, but no official way to do this so far; Project Kulla might be the answer. More about Project Kulla.

Bonus: Where do new features come from?

JEPs and JSRs don’t usually pop out of nowhere; here’s the structure that holds it all together:

Groups – Individuals and organisations with a mutual interest around a broad subject or a specific body of code. Some examples are Security, Networking, Swing, and HotSpot.

Projects – Efforts to produce a body of code, documentation or other effort. Must be sponsored by at least one group. Recent examples are Project Lambda, Project Jigsaw, and Project Sumatra.

JDK Enhancement Proposal (JEP) – Allows promoting a new specification informally before or in parallel to the JCP, when further exploration is needed. Accepted JEPs become a part of the JDK roadmap and are assigned a version number.

Java Specification Request (JSR) – The actual specification of the feature happens in this stage. It can come either through Groups/Projects, JEPs or from individual JCP (Java Community Process) members. An umbrella JSR is usually opened for each Java version; this has yet to happen with Java 9. Individual members of the community can also propose new Java specification requests.

Reference: Java 9 – The Ultimate Feature List from our JCG partner Alex Zhitnitsky at the Takipi blog.

Simple Java SSH Client

An execution of a shell command via SSH can be done in Java, in just a few lines, using jcabi-ssh:

String hello = new Shell.Plain(
  new SSH(
    "ssh.example.com", 22,
    "yegor", "-----BEGIN RSA PRIVATE KEY-----..."
  )
).exec("echo 'Hello, world!'");

jcabi-ssh is a convenient wrapper of JSch, a well-known pure Java implementation of SSH2. Here is a more complex scenario, where I upload a file via SSH and then read back its grepped content:

Shell shell = new SSH(
  "ssh.example.com", 22,
  "yegor", "-----BEGIN RSA PRIVATE KEY-----..."
);
File file = new File("/tmp/data.txt");
new Shell.Safe(shell).exec(
  "cat > d.txt && grep 'some text' d.txt",
  new FileInputStream(file),
  Logger.stream(Level.INFO, this),
  Logger.stream(Level.WARNING, this)
);

Class SSH, which implements interface Shell, has only one method, exec. This method accepts four arguments:

interface Shell {
  int exec(
    String cmd,
    InputStream stdin,
    OutputStream stdout,
    OutputStream stderr
  );
}

I think it’s obvious what these arguments are about. There are also a few convenient decorators that make it easier to work with simple commands.

Shell.Safe

Shell.Safe decorates an instance of Shell and throws an exception if the exec exit code is not equal to zero. This may be very useful when you want to make sure that your command executed successfully, but don’t want to duplicate if/throw in many places of your code.

Shell ssh = new Shell.Safe(
  new SSH(
    "ssh.example.com", 22,
    "yegor", "-----BEGIN RSA PRIVATE KEY-----..."
  )
);

Shell.Verbose

Shell.Verbose decorates an instance of Shell and copies stdout and stderr to the slf4j logging facility (using jcabi-log). Of course, you can combine decorators, for example:

Shell ssh = new Shell.Verbose(
  new Shell.Safe(
    new SSH(
      "ssh.example.com", 22,
      "yegor", "-----BEGIN RSA PRIVATE KEY-----..."
    )
  )
);

Shell.Plain

Shell.Plain is a wrapper of Shell that introduces a new exec method with only one argument, a command to execute. It also doesn’t return an exit code, but stdout instead. This is very convenient when you want to execute a simple command and just get its output (I’m combining it with Shell.Safe for safety):

String login = new Shell.Plain(new Shell.Safe(ssh)).exec("whoami");

Download

You need a single dependency, jcabi-ssh.jar, in your Maven project (get its latest version in Maven Central):

<dependency>
  <groupId>com.jcabi</groupId>
  <artifactId>jcabi-ssh</artifactId>
</dependency>

The project is on GitHub. If you have any problems, just submit an issue. I’ll try to help.

Related Posts

You may also find these posts interesting:

Fluent JDBC Decorator
How to Retry Java Method Call on Exception
Cache Java Method Results
How to Read MANIFEST.MF Files
Java Method Logging with AOP and Annotations

Reference: Simple Java SSH Client from our JCG partner Yegor Bugayenko at the About Programming blog.

Getting Started with Gradle: Creating a Binary Distribution

After we have created a useful application, the odds are that we want to share it with other people. One way to do this is to create a binary distribution that can be downloaded from our website. This blog post describes how we can create a binary distribution that fulfils the following requirements:

Our binary distribution must not use the so-called “fat jar” approach. In other words, the dependencies of our application must not be packaged into the same jar file as our application.
Our binary distribution must contain startup scripts for *nix and Windows operating systems.
The root directory of our binary distribution must contain the license of our application.

Let’s get started.

Additional reading:

Getting Started with Gradle: Introduction helps you to install Gradle, describes the basic concepts of a Gradle build, and describes how you can add functionality to your build by using Gradle plugins.
Getting Started with Gradle: Our First Java Project describes how you can create a Java project by using Gradle and package your application into an executable jar file.
Getting Started with Gradle: Dependency Management describes how you can manage the dependencies of your Gradle project.

Creating a Binary Distribution

The application plugin is a Gradle plugin that allows us to run our application, install it, and create a binary distribution that doesn’t use the “fat jar” approach. We can create a binary distribution by making the following changes to the build.gradle file of the example application that we created during the previous part of my Getting Started with Gradle tutorial:

Remove the configuration of the jar task.
Apply the application plugin to our project.
Configure the main class of our application by setting the value of the mainClassName property.

After we have made these changes to our build.gradle file, it looks as follows (the relevant parts are highlighted):

apply plugin: 'application'
apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    compile 'log4j:log4j:1.2.17'
    testCompile 'junit:junit:4.11'
}

mainClassName = 'net.petrikainulainen.gradle.HelloWorld'

The application plugin adds five tasks to our project:

The run task starts the application.
The startScripts task creates startup scripts in the build/scripts directory. This task creates startup scripts for Windows and *nix operating systems.
The installApp task installs the application into the build/install/[project name] directory.
The distZip task creates the binary distribution and packages it into a zip file in the build/distributions directory.
The distTar task creates the binary distribution and packages it into a tar file in the build/distributions directory.

We can create a binary distribution by running one of the following commands in the root directory of our project: gradle distZip or gradle distTar. If we create a binary distribution that is packaged into a zip file, we see the following output:

> gradle distZip
:compileJava
:processResources
:classes
:jar
:startScripts
:distZip

BUILD SUCCESSFUL

Total time: 4.679 secs

If we unpackage the binary distribution created by the application plugin, we get the following directory structure:

The bin directory contains the startup scripts.
The lib directory contains the jar file of our application and its dependencies.

You can get more information about the application plugin by reading Chapter 45. The Application Plugin of the Gradle User’s Guide. We can now create a binary distribution that fulfils almost all of our requirements. However, we still need to add the license of our application to the root directory of our binary distribution.
Let’s move on and find out how we can do it.

Adding the License File of Our Application to the Binary Distribution

We can add the license of our application to our binary distribution by following these steps:

Create a task that copies the license file from the root directory of our project to the build directory.
Add the license file to the root directory of the created binary distribution.

Let’s move on and take a closer look at these steps.

Copying the License File to the Build Directory

The name of the file that contains the license of our application is LICENSE, and it is found in the root directory of our project. We can copy the license file to the build directory by following these steps:

Create a new Copy task called copyLicense.
Configure the source file by using the from() method of the CopySpec interface. Pass the string 'LICENSE' as a method parameter.
Configure the target directory by using the into() method of the CopySpec interface. Pass the value of the $buildDir property as a method parameter.

After we have followed these steps, our build.gradle file looks as follows (the relevant part is highlighted):

apply plugin: 'application'
apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    compile 'log4j:log4j:1.2.17'
    testCompile 'junit:junit:4.11'
}

mainClassName = 'net.petrikainulainen.gradle.HelloWorld'

task copyLicense(type: Copy) {
    from "LICENSE"
    into "$buildDir"
}

Additional Information:

The API Documentation of the Copy task
Section 16.6 Copying files of the Gradle User’s Guide

We have now created a task that copies the LICENSE file from the root directory of our project to the build directory.
However, when we run the command gradle distZip in the root directory of our project, we see the following output:

> gradle distZip
:compileJava
:processResources
:classes
:jar
:startScripts
:distZip

BUILD SUCCESSFUL

Total time: 4.679 secs

In other words, our new task is not invoked, and this naturally means that the license file is not included in our binary distribution. Let’s fix this problem.

Adding the License File to the Binary Distribution

We can add the license file to the created binary distribution by following these steps:

Transform the copyLicense task from a Copy task into a “regular” Gradle task by removing the string '(type: Copy)' from its declaration.
Modify the implementation of the copyLicense task: configure its output by creating a new File object that points to the license file in the build directory and setting it as the value of the outputs.file property, and then copy the license file from the root directory of our project to the build directory.

The application plugin sets a CopySpec property called applicationDistribution on our project. We can use it to include the license file in the created binary distribution. We can do this by following these steps:

Configure the location of the license file by using the from() method of the CopySpec interface and pass the output of the copyLicense task as a method parameter.
Configure the target directory by using the into() method of the CopySpec interface and pass an empty String as a method parameter.

After we have followed these steps, our build.gradle file looks as follows (the relevant part is highlighted):

apply plugin: 'application'
apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    compile 'log4j:log4j:1.2.17'
    testCompile 'junit:junit:4.11'
}

mainClassName = 'net.petrikainulainen.gradle.HelloWorld'

task copyLicense {
    outputs.file new File("$buildDir/LICENSE")
    doLast {
        copy {
            from "LICENSE"
            into "$buildDir"
        }
    }
}

applicationDistribution.from(copyLicense) {
    into ""
}

Additional Reading:

The API Documentation of the doLast() action of the Task
Section 45.5 Including other resources in the distribution of the Gradle User’s Guide
The Groovydoc of the ApplicationPluginConvention class

When we run the command gradle distZip in the root directory of our project, we see the following output:

> gradle distZip
:copyLicense
:compileJava
:processResources
:classes
:jar
:startScripts
:distZip

BUILD SUCCESSFUL

Total time: 5.594 secs

As we can see, the copyLicense task is now invoked, and if we unpackage our binary distribution, we notice that the LICENSE file is found in its root directory. Let’s move on and summarize what we have learned from this blog post.

Summary

This blog post taught us three things:

We learned that we can create a binary distribution by using the application plugin.
We learned how we can copy a file from the source directory to the target directory by using the Copy task.
We learned how we can add files to the binary distribution that is created by the application plugin.

If you want to play around with the example application of this blog post, you can get it from Github.

Reference: Getting Started with Gradle: Creating a Binary Distribution from our JCG partner Petri Kainulainen at the Petri Kainulainen blog.

Stateless Session for multi-tenant application using Spring Security

Once upon a time, I published an article explaining the principles behind building a Stateless Session. Coincidentally, we are working on the same task again, but this time for a multi-tenant application. This time, instead of building the authentication mechanism ourselves, we integrated our solution into the Spring Security framework. This article will explain our approach and implementation.

Business Requirement

We need to build an authentication mechanism for a SaaS application. Each customer accesses the application through a dedicated sub-domain. Because the application will be deployed on the cloud, it is pretty obvious that Stateless Session is the preferred choice, because it allows us to deploy additional instances without hassle. In the project glossary, each customer is one site and each application is one app. For example, a site may be Microsoft or Google. An app may be Gmail, GooglePlus or Google Drive. The sub-domain that a user uses to access the application will include both app and site. For example, it may look like microsoft.mail.somedomain.com or google.map.somedomain.com. Once a user logs in to one app, he can access any other apps, as long as they are for the same site. The session will time out after a certain inactive period.

Background

Stateless Session

A stateless application with timeout is nothing new. The Play framework has been stateless from its first release in 2007. We also switched to Stateless Session many years ago. The benefit is pretty clear: your load balancer does not need stickiness and is therefore easier to configure. As the session is on the browser, we can simply bring in new servers to boost capacity immediately. However, the disadvantage is that your session can no longer be very big or very confidential. Compared to a stateful application, where the session is stored on the server, a stateless application stores the session in an HTTP cookie, which cannot grow beyond 4KB.
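Because the session lives in a cookie on the client, the server needs a way to detect tampering. A minimal sketch of one common approach, signing the cookie payload with an HMAC, is shown below. This is an illustration under my own assumptions: the SessionCodec class, its method names, and the payload layout are all hypothetical, not part of Spring Security or of the implementation described in this article.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Hypothetical sketch: the stateless session cookie is "payload.signature",
// where the signature is an HMAC over the payload with a server-side key.
// A tampered payload or signature makes verification fail.
class SessionCodec {
    private final SecretKeySpec key;

    SessionCodec(byte[] secret) {
        this.key = new SecretKeySpec(secret, "HmacSHA256");
    }

    private String sign(String payload) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(key);
            byte[] sig = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
            return Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Encode a user id, site and timestamp into a cookie value.
    String encode(String userId, String site, long timestamp) {
        String payload = Base64.getUrlEncoder().withoutPadding()
            .encodeToString((userId + "|" + site + "|" + timestamp)
                .getBytes(StandardCharsets.UTF_8));
        return payload + "." + sign(payload);
    }

    // Return the decoded payload, or null if the signature does not match.
    String decode(String cookieValue) {
        int dot = cookieValue.lastIndexOf('.');
        if (dot < 0) return null;
        String payload = cookieValue.substring(0, dot);
        String sig = cookieValue.substring(dot + 1);
        if (!sign(payload).equals(sig)) return null;
        return new String(Base64.getUrlDecoder().decode(payload),
            StandardCharsets.UTF_8);
    }
}
```

The design point is that the cookie stays small (an identity plus a timestamp, well under the 4KB limit) while the secret key never leaves the server, so any instance behind the load balancer can verify the session without shared state.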
Moreover, as it is a cookie, it is recommended that developers only store text or digits in the session rather than complicated data structures. The session is stored in the browser and transferred to the server in every single request. Therefore, we should keep the session as small as possible and avoid placing any confidential data in it. To put it short, a stateless session forces developers to change the way the application uses the session. It should be a user identity rather than a convenient store.

Security Framework

The idea behind a security framework is pretty simple: it helps to identify the principal executing the code, checks whether he has permission to execute some services, and throws exceptions if he does not. In terms of implementation, a security framework integrates with your services in an AOP-style architecture. Every check is done by the framework before the method call. The mechanism for implementing the permission check may be a filter or a proxy. Normally, a security framework stores principal information in thread storage (a ThreadLocal in Java). That is why it can give developers static method access to the principal at any time. I think this is something developers should know well; otherwise, they may implement a permission check, or try to retrieve the principal, in a background job that runs in a separate thread. In that situation, it is obvious that the security framework will not be able to find the principal.

Single Sign On

Single Sign On is mostly implemented using an Authentication Server. It is independent of the mechanism used to implement the session (stateless or stateful). Each application still maintains its own session. On the first access to an application, it contacts the authentication server to authenticate the user, then creates its own session.

Food for Thought

Framework or build from scratch

As stateless session is the standard, the biggest concern for us is whether or not to use a security framework.
If we do, Spring Security is the cheapest and fastest option because we already use the Spring Framework in our application. A security framework gives us a quick, declarative way to define access rules, but those rules are not aware of business logic. For example, we can declare that only an agent can access the products, but we cannot declare that an agent may only access the products that belong to him. Here we had two choices: build our own business-logic-aware permission check from scratch, or build two layers of permission checks, one purely role-based and one aware of business logic. After comparing the two approaches, we chose the latter because it is cheaper and faster to build. Our application behaves like any other Spring Security application: a user accessing protected content without a session is redirected to the login page. If a session exists but the role check fails, the user gets status code 403; if the user has a valid role but accesses unauthorized records, he gets 401 instead. Authentication The next concern is how to integrate our authentication and authorization mechanism with Spring Security. A standard Spring Security application may process a request like below: The diagram is simplified but still gives a rough idea of how things work. If the request is a login or logout, the top two filters update the server-side session. After that, another filter checks access permissions for the request. If the permission check succeeds, yet another filter stores the user session in thread-local storage. The controller then executes with a properly set-up environment. We preferred to create our own authentication mechanism because the credentials need to contain the website domain. For example, we may have Joe from Xerox and Joe from WDS both accessing the SaaS application.
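The two-layer check can be sketched in plain Java (the types, role names and status-code mapping below are invented for illustration; the real project wires this through Spring Security):

```java
import java.util.Set;

// Illustrative two-layer permission check: a declarative, role-based layer
// first, then a business-logic-aware layer. All names are hypothetical.
public class AccessControl {
    public static final int OK = 200, FORBIDDEN = 403, UNAUTHORIZED = 401;

    // Layer 1: role-based (what a security framework gives us for free).
    static boolean hasRole(Set<String> roles, String required) {
        return roles.contains(required);
    }

    // Layer 2: business-logic aware (the part we must write ourselves).
    static boolean ownsRecord(String agentId, String recordOwnerId) {
        return agentId.equals(recordOwnerId);
    }

    public static int checkProductAccess(Set<String> roles, String agentId,
                                         String recordOwnerId) {
        if (!hasRole(roles, "AGENT")) {
            return FORBIDDEN;    // valid session, missing role -> 403
        }
        if (!ownsRecord(agentId, recordOwnerId)) {
            return UNAUTHORIZED; // valid role, unauthorized record -> 401
        }
        return OK;
    }
}
```

The point of the split is that the first check can be declared once per URL pattern, while the second needs the record itself and therefore lives next to the business logic.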
Since Spring Security takes control of preparing the authentication token and the authentication provider, we found it cheaper to implement login and logout ourselves at the controller level rather than spend effort customizing Spring Security. Because we implement a stateless session, there are two pieces of work to do here. First, we need to construct the session from the cookie before any authorization check. We also need to update the session timestamp so that the session is refreshed every time the browser sends a request to the server. The earlier decision to do authentication in the controller created a challenge: we cannot refresh the session before the controller executes, because authentication happens there. However, some controller methods are attached to a view resolver that writes to the output stream immediately, so we have no chance to refresh the cookie after the controller executes. In the end, we chose a slightly compromised solution using a HandlerInterceptorAdapter. This handler interceptor lets us do extra processing before and after each controller method: we refresh the cookie after the controller method if the method is for authentication, and before the controller method for any other purpose. The new diagram looks like this: Cookie To be meaningful, a user should have only one session cookie. As the session timestamp changes after each request, we need to update the session cookie in every single response. Per the HTTP protocol, this is only possible if the cookies match on name, path and domain. When we received this business requirement, we decided to try a new way of implementing SSO by sharing the session cookie. If every application sits under the same parent domain and understands the same session cookie, we effectively have a global session, and there is no need for an authentication server any more. To achieve that vision, we must set the cookie domain to the parent domain of all the applications.
Performance Theoretically, a stateless session should be slower. Assuming the server stores its session table in memory, passing in a JSESSIONID cookie triggers only a one-time read of the session object from the table, plus an optional one-time write to update the last access time (for calculating session timeout). In contrast, for a stateless session we need to compute a hash to validate the session cookie, load the principal from the database, assign a new timestamp, and hash again. However, with today's server performance, hashing should not add much delay to response time. The bigger concern is querying data from the database, and this we can speed up with a cache. In the best-case scenario, a stateless session can perform close to a stateful one if no DB call is made: instead of loading from a session table maintained by the container, the session is loaded from an internal cache maintained by the application. In the worst-case scenario, requests are routed to many different servers and the principal object is cached on many instances. This adds the effort of loading the principal into the cache once per server. While that cost may be high, it occurs only once in a while. If we apply sticky routing on the load balancer, we should be able to achieve the best-case performance. Viewed this way, the stateless session cookie is a mechanism similar to JSESSIONID, but with the ability to fall back and reconstruct the session object. Implementation I have published a sample of this implementation to the https://github.com/tuanngda/sgdev-blog repository. Kindly check the stateless-session project. The project requires a MySQL database, so please set up a schema following build.properties, or modify the properties file to fit your schema. The project includes a Maven configuration that starts a Tomcat server on port 8686, so you can simply type mvn cargo:run to start the server.
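The "hash to validate the session cookie" step can be sketched with the JDK's built-in HMAC support (a simplified illustration: the payload layout mirrors SessionCookieData below, but the exact signing scheme is an assumption, not the project's real one):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

// Signs and verifies a cookie payload with HMAC-SHA256. The payload
// format (userId|appId|siteId|timestamp) is hypothetical.
public class CookieSigner {
    private final SecretKeySpec key;

    public CookieSigner(byte[] secret) {
        this.key = new SecretKeySpec(secret, "HmacSHA256");
    }

    public String sign(String payload) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(key);
            byte[] sig = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
            return Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public boolean verify(String payload, String signature) {
        // Constant-time comparison to avoid timing attacks.
        return MessageDigest.isEqual(
            sign(payload).getBytes(StandardCharsets.UTF_8),
            signature.getBytes(StandardCharsets.UTF_8));
    }
}
```

On each request the server re-computes the HMAC over the cookie payload and rejects the session if it does not match, which is the extra CPU cost discussed above.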
Here is the project hierarchy: I packed both the Tomcat 7 server and the database configuration so that the project works without any installation other than MySQL. The Tomcat configuration file TOMCAT_HOME/conf/context.xml contains the DataSource declaration and the project properties file. Now, let's look closer at the implementation.

Session

We need two session objects: one represents the session cookie; the other represents the session object that we build internally in the Spring Security framework:

public class SessionCookieData {
    private int userId;
    private String appId;
    private int siteId;
    private Date timeStamp;
}

and

public class UserSession {
    private User user;
    private Site site;

    public SessionCookieData generateSessionCookieData(){
        return new SessionCookieData(user.getId(), user.getAppId(), site.getId());
    }
}

With this combo, we have the objects to hold the session in the cookie and in memory. The next step is to implement a method that allows us to build the session object from the cookie data.

public interface UserSessionService {
    public UserSession getUserSession(SessionCookieData sessionData);
}

Now, one more service to retrieve and generate the cookie from the cookie data.

public interface SessionCookieService {
    public Cookie generateSessionCookie(SessionCookieData cookieData, String domain);
    public SessionCookieData getSessionCookieData(Cookie sessionCookie);
    public Cookie generateSignCookie(Cookie sessionCookie);
}

Up to this point, we have the services that do the conversions Cookie –> SessionCookieData –> UserSession and UserSession –> SessionCookieData –> Cookie. Now we have enough material to integrate the stateless session with the Spring Security framework.

Integrate with Spring Security

First, we need to add a filter that constructs the session from the cookie. Because this must happen before the permission check, it is best to extend AbstractPreAuthenticatedProcessingFilter:

@Component(value="cookieSessionFilter")
public class CookieSessionFilter extends AbstractPreAuthenticatedProcessingFilter {
    ...
    @Override
    protected Object getPreAuthenticatedPrincipal(HttpServletRequest request) {
        SecurityContext securityContext = extractSecurityContext(request);
        if (securityContext.getAuthentication() != null
                && securityContext.getAuthentication().isAuthenticated()) {
            UserAuthentication userAuthentication = (UserAuthentication) securityContext.getAuthentication();
            UserSession session = (UserSession) userAuthentication.getDetails();
            SecurityContextHolder.setContext(securityContext);
            return session;
        }
        return new UserSession();
    }
    ...
}

The filter above constructs the principal object from the session cookie. It also creates a PreAuthenticatedAuthenticationToken that will be used later for authentication, and stores the UserSession in the SecurityContextHolder, which helps set up the environment. Obviously, Spring will not understand this principal, so we need to provide our own AuthenticationProvider that authenticates the user based on it.

public class UserAuthenticationProvider implements AuthenticationProvider {
    @Override
    public Authentication authenticate(Authentication authentication) throws AuthenticationException {
        PreAuthenticatedAuthenticationToken token = (PreAuthenticatedAuthenticationToken) authentication;
        UserSession session = (UserSession) token.getPrincipal();
        if (session != null && session.getUser() != null) {
            SecurityContext securityContext = SecurityContextHolder.getContext();
            securityContext.setAuthentication(new UserAuthentication(session));
            return new UserAuthentication(session);
        }
        throw new BadCredentialsException("Unknown user name or password");
    }
}

This is the Spring way: the user is authenticated as long as we manage to provide a valid Authentication object. In practice, we let the user log in by session cookie on every single request. However, there are times when we need to alter the user session, and we can do that as usual in a controller method by simply overwriting the SecurityContext that was set up earlier in the filter.
Because it is a pre-authentication filter, it works well for most requests, except for authentication itself. There we must update the SecurityContext manually in the authentication method:

public ModelAndView login(String login, String password, String siteCode) throws IOException {
    if (StringUtils.isEmpty(login) || StringUtils.isEmpty(password)) {
        throw new HttpServerErrorException(HttpStatus.BAD_REQUEST, "Missing login and password");
    }
    User user = authService.login(siteCode, login, password);
    if (user != null) {
        SecurityContext securityContext = SecurityContextHolder.getContext();
        UserSession userSession = new UserSession();
        userSession.setSite(user.getSite());
        userSession.setUser(user);
        securityContext.setAuthentication(new UserAuthentication(userSession));
    } else {
        throw new HttpServerErrorException(HttpStatus.UNAUTHORIZED, "Invalid login or password");
    }
    return new ModelAndView(new MappingJackson2JsonView());
}

Refresh Session

Up to now, you may have noticed that we have never mentioned writing the cookie. Provided we have a valid Authentication object and our SecurityContext contains the UserSession, we still need to send this information to the browser. Before the HttpServletResponse is generated, we must attach the session cookie to it; this cookie, with the same name, domain and path, replaces the older one the browser is keeping. As discussed above, refreshing the session is best done after the controller method when the controller implements authentication. However, the view resolver of Spring MVC poses a challenge: sometimes it writes to the OutputStream so early that any later attempt to add a cookie to the response is useless. In the end, we settled on a compromise: refresh the session before the controller method for normal requests, and after the controller method for authentication requests. To know whether a request is for authentication, we place an annotation on the authentication methods.
@Override
public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
    if (handler instanceof HandlerMethod) {
        HandlerMethod handlerMethod = (HandlerMethod) handler;
        SessionUpdate sessionUpdateAnnotation = handlerMethod.getMethod().getAnnotation(SessionUpdate.class);
        if (sessionUpdateAnnotation == null) {
            SecurityContext context = SecurityContextHolder.getContext();
            if (context.getAuthentication() instanceof UserAuthentication) {
                UserAuthentication userAuthentication = (UserAuthentication) context.getAuthentication();
                UserSession session = (UserSession) userAuthentication.getDetails();
                persistSessionCookie(response, session);
            }
        }
    }
    return true;
}

@Override
public void postHandle(HttpServletRequest request, HttpServletResponse response, Object handler, ModelAndView modelAndView) throws Exception {
    if (handler instanceof HandlerMethod) {
        HandlerMethod handlerMethod = (HandlerMethod) handler;
        SessionUpdate sessionUpdateAnnotation = handlerMethod.getMethod().getAnnotation(SessionUpdate.class);
        if (sessionUpdateAnnotation != null) {
            SecurityContext context = SecurityContextHolder.getContext();
            if (context.getAuthentication() instanceof UserAuthentication) {
                UserAuthentication userAuthentication = (UserAuthentication) context.getAuthentication();
                UserSession session = (UserSession) userAuthentication.getDetails();
                persistSessionCookie(response, session);
            }
        }
    }
}

Conclusion

The solution works well for us, but we are not confident that it is the best practice possible. However, it is simple and did not cost us much effort to implement (around three days, including testing). Kindly give feedback if you have a better idea for building a stateless session with Spring.

Reference: Stateless Session for multi-tenant application using Spring Security from our JCG partner Nguyen Anh Tuan at the Developers Corner blog....

Java Method Logging with AOP and Annotations

Sometimes, I want to log (through slf4j and log4j) every execution of a method, seeing what arguments it receives, what it returns and how much time every execution takes. This is how I'm doing it, with the help of AspectJ, jcabi-aspects and Java annotations:

public class Foo {
    @Loggable
    public int power(int x, int p) {
        return (int) Math.pow(x, p);
    }
}

This is what I see in the log4j output:

[INFO] com.example.Foo #power(2, 10): 1024 in 12μs
[INFO] com.example.Foo #power(3, 3): 27 in 4μs

Nice, isn't it? Now, let's see how it works.

Annotation with Runtime Retention

Annotations are a technique introduced in Java 5. It is a meta-programming instrument that doesn't change the way code works, but attaches marks to certain elements (methods, classes or variables). In other words, annotations are just markers attached to the code that can be seen and read. Some annotations are designed to be seen at compile time only; they don't exist in .class files after compilation. Others remain visible after compilation and can be accessed at runtime. For example, @Override is of the first type (its retention policy is SOURCE), while @Test from JUnit is of the second type (retention policy RUNTIME). @Loggable, the one I'm using in the snippet above, is an annotation of the second type, from jcabi-aspects. It stays with the bytecode in the .class file after compilation. Again, it is important to understand that even though method power() is annotated and compiled, it doesn't send anything to slf4j yet. It just carries a marker saying "please, log my execution".

Aspect Oriented Programming (AOP)

AOP is a useful technique that enables adding executable blocks to the source code without explicitly changing it. In our example, we don't want to log method execution inside the class. Instead, we want some other class to intercept every call to method power(), measure its execution time and send this information to slf4j.
We want that interceptor to understand our @Loggable annotation and log every call to that specific method power(). And, of course, the same interceptor should be used for other methods where we place the same annotation in the future. This case perfectly fits the original intent of AOP: to avoid re-implementing common behavior in multiple classes. Logging is a supplementary feature to our main functionality, and we don't want to pollute our code with multiple logging instructions. Instead, we want logging to happen behind the scenes. In terms of AOP, our solution can be explained as creating an aspect that cross-cuts the code at certain join points and applies an around advice that implements the desired functionality.

AspectJ

Let's see what these magic words mean. But first, let's see how jcabi-aspects implements them using AspectJ (this is a simplified example; you can find the full code in MethodLogger.java):

@Aspect
public class MethodLogger {
    @Around("execution(* *(..)) && @annotation(Loggable)")
    public Object around(ProceedingJoinPoint point) {
        long start = System.currentTimeMillis();
        Object result = point.proceed();
        Logger.info(
            "#%s(%s): %s in %[msec]s",
            MethodSignature.class.cast(point.getSignature()).getMethod().getName(),
            point.getArgs(),
            result,
            System.currentTimeMillis() - start
        );
        return result;
    }
}

This is an aspect with a single around advice, around(), inside. The aspect is annotated with @Aspect and the advice with @Around. As discussed above, these annotations are just markers in .class files. They don't do anything except provide meta-information to those who are interested at runtime. The annotation @Around has one parameter, which in this case says that the advice should be applied to a method if: its visibility modifier is * (public, protected or private); its name is * (any name); its arguments are ..
(any arguments); and it is annotated with @Loggable. When a call to an annotated method is intercepted, method around() executes before the actual method. It receives an instance of class ProceedingJoinPoint and must return an object, which will be used as the result of method power(). In order to call the original method, power(), the advice has to call proceed() on the join point object. We compile this aspect and make it available on the classpath together with our main file Foo.class. So far so good, but we need to take one last step in order to put our aspect into action: we should apply our advice.

Binary Aspect Weaving

Aspect weaving is the name of the advice-applying process. The aspect weaver modifies the original code by injecting calls to aspects. AspectJ does exactly that. We give it two binary Java classes, Foo.class and MethodLogger.class; it gives back three: a modified Foo.class, Foo$AjcClosure1.class, and an unmodified MethodLogger.class. In order to understand which advices should be applied to which methods, the AspectJ weaver uses the annotations from the .class files. It also uses reflection to browse all classes on the classpath and analyzes which methods satisfy the conditions of the @Around annotation. Of course, it finds our method power(). So, there are two steps. First, we compile our .java files using javac and get two files. Then, AspectJ weaves/modifies them and creates its own extra class. Our Foo class looks something like this after weaving:

public class Foo {
    private final MethodLogger logger;

    @Loggable
    public int power(int x, int p) {
        return this.logger.around(point);
    }

    private int power_aroundBody(int x, int p) {
        return (int) Math.pow(x, p);
    }
}

The AspectJ weaver moves our original functionality to a new method, power_aroundBody(), and redirects all power() calls through the aspect class MethodLogger. Instead of one method power() in class Foo, we now have four classes working together.
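Weaving aside, the interception idea itself can be demonstrated with a plain JDK dynamic proxy. This is not what AspectJ generates (AspectJ rewrites bytecode, as described above); it is just an analogy showing how a call can be timed and logged without touching the target code:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class LoggingProxyDemo {
    public interface Calculator {
        int power(int x, int p);
    }

    // Wraps any Calculator so that every call is timed and logged,
    // roughly what the around() advice does for @Loggable methods.
    public static Calculator logged(Calculator target) {
        InvocationHandler handler = (proxy, method, args) -> {
            long start = System.nanoTime();
            Object result = method.invoke(target, args);
            System.out.printf("#%s(%s, %s): %s in %dns%n",
                method.getName(), args[0], args[1], result,
                System.nanoTime() - start);
            return result;
        };
        return (Calculator) Proxy.newProxyInstance(
            Calculator.class.getClassLoader(),
            new Class<?>[] { Calculator.class }, handler);
    }
}
```

Unlike bytecode weaving, a dynamic proxy only works through an interface and adds a reflective call per invocation, which is one reason AspectJ compiles the interception directly into the class instead.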
From now on, this is what happens behind the scenes on every call to power(). The original functionality of method power() is indicated by the small green lifeline on the diagram. As you can see, the aspect-weaving process connects classes and aspects, transferring calls between them through join points. Without weaving, both classes and aspects are just compiled Java binaries with attached annotations.

jcabi-aspects

jcabi-aspects is a JAR library that contains the Loggable annotation and the MethodLogger aspect (by the way, there are many more aspects and annotations). You don't need to write your own aspect for method logging. Just add a few dependencies to your classpath and configure jcabi-maven-plugin for aspect weaving (get their latest versions in Maven Central):

<project>
  <dependencies>
    <dependency>
      <groupId>com.jcabi</groupId>
      <artifactId>jcabi-aspects</artifactId>
    </dependency>
    <dependency>
      <groupId>org.aspectj</groupId>
      <artifactId>aspectjrt</artifactId>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>com.jcabi</groupId>
        <artifactId>jcabi-maven-plugin</artifactId>
        <executions>
          <execution>
            <goals>
              <goal>ajc</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>

Since this weaving procedure takes a lot of configuration effort, I created a convenient Maven plugin with an ajc goal, which does the entire aspect-weaving job. You can use AspectJ directly, but I recommend that you use jcabi-maven-plugin. That's it. Now you can use the @com.jcabi.aspects.Loggable annotation and your methods will be logged through slf4j. If something doesn't work as explained, don't hesitate to submit a GitHub issue.
Related Posts

You may also find these posts interesting: How to Retry Java Method Call on Exception; Cache Java Method Results; Get Rid of Java Static Loggers; Limit Java Method Execution Time; Simple Java SSH Client.

Reference: Java Method Logging with AOP and Annotations from our JCG partner Yegor Bugayenko at the About Programming blog....

AngularJS: Introducing modules, controllers, services

In my previous post, AngularJS Tutorial: Getting Started with AngularJS, we saw how to set up an application using SpringBoot + AngularJS + WebJars. But that was a quick-start tutorial in which I didn't explain much about AngularJS modules, controllers and services, and it was a single-screen (only one route) application. In this part-2 tutorial, we will look at what AngularJS modules, controllers and services are and how to configure and use them. We will also look at how to use ngRoute to build a multi-screen application. If we look at the code we developed in the previous post, especially controllers.js, we clubbed the client-side controller logic and the business logic (of course, we don't have any biz logic here!) together in our controllers, which is not good. As Java developers, we are used to having a dozen layers, and we love making things complex and then complaining that Java is complex. But here in AngularJS things look simpler, so let's make them a little bit more complex. I am just kidding! Even if you put all your logic in a single place, as we did in controllers.js, it will work, and that is acceptable for simple applications. But if you are going to develop a large enterprise application (who said enterprise applications should be large... hmm... ok... continue...) then things quickly become messy. And believe me, working with a messy, large JavaScript codebase is a lot more painful than a messy, large Java codebase. So it is a good idea to separate the business logic from the controller logic. In AngularJS we can organize application logic into modules and make them work together using dependency injection. Let's see how to create a module in AngularJS.

var myModule = angular.module('moduleName', ['dependency1', 'dependency2']);

This is how we create a module: with the angular.module() function, passing the module name and specifying a list of dependencies, if there are any.
Once we define a module, we can get a handle on it as follows:

var myModule = angular.module('moduleName');

Observe that there is no second argument here, which means we are getting a reference to a previously defined module. If you include the second argument, which is an array, then you are defining a new module. Once we define a new module, we can create controllers in it as follows:

module.controller('ControllerName', ['dependency1', 'dependency2',
    function(dependency1, dependency2){
        //logic
}]);

For example, let's see how to create a TodoController.

var myApp = angular.module('myApp', ['ngRoute']);

myApp.controller('TodoController', ['$scope', '$http', function($scope, $http){
    //logic
}]);

Here we are creating a TodoController and declaring $scope and $http as dependencies, which are built-in AngularJS services. We can also create the same controller as follows:

myApp.controller('TodoController', function($scope, $http){
    //logic
});

Observe that we are directly passing a function as the second argument, instead of an array containing the dependency names followed by a function taking the same dependencies as arguments; it works exactly the same as the array-based declaration. But why do we need the extra typing when both do the same thing? AngularJS injects dependencies by name; that means when you declare $http as a dependency, AngularJS looks for a registered service with the name '$http'. But the majority of real-world applications use JavaScript minification tools to reduce file size, and those tools may rename your variables to shorter names. For example:

myApp.controller('TodoController', function($scope, $http){
    //logic
});

might be minified into:

myApp.controller('TodoController', function($s, $h){
    //logic
});

Then AngularJS tries to look for registered services with the names $s and $h instead of $scope and $http, and it will fail.
To overcome this issue, we declare the names of the services as string literals in the array and pass the same names as function arguments. Even after the minifier renames the function arguments, the string literals remain the same and AngularJS picks the right services to inject. That means you can write the controller as follows:

myApp.controller('TodoController', ['$scope', '$http', function($s, $h){
    //here $s represents $scope and $h represents $http services
}]);

So always prefer the array-based dependency approach. Ok, now we know how to create controllers. Let's see how we can add some functionality to them.

myApp.controller('TodoController', ['$scope', '$http', function($scope, $http){
    var todoCtrl = this;
    todoCtrl.todos = [];
    todoCtrl.loadTodos = function(){
        $http.get('/todos.json').success(function(data){
            todoCtrl.todos = data;
        }).error(function(){
            alert('Error in loading Todos');
        });
    };
    todoCtrl.loadTodos();
}]);

Here in our TodoController we defined a variable todos, which initially holds an empty array, and a loadTodos() function, which loads todos from RESTful services using $http.get(); once the response is received, we assign the returned data to our todos variable. Simple and straightforward. Why can't we directly assign the response of $http.get() to our todos variable, like todoCtrl.todos = $http.get('/todos.json')? Because $http.get('/todos.json') returns a promise, not the actual response data. So you have to get the data from the success handler function. Also note that if you want to perform any logic after receiving data from $http.get(), you should put that logic inside the success handler function only.
For example, if you delete a Todo item and then reload the todos, you should NOT do it as follows:

$http.delete('/todos.json/1').success(function(data){
    //hurray, deleted
}).error(function(){
    alert('Error in deleting Todo');
});
todoCtrl.loadTodos();

You might assume that after the delete is done it will call loadTodos() and the deleted Todo item won't show up, but it won't work like that. You should do it as follows:

$http.delete('/todos.json/1').success(function(data){
    //hurray, deleted
    todoCtrl.loadTodos();
}).error(function(){
    alert('Error in deleting Todo');
});

Let's move on to creating AngularJS services. Creating services is similar to creating controllers, but AngularJS provides multiple ways to create services. There are three ways to create AngularJS services: using module.factory(), using module.service(), and using module.provider().

Using module.factory()

We can create a service using module.factory() as follows:

angular.module('myApp')
    .factory('UserService', ['$http', function($http) {
        var service = {
            user: {},
            login: function(email, pwd) {
                $http.get('/auth', { username: email, password: pwd }).success(function(data){
                    service.user = data;
                });
            },
            register: function(newuser) {
                return $http.post('/users', newuser);
            }
        };
        return service;
}]);

Using module.service()

We can create a service using module.service() as follows:

angular.module('myApp')
    .service('UserService', ['$http', function($http) {
        var service = this;
        this.user = {};
        this.login = function(email, pwd) {
            $http.get('/auth', { username: email, password: pwd }).success(function(data){
                service.user = data;
            });
        };
        this.register = function(newuser) {
            return $http.post('/users', newuser);
        };
}]);

Using module.provider()

We can create a service using module.provider() as follows:

angular.module('myApp')
    .provider('UserService', function() {
        this.$get = ['$http', function($http) {
            var service = {
                user: {},
                login: function(email, pwd) {
                    $http.get('/auth', { username: email, password: pwd }).success(function(data){
                        service.user = data;
                    });
                },
                register: function(newuser) {
                    return $http.post('/users', newuser);
                }
            };
            return service;
        }];
});

You can find good documentation on which method is appropriate in which scenario at http://www.ng-newsletter.com/advent2013/#!/day/1. Let us create a TodoService in our services.js file as follows:

var myApp = angular.module('myApp');

myApp.factory('TodoService', function($http){
    return {
        loadTodos : function(){
            return $http.get('todos');
        },
        createTodo: function(todo){
            return $http.post('todos', todo);
        },
        deleteTodo: function(id){
            return $http.delete('todos/' + id);
        }
    };
});

Now inject our TodoService into our TodoController as follows:

myApp.controller('TodoController', ['$scope', 'TodoService', function($scope, TodoService) {
    $scope.newTodo = {};
    $scope.loadTodos = function(){
        TodoService.loadTodos()
            .success(function(data, status, headers, config) {
                $scope.todos = data;
            })
            .error(function(data, status, headers, config) {
                alert('Error loading Todos');
            });
    };
    $scope.addTodo = function(){
        TodoService.createTodo($scope.newTodo)
            .success(function(data, status, headers, config) {
                $scope.newTodo = {};
                $scope.loadTodos();
            })
            .error(function(data, status, headers, config) {
                alert('Error saving Todo');
            });
    };
    $scope.deleteTodo = function(todo){
        TodoService.deleteTodo(todo.id)
            .success(function(data, status, headers, config) {
                $scope.loadTodos();
            })
            .error(function(data, status, headers, config) {
                alert('Error deleting Todo');
            });
    };
    $scope.loadTodos();
}]);

Now we have separated our controller logic and business logic using AngularJS controllers and services, and made them work together using dependency injection. At the beginning of the post I said we would develop a multi-screen application demonstrating ngRoute functionality. In addition to Todos, let us add a PhoneBook feature to our application where we can maintain a list of contacts. First, let us build the back-end functionality for the PhoneBook REST services.
Create the Person JPA entity, its Spring Data JPA repository and a controller.

@Entity
public class Person implements Serializable {
    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(strategy=GenerationType.AUTO)
    private Integer id;
    private String email;
    private String password;
    private String firstname;
    private String lastname;
    @Temporal(TemporalType.DATE)
    private Date dob;
    //setters and getters
}

public interface PersonRepository extends JpaRepository<Person, Integer>{}

@RestController
@RequestMapping("/contacts")
public class ContactController {
    @Autowired
    private PersonRepository personRepository;

    @RequestMapping("")
    public List<Person> persons() {
        return personRepository.findAll();
    }
}

Now let us create an AngularJS service and controller for contacts. Observe that we use the module.service() approach this time.

myApp.service('ContactService', ['$http', function($http){
    this.getContacts = function(){
        var promise = $http.get('contacts')
            .then(function(response){
                return response.data;
            }, function(response){
                alert('error');
            });
        return promise;
    };
}]);

myApp.controller('ContactController', ['$scope', 'ContactService', function($scope, ContactService) {
    ContactService.getContacts().then(function(data) {
        $scope.contacts = data;
    });
}]);

Now we need to configure our application routes in the app.js file.

var myApp = angular.module('myApp', ['ngRoute']);

myApp.config(['$routeProvider', '$locationProvider', function($routeProvider, $locationProvider) {
    $routeProvider
        .when('/home', {
            templateUrl: 'templates/home.html',
            controller: 'HomeController'
        })
        .when('/contacts', {
            templateUrl: 'templates/contacts.html',
            controller: 'ContactController'
        })
        .when('/todos', {
            templateUrl: 'templates/todos.html',
            controller: 'TodoController'
        })
        .otherwise({
            redirectTo: 'home'
        });
}]);

Here we have configured our application routes on $routeProvider inside the myApp.config() function.
When the url matches one of the configured routes, the corresponding template content will be rendered in the <div ng-view></div> div in our index.html. If the url doesn’t match any of the configured urls, it will be routed to ‘home‘ as specified in the otherwise() configuration. Our templates/home.html won’t have anything for now, and the templates/todos.html file will be the same as home.html in the previous post. The new templates/contacts.html will just have a table listing the contacts as follows:

<table class="table table-striped table-bordered table-hover"> <thead> <tr> <th>Name</th> <th>Email</th> </tr> </thead> <tbody> <tr ng-repeat="contact in contacts"> <td>{{contact.firstname + ' '+ (contact.lastname || '')}}</td> <td>{{contact.email}}</td> </tr> </tbody> </table>

Now let us create navigation links to the Todos and Contacts pages in our index.html page <body>.

<div class="container"> <div class="row"> <div class="col-md-3 sidebar"> <div class="list-group"> <a href="#home" class="list-group-item"> <i class="fa fa-home fa-lg"></i> Home </a> <a href="#contacts" class="list-group-item"> <i class="fa fa-user fa-lg"></i> Contacts </a> <a href="#todos" class="list-group-item"> <i class="fa fa-indent fa-lg"></i> ToDos </a> </div> </div> <div class="col-md-9 col-md-offset-3"> <div ng-view></div> </div> </div> </div>

By now we have a multi-screen application, and we have seen how to use modules, controllers and services. You can find the code for this article at https://github.com/sivaprasadreddy/angularjs-samples/tree/master/angularjs-series/angularjs-part2 Our next article will be on how to use $resource instead of $http to consume REST services. We will also look at updating our application to use the more powerful ui-router module instead of ngRoute. Stay tuned!

Reference: AngularJS: Introducing modules, controllers, services from our JCG partner Siva Reddy at the My Experiments on Technology blog....

Spring Batch Tutorial with Spring Boot and Java Configuration

I’ve been working on migrating some batch jobs for Podcastpedia.org to Spring Batch. Before, these jobs were developed in my own kind of way, and I thought it was high time to use a more “standardized” approach. Because I had never used Spring with Java configuration before, I thought this was a good opportunity to learn about it by configuring the Spring Batch jobs in Java. And since I am all into trying new things with Spring, why not also throw Spring Boot into the mix…

Note: Before you begin with this tutorial I recommend you first read Spring’s Getting started – Creating a Batch Service, because the structure and the code presented here build on that original.

1. What I’ll build

So, as mentioned, in this post I will present Spring Batch in the context of configuring it and developing some batch jobs with it for Podcastpedia.org. Here’s a short description of the two jobs that are currently part of the Podcastpedia-batch project:

- addNewPodcastJob – reads podcast metadata (feed url, identifier, categories etc.) from a flat file, transforms the data (parses it and prepares the episodes to be inserted with the Apache Http Client) and, in the last step, inserts it into the Podcastpedia database and informs the submitter via email about it
- notifyEmailSubscribersJob – people can subscribe to their favorite podcasts on Podcastpedia.org via email. For those who did, it is checked on a regular basis (DAILY, WEEKLY, MONTHLY) whether new episodes are available, and if they are, the subscribers are informed about them via email; the job reads from the database, expands the read data via JPA, re-groups it and notifies the subscribers via email

Source code: The source code for this tutorial is available on GitHub – Podcastpedia-batch.

Note: Before you start I also highly recommend you read the Domain Language of Batch, so that terms like “Jobs”, “Steps” or “ItemReaders” don’t sound strange to you.

2. What you’ll need

- A favorite text editor or IDE
- JDK 1.7 or later
- Maven 3.0+

3.
Set up the project The project is built with Maven. It uses Spring Boot, which makes it easy to create stand-alone Spring based Applications that you can “just run”.  You can learn more about the Spring Boot by visiting the project’s website. 3.1. Maven build file Because it uses Spring Boot it will have the spring-boot-starter-parent as its parent, and a couple of other spring-boot-starters that will get for us some libraries required in the project: pom.xml of the podcastpedia-batch project <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion><groupId>org.podcastpedia.batch</groupId> <artifactId>podcastpedia-batch</artifactId> <version>0.1.0</version> <properties> <sprinb.boot.version>1.1.6.RELEASE</sprinb.boot.version> <java.version>1.7</java.version> </properties> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>1.1.6.RELEASE</version> </parent> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-batch</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-jpa</artifactId> </dependency> <dependency> <groupId>org.apache.httpcomponents</groupId> <artifactId>httpclient</artifactId> <version>4.3.5</version> </dependency> <dependency> <groupId>org.apache.httpcomponents</groupId> <artifactId>httpcore</artifactId> <version>4.3.2</version> </dependency> <!-- velocity --> <dependency> <groupId>org.apache.velocity</groupId> <artifactId>velocity</artifactId> <version>1.7</version> </dependency> <dependency> <groupId>org.apache.velocity</groupId> <artifactId>velocity-tools</artifactId> <version>2.0</version> <exclusions> <exclusion> <groupId>org.apache.struts</groupId> 
<artifactId>struts-core</artifactId> </exclusion> </exclusions> </dependency> <!-- Project rome rss, atom --> <dependency> <groupId>rome</groupId> <artifactId>rome</artifactId> <version>1.0</version> </dependency> <!-- option this fetcher thing --> <dependency> <groupId>rome</groupId> <artifactId>rome-fetcher</artifactId> <version>1.0</version> </dependency> <dependency> <groupId>org.jdom</groupId> <artifactId>jdom</artifactId> <version>1.1</version> </dependency> <!-- PID 1 --> <dependency> <groupId>xerces</groupId> <artifactId>xercesImpl</artifactId> <version>2.9.1</version> </dependency> <!-- MySQL JDBC connector --> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>5.1.31</version> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-freemarker</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-remote-shell</artifactId> <exclusions> <exclusion> <groupId>javax.mail</groupId> <artifactId>mail</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>javax.mail</groupId> <artifactId>mail</artifactId> <version>1.4.7</version> </dependency> <dependency> <groupId>javax.inject</groupId> <artifactId>javax.inject</artifactId> <version>1</version> </dependency> <dependency> <groupId>org.twitter4j</groupId> <artifactId>twitter4j-core</artifactId> <version>[4.0,)</version> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> </dependency> </dependencies><build> <plugins> <plugin> <artifactId>maven-compiler-plugin</artifactId> </plugin> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> </project> Note: One big advantage of using the spring-boot-starter-parent as the project’s parent is that you only have to upgrade the version of the parent and it will 
get the “latest” libraries for you. When I started the project Spring Boot was at version 1.1.3.RELEASE, and by the time I finished writing this post it was already at 1.1.6.RELEASE.

3.2. Project directory structure

I structured the project in the following way:

Project directory structure └── src └── main └── java └── org └── podcastpedia └── batch └── common └── jobs └── addpodcast └── notifysubscribers

Note:
- the org.podcastpedia.batch.jobs package contains sub-packages with classes specific to particular jobs.
- the org.podcastpedia.batch.common package contains classes used by all the jobs, like for example the JPA entities that both the current jobs require.

4. Create a batch Job configuration

I will start by presenting the Java configuration class for the first batch job:

Batch Job configuration package org.podcastpedia.batch.jobs.addpodcast;import org.podcastpedia.batch.common.configuration.DatabaseAccessConfiguration; import org.podcastpedia.batch.common.listeners.LogProcessListener; import org.podcastpedia.batch.common.listeners.ProtocolListener; import org.podcastpedia.batch.jobs.addpodcast.model.SuggestedPodcast; import org.springframework.batch.core.Job; import org.springframework.batch.core.Step; import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing; import org.springframework.batch.core.configuration.annotation.JobBuilderFactory; import org.springframework.batch.core.configuration.annotation.StepBuilderFactory; import org.springframework.batch.item.ItemProcessor; import org.springframework.batch.item.ItemReader; import org.springframework.batch.item.ItemWriter; import org.springframework.batch.item.file.FlatFileItemReader; import org.springframework.batch.item.file.LineMapper; import org.springframework.batch.item.file.mapping.BeanWrapperFieldSetMapper; import org.springframework.batch.item.file.mapping.DefaultLineMapper; import org.springframework.batch.item.file.transform.DelimitedLineTokenizer; import
org.springframework.beans.factory.annotation.Autowired; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.context.annotation.Import; import org.springframework.core.io.ClassPathResource;import com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException;@Configuration @EnableBatchProcessing @Import({DatabaseAccessConfiguration.class, ServicesConfiguration.class}) public class AddPodcastJobConfiguration {@Autowired private JobBuilderFactory jobs; @Autowired private StepBuilderFactory stepBuilderFactory; // tag::jobstep[] @Bean public Job addNewPodcastJob(){ return jobs.get("addNewPodcastJob") .listener(protocolListener()) .start(step()) .build(); } @Bean public Step step(){ return stepBuilderFactory.get("step") .<SuggestedPodcast,SuggestedPodcast>chunk(1) //important to be one in this case to commit after every line read .reader(reader()) .processor(processor()) .writer(writer()) .listener(logProcessListener()) .faultTolerant() .skipLimit(10) //default is set to 0 .skip(MySQLIntegrityConstraintViolationException.class) .build(); } // end::jobstep[] // tag::readerwriterprocessor[] @Bean public ItemReader<SuggestedPodcast> reader(){ FlatFileItemReader<SuggestedPodcast> reader = new FlatFileItemReader<SuggestedPodcast>(); reader.setLinesToSkip(1);//first line is title definition reader.setResource(new ClassPathResource("suggested-podcasts.txt")); reader.setLineMapper(lineMapper()); return reader; }@Bean public LineMapper<SuggestedPodcast> lineMapper() { DefaultLineMapper<SuggestedPodcast> lineMapper = new DefaultLineMapper<SuggestedPodcast>(); DelimitedLineTokenizer lineTokenizer = new DelimitedLineTokenizer(); lineTokenizer.setDelimiter(";"); lineTokenizer.setStrict(false); lineTokenizer.setNames(new String[]{"FEED_URL", "IDENTIFIER_ON_PODCASTPEDIA", "CATEGORIES", "LANGUAGE", "MEDIA_TYPE", "UPDATE_FREQUENCY", "KEYWORDS", "FB_PAGE", "TWITTER_PAGE", "GPLUS_PAGE", 
"NAME_SUBMITTER", "EMAIL_SUBMITTER"}); BeanWrapperFieldSetMapper<SuggestedPodcast> fieldSetMapper = new BeanWrapperFieldSetMapper<SuggestedPodcast>(); fieldSetMapper.setTargetType(SuggestedPodcast.class); lineMapper.setLineTokenizer(lineTokenizer); lineMapper.setFieldSetMapper(suggestedPodcastFieldSetMapper()); return lineMapper; }@Bean public SuggestedPodcastFieldSetMapper suggestedPodcastFieldSetMapper() { return new SuggestedPodcastFieldSetMapper(); }/** configure the processor related stuff */ @Bean public ItemProcessor<SuggestedPodcast, SuggestedPodcast> processor() { return new SuggestedPodcastItemProcessor(); } @Bean public ItemWriter<SuggestedPodcast> writer() { return new Writer(); } // end::readerwriterprocessor[] @Bean public ProtocolListener protocolListener(){ return new ProtocolListener(); } @Bean public LogProcessListener logProcessListener(){ return new LogProcessListener(); }} The @EnableBatchProcessing annotation adds many critical beans that support jobs and saves us configuration work. 
For example you will also be able to @Autowired some useful stuff into your context:a JobRepository (bean name “jobRepository”) a JobLauncher (bean name “jobLauncher”) a JobRegistry (bean name “jobRegistry”) a PlatformTransactionManager (bean name “transactionManager”) a JobBuilderFactory (bean name “jobBuilders”) as a convenience to prevent you from having to inject the job repository into every job, as in the examples above a StepBuilderFactory (bean name “stepBuilders”) as a convenience to prevent you from having to inject the job repository and transaction manager into every stepThe first part focuses on the actual job configuration: Batch Job and Step configuration @Bean public Job addNewPodcastJob(){ return jobs.get("addNewPodcastJob") .listener(protocolListener()) .start(step()) .build(); }@Bean public Step step(){ return stepBuilderFactory.get("step") .<SuggestedPodcast,SuggestedPodcast>chunk(1) //important to be one in this case to commit after every line read .reader(reader()) .processor(processor()) .writer(writer()) .listener(logProcessListener()) .faultTolerant() .skipLimit(10) //default is set to 0 .skip(MySQLIntegrityConstraintViolationException.class) .build(); } The first method defines a job and the second one defines a single step. As you’ve read in The Domain Language of Batch,  jobs are built from steps, where each step can involve a reader, a processor, and a writer. In the step definition, you define how much data to write at a time (in our case 1 record at a time). Next you specify the reader, processor and writer. 5. Spring Batch processing units Most of the batch processing can be described as reading data, doing some transformation on it and then writing the result out. This mirrors somehow the Extract, Transform, Load (ETL) process, in case you know more about that. Spring Batch provides three key interfaces to help perform bulk reading and writing: ItemReader, ItemProcessor and ItemWriter. 5.1. 
Readers

ItemReader is an abstraction providing the means to retrieve data from many different types of input: flat files, xml files, database, jms etc., one item at a time. See Appendix A. List of ItemReaders and ItemWriters for a complete list of the available item readers. In the Podcastpedia batch jobs I use the following specialized ItemReaders:

5.1.1. FlatFileItemReader

As the name implies, this reader reads lines of data from a flat file; the lines typically describe records with fields defined by fixed positions in the file or delimited by some special character (e.g. a comma). This type of ItemReader is used in the first batch job, addNewPodcastJob. The input file used is named suggested-podcasts.in, resides in the classpath (src/main/resources) and looks something like the following:

Input file for FlatFileItemReader FEED_URL; IDENTIFIER_ON_PODCASTPEDIA; CATEGORIES; LANGUAGE; MEDIA_TYPE; UPDATE_FREQUENCY; KEYWORDS; FB_PAGE; TWITTER_PAGE; GPLUS_PAGE; NAME_SUBMITTER; EMAIL_SUBMITTER http://www.5minutebiographies.com/feed/; 5minutebiographies; people_society, history; en; Audio; WEEKLY; biography, biographies, short biography, short biographies, 5 minute biographies, five minute biographies, 5 minute biography, five minute biography; https://www.facebook.com/5minutebiographies; https://twitter.com/5MinuteBios; ; Adrian Matei; adrianmatei@gmail.com http://notanotherpodcast.libsyn.com/rss; NotAnotherPodcast; entertainment; en; Audio; WEEKLY; Comedy, Sports, Cinema, Movies, Pop Culture, Food, Games; https://www.facebook.com/notanotherpodcastusa; https://twitter.com/NAPodcastUSA; https://plus.google.com/u/0/103089891373760354121/posts; Adrian Matei; adrianmatei@gmail.com

As you can see, the first line defines the names of the “columns”, and the following lines contain the actual data (delimited by “;”), which needs to be translated into domain objects relevant in the context.
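To make the delimited format above concrete, here is a small, dependency-free sketch of the tokenization idea: split a record line on ";", trim each token and pair it with the column names from the header line. The class and method names below are hypothetical illustrations, not Spring Batch API; padding missing trailing fields with empty strings mimics a lenient tokenizer.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Plain-Java sketch (no Spring Batch dependency) of delimited-line
 * tokenization: split on ";", trim, and pair tokens with column names.
 * Hypothetical illustration, not Spring Batch's DelimitedLineTokenizer.
 */
public class DelimitedLineSketch {

    public static Map<String, String> tokenize(String[] columnNames, String line) {
        String[] tokens = line.split(";", -1); // -1 keeps trailing empty fields
        Map<String, String> fieldSet = new LinkedHashMap<>();
        for (int i = 0; i < columnNames.length; i++) {
            // lenient behaviour: missing trailing tokens become empty strings
            String value = i < tokens.length ? tokens[i].trim() : "";
            fieldSet.put(columnNames[i], value);
        }
        return fieldSet;
    }

    public static void main(String[] args) {
        String[] names = {"FEED_URL", "IDENTIFIER_ON_PODCASTPEDIA", "LANGUAGE"};
        Map<String, String> record =
            tokenize(names, "http://www.5minutebiographies.com/feed/; 5minutebiographies; en");
        System.out.println(record.get("IDENTIFIER_ON_PODCASTPEDIA")); // 5minutebiographies
        System.out.println(record.get("LANGUAGE"));                   // en
    }
}
```

In the actual job, Spring Batch's tokenizer and field set mapper take care of this, as configured next.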
Let’s see now how to configure the FlatFileItemReader:

FlatFileItemReader example @Bean public ItemReader<SuggestedPodcast> reader(){ FlatFileItemReader<SuggestedPodcast> reader = new FlatFileItemReader<SuggestedPodcast>(); reader.setLinesToSkip(1);//first line is title definition reader.setResource(new ClassPathResource("suggested-podcasts.in")); reader.setLineMapper(lineMapper()); return reader; }

You can specify, among other things, the input resource, the number of lines to skip, and a line mapper.

LineMapper

The LineMapper is an interface for mapping lines (strings) to domain objects, typically used to map lines read from a file to domain objects on a per-line basis. For the Podcastpedia job I used the DefaultLineMapper, which is a two-phase implementation consisting of tokenization of the line into a FieldSet, followed by mapping to the item:

LineMapper default implementation example @Bean public LineMapper<SuggestedPodcast> lineMapper() { DefaultLineMapper<SuggestedPodcast> lineMapper = new DefaultLineMapper<SuggestedPodcast>(); DelimitedLineTokenizer lineTokenizer = new DelimitedLineTokenizer(); lineTokenizer.setDelimiter(";"); lineTokenizer.setStrict(false); lineTokenizer.setNames(new String[]{"FEED_URL", "IDENTIFIER_ON_PODCASTPEDIA", "CATEGORIES", "LANGUAGE", "MEDIA_TYPE", "UPDATE_FREQUENCY", "KEYWORDS", "FB_PAGE", "TWITTER_PAGE", "GPLUS_PAGE", "NAME_SUBMITTER", "EMAIL_SUBMITTER"}); BeanWrapperFieldSetMapper<SuggestedPodcast> fieldSetMapper = new BeanWrapperFieldSetMapper<SuggestedPodcast>(); fieldSetMapper.setTargetType(SuggestedPodcast.class); lineMapper.setLineTokenizer(lineTokenizer); lineMapper.setFieldSetMapper(suggestedPodcastFieldSetMapper()); return lineMapper; }

The DelimitedLineTokenizer splits the input String via the “;” delimiter. If you set the strict flag to false, lines with fewer tokens will be tolerated and padded with empty columns, and lines with more tokens will simply be truncated.
the columns names from the first line are set lineTokenizer.setNames(...); and the fieldMapper is set (line 14)Note: The FieldSet is an “interface used by flat file input sources to encapsulate concerns of converting an array of Strings to Java native types. A bit like the role played by ResultSet in JDBC, clients will know the name or position of strongly typed fields that they want to extract.“ FieldSetMapper The FieldSetMapper is an interface that is used to map data obtained from a FieldSet into an object. Here’s my implementation which maps the fieldSet to the SuggestedPodcast domain object that will be further passed to the processor: FieldSetMapper implementation public class SuggestedPodcastFieldSetMapper implements FieldSetMapper<SuggestedPodcast> {@Override public SuggestedPodcast mapFieldSet(FieldSet fieldSet) throws BindException { SuggestedPodcast suggestedPodcast = new SuggestedPodcast(); suggestedPodcast.setCategories(fieldSet.readString("CATEGORIES")); suggestedPodcast.setEmail(fieldSet.readString("EMAIL_SUBMITTER")); suggestedPodcast.setName(fieldSet.readString("NAME_SUBMITTER")); suggestedPodcast.setTags(fieldSet.readString("KEYWORDS")); //some of the attributes we can map directly into the Podcast entity that we'll insert later into the database Podcast podcast = new Podcast(); podcast.setUrl(fieldSet.readString("FEED_URL")); podcast.setIdentifier(fieldSet.readString("IDENTIFIER_ON_PODCASTPEDIA")); podcast.setLanguageCode(LanguageCode.valueOf(fieldSet.readString("LANGUAGE"))); podcast.setMediaType(MediaType.valueOf(fieldSet.readString("MEDIA_TYPE"))); podcast.setUpdateFrequency(UpdateFrequency.valueOf(fieldSet.readString("UPDATE_FREQUENCY"))); podcast.setFbPage(fieldSet.readString("FB_PAGE")); podcast.setTwitterPage(fieldSet.readString("TWITTER_PAGE")); podcast.setGplusPage(fieldSet.readString("GPLUS_PAGE")); suggestedPodcast.setPodcast(podcast);return suggestedPodcast; } } 5.2. 
JdbcCursorItemReader

In the second job, notifyEmailSubscribersJob, in the reader I only read email subscribers from a single database table, but further on in the processor a more detailed read (via JPA) is executed to retrieve all the new episodes of the podcasts the user subscribed to. This is a common pattern employed in the batch world. Follow this link for more Common Batch Patterns. For the initial read I chose the JdbcCursorItemReader, which is a simple reader implementation that opens a JDBC cursor and continually retrieves the next row in the ResultSet:

JdbcCursorItemReader example @Bean public ItemReader<User> notifySubscribersReader(){ JdbcCursorItemReader<User> reader = new JdbcCursorItemReader<User>(); String sql = "select * from users where is_email_subscriber is not null"; reader.setSql(sql); reader.setDataSource(dataSource); reader.setRowMapper(rowMapper());return reader; }

Note that I had to set the SQL, the datasource to read from and a RowMapper.

5.2.1. RowMapper

The RowMapper is an interface used by JdbcTemplate for mapping rows of a ResultSet on a per-row basis. My implementation of this interface, UserRowMapper, performs the actual work of mapping each row to a result object, but I don’t need to worry about exception handling:

RowMapper implementation public class UserRowMapper implements RowMapper<User> {@Override public User mapRow(ResultSet rs, int rowNum) throws SQLException { User user = new User(); user.setEmail(rs.getString("email")); return user; }}

5.2. Writers

ItemWriter is an abstraction that represents the output of a Step, one batch or chunk of items at a time. Generally, an item writer has no knowledge of the input it will receive next, only the item that was passed in its current invocation. The writers for the two jobs presented here are quite simple. They just use external services to send email notifications and post tweets on Podcastpedia’s account.
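Before moving on to the writer implementation, the per-row mapping contract described above can be sketched without any Spring or JDBC infrastructure. In the sketch below a Map stands in for a ResultSet positioned on a row; the class is a hypothetical simplification mirroring the UserRowMapper above, not Spring's actual RowMapper API.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/**
 * Dependency-free sketch of the row-mapping idea: one mapper call per row,
 * responsible only for the column-to-field mapping. A Map<String, Object>
 * stands in for a JDBC ResultSet row. Hypothetical illustration only.
 */
public class RowMapperSketch {

    static final class User {
        private final String email;
        User(String email) { this.email = email; }
        String getEmail() { return email; }
    }

    /** Maps a single "row" to a User, mirroring UserRowMapper in the article. */
    static User mapRow(Map<String, Object> row) {
        return new User((String) row.get("email"));
    }

    public static void main(String[] args) {
        // simulated result set of the "is_email_subscriber" query
        List<Map<String, Object>> rows = List.of(
                Map.of("email", "alice@example.com", "is_email_subscriber", 1),
                Map.of("email", "bob@example.com", "is_email_subscriber", 1));
        List<User> users = rows.stream().map(RowMapperSketch::mapRow).collect(Collectors.toList());
        System.out.println(users.size());         // 2
        System.out.println(users.get(0).getEmail()); // alice@example.com
    }
}
```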
Here is the implementation of the ItemWriter for the first job – addNewPodcast: Writer implementation of ItemWriter package org.podcastpedia.batch.jobs.addpodcast;import java.util.Date; import java.util.List;import javax.inject.Inject; import javax.persistence.EntityManager;import org.podcastpedia.batch.common.entities.Podcast; import org.podcastpedia.batch.jobs.addpodcast.model.SuggestedPodcast; import org.podcastpedia.batch.jobs.addpodcast.service.EmailNotificationService; import org.podcastpedia.batch.jobs.addpodcast.service.SocialMediaService; import org.springframework.batch.item.ItemWriter; import org.springframework.beans.factory.annotation.Autowired;public class Writer implements ItemWriter<SuggestedPodcast>{@Autowired private EntityManager entityManager; @Inject private EmailNotificationService emailNotificationService; @Inject private SocialMediaService socialMediaService; @Override public void write(List<? extends SuggestedPodcast> items) throws Exception {if(items.get(0) != null){ SuggestedPodcast suggestedPodcast = items.get(0); //first insert the data in the database Podcast podcast = suggestedPodcast.getPodcast(); podcast.setInsertionDate(new Date()); entityManager.persist(podcast); entityManager.flush(); //notify submitter about the insertion and post a twitt about it String url = buildUrlOnPodcastpedia(podcast); emailNotificationService.sendPodcastAdditionConfirmation( suggestedPodcast.getName(), suggestedPodcast.getEmail(), url); if(podcast.getTwitterPage() != null){ socialMediaService.postOnTwitterAboutNewPodcast(podcast, url); } }}private String buildUrlOnPodcastpedia(Podcast podcast) { StringBuffer urlOnPodcastpedia = new StringBuffer( "http://www.podcastpedia.org"); if (podcast.getIdentifier() != null) { urlOnPodcastpedia.append("/" + podcast.getIdentifier()); } else { urlOnPodcastpedia.append("/podcasts/"); urlOnPodcastpedia.append(String.valueOf(podcast.getPodcastId())); urlOnPodcastpedia.append("/" + podcast.getTitleInUrl()); } String url = 
urlOnPodcastpedia.toString(); return url; }}

As you can see there’s nothing special here, except that the write method has to be overridden; this is where the injected external services EmailNotificationService and SocialMediaService are used to inform the podcast submitter via email about the addition to the podcast directory, and, if a Twitter page was submitted, a tweet is posted on Podcastpedia’s wall. You can find detailed explanations of how to send email via Velocity and how to post on Twitter from Java in the following posts:

- How to compose html emails in Java with Spring and Velocity
- How to post to Twitter from Java with Twitter4J in 10 minutes

5.3. Processors

ItemProcessor is an abstraction that represents the business processing of an item. While the ItemReader reads one item and the ItemWriter writes a chunk of them, the ItemProcessor provides a hook to transform the item or apply other business processing. To use your own processors you have to implement the ItemProcessor<I,O> interface, with its only method O process(I item) throws Exception, returning a potentially modified item or a new one for continued processing. If the returned result is null, it is assumed that processing of the item should not continue.
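To make the contract above concrete, here is a small, dependency-free model of a chunk-oriented read, process, write cycle: a "processor" returning null filters the item out, and the "writer" receives whole chunks of the configured size (the commit interval). This is a hypothetical sketch of the semantics, not Spring Batch's real API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

/**
 * Simplified, dependency-free model of a chunk-oriented step, illustrating
 * the read -> process -> write cycle and the "null filters the item out"
 * contract. Hypothetical illustration, not Spring Batch's actual classes.
 */
public class ChunkStepSketch {

    /** Reads items one by one, processes each, and writes them in chunks. */
    public static <I, O> List<List<O>> run(List<I> input, Function<I, O> processor, int chunkSize) {
        List<List<O>> writtenChunks = new ArrayList<>();
        List<O> chunk = new ArrayList<>();
        for (I item : input) {                   // "ItemReader": one item at a time
            O processed = processor.apply(item); // "ItemProcessor"
            if (processed == null) {
                continue;                        // null result: item is filtered out
            }
            chunk.add(processed);
            if (chunk.size() == chunkSize) {     // commit interval reached
                writtenChunks.add(chunk);        // "ItemWriter" gets the whole chunk
                chunk = new ArrayList<>();
            }
        }
        if (!chunk.isEmpty()) {
            writtenChunks.add(chunk);            // write the last, partial chunk
        }
        return writtenChunks;
    }

    public static void main(String[] args) {
        // drop negative numbers (processor returns null), write 2 items per chunk
        List<List<Integer>> chunks = run(List.of(1, -2, 3, 4, -5, 6),
                n -> n < 0 ? null : n * 10, 2);
        System.out.println(chunks); // [[10, 30], [40, 60]]
    }
}
```

With chunkSize set to 1, as in the addNewPodcastJob step above, every successfully processed item is written (and committed) on its own.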
While the processor of the first job requires a little bit of more logic, because I have to set the etag and last-modified header attributes, the feed attributes, episodes, categories and keywords of the podcast: ItemProcessor implementation for the job addNewPodcast public class SuggestedPodcastItemProcessor implements ItemProcessor<SuggestedPodcast, SuggestedPodcast> {private static final int TIMEOUT = 10;@Autowired ReadDao readDao; @Autowired PodcastAndEpisodeAttributesService podcastAndEpisodeAttributesService; @Autowired private PoolingHttpClientConnectionManager poolingHttpClientConnectionManager; @Autowired private SyndFeedService syndFeedService;/** * Method used to build the categories, tags and episodes of the podcast */ @Override public SuggestedPodcast process(SuggestedPodcast item) throws Exception { if(isPodcastAlreadyInTheDirectory(item.getPodcast().getUrl())) { return null; } String[] categories = item.getCategories().trim().split("\\s*,\\s*");item.getPodcast().setAvailability(org.apache.http.HttpStatus.SC_OK); //set etag and last modified attributes for the podcast setHeaderFieldAttributes(item.getPodcast()); //set the other attributes of the podcast from the feed podcastAndEpisodeAttributesService.setPodcastFeedAttributes(item.getPodcast()); //set the categories List<Category> categoriesByNames = readDao.findCategoriesByNames(categories); item.getPodcast().setCategories(categoriesByNames); //set the tags setTagsForPodcast(item); //build the episodes setEpisodesForPodcast(item.getPodcast()); return item; } ...... 
} the processor from the second job uses the ‘Driving Query’ approach, where I expand the data retrieved from the Reader with another “JPA-read” and I group the items on podcasts with episodes so that it looks nice in the emails that I am sending out to subscribers: ItemProcessor implementation of the second job – notifySubscribers @Scope("step") public class NotifySubscribersItemProcessor implements ItemProcessor<User, User> {@Autowired EntityManager em; @Value("#{jobParameters[updateFrequency]}") String updateFrequency; @Override public User process(User item) throws Exception { String sqlInnerJoinEpisodes = "select e from User u JOIN u.podcasts p JOIN p.episodes e WHERE u.email=?1 AND p.updateFrequency=?2 AND" + " e.isNew IS NOT NULL AND e.availability=200 ORDER BY e.podcast.podcastId ASC, e.publicationDate ASC"; TypedQuery<Episode> queryInnerJoinepisodes = em.createQuery(sqlInnerJoinEpisodes, Episode.class); queryInnerJoinepisodes.setParameter(1, item.getEmail()); queryInnerJoinepisodes.setParameter(2, UpdateFrequency.valueOf(updateFrequency)); List<Episode> newEpisodes = queryInnerJoinepisodes.getResultList(); return regroupPodcastsWithEpisodes(item, newEpisodes); } ....... } Note: If you’d like to find out more how to use the Apache Http Client, to get the etag and last-modified headers, you can have a look at my post – How to use the new Apache Http Client to make a HEAD request 6. 
Execute the batch application

Batch processing can be embedded in web applications and WAR files, but in the beginning I chose the simpler approach of creating a standalone application that can be started via the Java main() method:

Batch processing Java main() method package org.podcastpedia.batch; //imports ...;@ComponentScan @EnableAutoConfiguration public class Application {private static final String NEW_EPISODES_NOTIFICATION_JOB = "newEpisodesNotificationJob"; private static final String ADD_NEW_PODCAST_JOB = "addNewPodcastJob";public static void main(String[] args) throws BeansException, JobExecutionAlreadyRunningException, JobRestartException, JobInstanceAlreadyCompleteException, JobParametersInvalidException, InterruptedException { Log log = LogFactory.getLog(Application.class); SpringApplication app = new SpringApplication(Application.class); app.setWebEnvironment(false); ConfigurableApplicationContext ctx= app.run(args); JobLauncher jobLauncher = ctx.getBean(JobLauncher.class); if(ADD_NEW_PODCAST_JOB.equals(args[0])){ //addNewPodcastJob Job addNewPodcastJob = ctx.getBean(ADD_NEW_PODCAST_JOB, Job.class); JobParameters jobParameters = new JobParametersBuilder() .addDate("date", new Date()) .toJobParameters(); JobExecution jobExecution = jobLauncher.run(addNewPodcastJob, jobParameters); while(jobExecution.getStatus().isRunning()){ //re-read the status on every iteration so the loop can finish log.info("*********** Still running....
**************"); Thread.sleep(1000); } log.info(String.format("*********** Exit status: %s", jobExecution.getExitStatus().getExitCode())); JobInstance jobInstance = jobExecution.getJobInstance(); log.info(String.format("********* Name of the job %s", jobInstance.getJobName())); log.info(String.format("*********** job instance Id: %d", jobInstance.getId())); System.exit(0); } else if(NEW_EPISODES_NOTIFICATION_JOB.equals(args[0])){ JobParameters jobParameters = new JobParametersBuilder() .addDate("date", new Date()) .addString("updateFrequency", args[1]) .toJobParameters(); jobLauncher.run(ctx.getBean(NEW_EPISODES_NOTIFICATION_JOB, Job.class), jobParameters); } else { throw new IllegalArgumentException("Please provide a valid Job name as first application parameter"); } System.exit(0); } } The best explanation for  SpringApplication-, @ComponentScan- and @EnableAutoConfiguration-magic you get from the source – Getting Started – Creating a Batch Service: “The main() method defers to the SpringApplication helper class, providing Application.class as an argument to its run() method. This tells Spring to read the annotation metadata from Application and to manage it as a component in the Spring application context. The @ComponentScan annotation tells Spring to search recursively through the org.podcastpedia.batch package and its children for classes marked directly or indirectly with Spring’s @Component annotation. This directive ensures that Spring finds and registers BatchConfiguration, because it is marked with @Configuration, which in turn is a kind of @Component annotation. The @EnableAutoConfiguration annotation switches on reasonable default behaviors based on the content of your classpath. For example, it looks for any class that implements the CommandLineRunner interface and invokes its run() method.” Execution construction steps:the JobLauncher, which is a simple interface for controlling jobs,  is retrieved from the ApplicationContext. 
Remember this is automatically made available via the @EnableBatchProcessing annotation. now based on the first parameter of the application (args[0]), I will retrieve the corresponding Job from the ApplicationContext then the JobParameters are prepared, where I use the current date – .addDate("date", new Date()), so that the job executions are always unique. once everything is in place, the job can be executed: JobExecution jobExecution = jobLauncher.run(addNewPodcastJob, jobParameters); you can use the returned jobExecution to gain access to BatchStatus, exit code, or job name and id.Note: I highly recommend you read and understand the Meta-Data Schema for Spring Batch. It will also help you better understand the Spring Batch Domain objects. 6.1. Running the application on dev and prod environments To be able to run the Spring Batch / Spring Boot application on different environments I make use of the Spring Profiles capability. By default the application runs with development data (database). But if I want the job to use the production database I have to do the following:provide the following environment argument  -Dspring.profiles.active=prod have the production database properties configured in the application-prod.properties file in the classpath, right besides the default application.properties fileSummary In this tutorial we’ve learned how to configure a Spring Batch project with Spring Boot and Java configuration, how to use some of the most common readers in batch processing, how to configure some simple jobs, and how to start Spring Batch jobs from a main method.Reference: Spring Batch Tutorial with Spring Boot and Java Configuration from our JCG partner Adrian Matei at the Codingpedia.org blog....
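The reason for adding the current date as a job parameter is worth illustrating: Spring Batch identifies a JobInstance by the job name plus its identifying parameters, so relaunching with identical parameters is treated as the same instance rather than a new one. Below is a simplified, framework-free sketch of that identity rule; the JobKeyDemo class and its instanceKey helper are hypothetical stand-ins for illustration, not Spring Batch's actual implementation:

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

public class JobKeyDemo {

    // Hypothetical stand-in for Spring Batch's job-instance identity:
    // a JobInstance is keyed by the job name plus its identifying parameters.
    static String instanceKey(String jobName, Map<String, String> params) {
        // TreeMap sorts the keys so the string form is stable
        return jobName + "|" + new TreeMap<>(params);
    }

    public static void main(String[] args) {
        Set<String> knownInstances = new HashSet<>();

        Map<String, String> run1 = Map.of("date", "2014-10-01T10:00:00");
        Map<String, String> run2 = Map.of("date", "2014-10-01T11:30:00");

        // A fresh "date" parameter per launch yields a new key each time,
        // which is why the tutorial adds new Date() to the JobParameters.
        System.out.println(knownInstances.add(instanceKey("addNewPodcastJob", run1))); // true  (new instance)
        System.out.println(knownInstances.add(instanceKey("addNewPodcastJob", run2))); // true  (new instance)
        // Re-launching with identical parameters is NOT a new instance:
        System.out.println(knownInstances.add(instanceKey("addNewPodcastJob", run1))); // false (same instance)
    }
}
```

This is also why a completed real job refuses to run again with unchanged parameters: Spring Batch raises JobInstanceAlreadyCompleteException, one of the exceptions declared on the main() method above.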
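The profile mechanism described in section 6.1 boils down to layering property sources: application.properties supplies the defaults, and application-<profile>.properties overrides them when that profile is active. Here is a minimal, framework-free sketch of that override idea using plain java.util.Properties; the property names and values are made up for illustration and the loading logic is deliberately simplified, not Spring's actual implementation:

```java
import java.util.Properties;

public class ProfilePropertiesDemo {

    // Simplified stand-in for Spring's profile handling: the defaults play the
    // role of application.properties, the overrides that of application-prod.properties.
    static Properties load(String activeProfile) {
        Properties defaults = new Properties();
        defaults.setProperty("db.url", "jdbc:mysql://localhost/dev_db"); // "application.properties"

        // Chained Properties fall back to the defaults for any key not overridden
        Properties merged = new Properties(defaults);
        if ("prod".equals(activeProfile)) {
            merged.setProperty("db.url", "jdbc:mysql://prod-host/prod_db"); // "application-prod.properties"
        }
        return merged;
    }

    public static void main(String[] args) {
        // Mirrors launching the application with -Dspring.profiles.active=prod
        String profile = System.getProperty("spring.profiles.active", "dev");
        System.out.println(load(profile).getProperty("db.url"));
    }
}
```

Run without any system property this prints the development URL; run with -Dspring.profiles.active=prod it prints the production one, just as the Spring Boot application switches databases.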
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact