

Getting Started with Spring Social

Like me, you will not have failed to notice the current rush to ‘socialize’ applications, whether it’s adding a simple Facebook ‘Like’ button, a whole bunch of ‘share’ buttons or displaying timeline information. Everybody’s doing it, including the Guys at Spring, and true to form they’ve come up with a rinky-dinky API called Spring Social that allows you to integrate your application with a number of Software as a Service (SaaS) feeds such as Twitter, Facebook and LinkedIn.

This, and the following few blogs, take a look at the whole social scene by demonstrating the use of Spring Social, and I’m going to start by getting very basic. If you’ve seen the Spring Social samples you’ll know that they contain a couple of very good and complete ‘quickstart’ apps: one for Spring 3.0.x and another for Spring 3.1.x. In looking into these apps, the thing that struck me was the number of concepts you have to learn in order to appreciate just what’s going on: configuration, external authorization, feed integration, credential persistence and so on. Most of this complexity stems from the fact that your user will need to log in to their Software as a Service (SaaS) account, such as Twitter, Facebook or QZone, so that your application can access their data[1]. This is further complicated by the large number of SaaS providers around, together with the different authorization protocols they use. So, I thought that I’d try to break all this down into its individual components and explain how to build a useful app; however, I’m going to start with a little background.

The Guys at Spring have quite rightly realized that there are so many SaaS providers on the Internet that they’ll never be able to code modules for all of them, so they’ve split the functionality into two parts, with the first part comprising the spring-social-core and spring-social-web modules that provide the basic connectivity and authorization code for every SaaS provider.
Providing all this sounds like a mammoth task, but it’s simplified by the fact that to be a SaaS provider you need to implement what’s known as the OAuth protocol. I’m not going into OAuth details just yet, but in a nutshell the OAuth protocol performs a complicated little jig that allows the user to share their SaaS data (i.e. stuff they have on Facebook etc.) with your application without the user handing their credentials to your application. There are at least three versions (1.0, 1.0a and 2.0), and SaaS providers are free to implement any version they like, often adding their own proprietary features.

The second part of this split consists of the SaaS provider modules that know how to talk to the individual service providers’ servers at the lowest level. The Guys at Spring currently provide the basic services, which to the Western world are Facebook, LinkedIn and Twitter. The benefit of taking this extensively modular approach is that there’s also a whole bunch of other community-led modules that you can use:

Spring Social 500px
Spring Social BitBucket
Spring Social Digg
Spring Social Dropbox
Spring Social Flattr
Spring Social Flickr
Spring Social Foursquare
Spring Social Google
Spring Social Instagram
Spring Social Live (Windows Live)
Spring Social Miso
Spring Social Mixcloud
Spring Social Nk
Spring Social Salesforce
Spring Social SoundCloud
Spring Social Tumblr
Spring Social Viadeo
Spring Social Vkontakte
Spring Social Weibo
Spring Social Xing
Spring Social Yammer
Spring Social Security Module
Spring Social Grails Plugin

This, however, is only a fraction of the number of services available: to see how large this list is, visit the AddThis web site and find out what services they support.

Back to the Code

Now, if you’re like me, then when it comes to programming you’ll hate security: from a development viewpoint it’s a lot of faff, stops you from writing code and makes your life difficult, so I thought I’d start off by throwing all that stuff away and write a
small app that displays some basic SaaS data. This, it turns out, is possible because some SaaS providers, such as Twitter, serve both private and public data. Private data is the stuff that you need to log in for, whilst public data is available to anyone. In today’s scenario, I’m writing a basic app that displays a Twitter user’s timeline using the Spring Social Twitter module, and all you’ll need for this is the screen name of a Twitter user.

To create the application, the first step is to create a basic Spring MVC Project using the template section of the SpringSource Toolkit Dashboard. This provides a webapp that’ll get you started. The second step is to add the following dependencies to your pom.xml file:

```xml
<!-- Twitter API -->
<dependency>
    <groupId></groupId>
    <artifactId>spring-social-twitter</artifactId>
    <version>${}</version>
</dependency>

<!-- CGLIB, only required and used for @Configuration usage:
     could be removed in a future release of Spring -->
<dependency>
    <groupId>cglib</groupId>
    <artifactId>cglib-nodep</artifactId>
    <version>2.2</version>
</dependency>
```

The first dependency above is for Spring Social’s Twitter API, whilst the second is required for configuring the application using Spring 3’s @Configuration annotation. Note that you’ll also need to specify the Twitter API version number by adding:

```xml
<>1.0.2.RELEASE</>
```

…to the <properties> section at the top of the file. Step 3 is where you need to configure Spring. If you look at the Spring Social sample code, you’ll notice that the Guys at Spring configure their apps using Java and the Spring 3 @Configuration annotation. This is because Java-based configuration allows you a lot more flexibility than the original XML-based configuration.
```java
@Configuration
public class SimpleTwitterConfig {

    private static Twitter twitter;

    public SimpleTwitterConfig() {
        if (twitter == null) {
            twitter = new TwitterTemplate();
        }
    }

    /**
     * A proxy to a request-scoped object representing the simplest Twitter API
     * - one that doesn't need any authorization
     */
    @Bean
    @Scope(value = "request", proxyMode = ScopedProxyMode.INTERFACES)
    public Twitter twitter() {
        return twitter;
    }
}
```

All that the code above does is provide Spring with a simple TwitterTemplate object via its Twitter interface. Using @Configuration is strictly overkill for this basic application, but I will be building upon it in future blogs. For more information on the @Configuration annotation and Java-based configuration, take a look at: Spring’s Java Based Dependency Injection and More Spring Java based DI.

Having written the configuration class, the next thing to do is to sort out the controller. In this simple example, I’ve used a straightforward @RequestMapping handler that deals with URLs that look something like this:

```html
<a href="timeline?id=roghughe">Grab Twitter User Time Line for @roghughe</a><br/>
```

…and the code looks something like this:

```java
@Controller
public class TwitterTimeLineController {

    private static final Logger logger = LoggerFactory.getLogger(TwitterTimeLineController.class);

    private final Twitter twitter;

    @Autowired
    public TwitterTimeLineController(Twitter twitter) {
        this.twitter = twitter;
    }

    @RequestMapping(value = "timeline", method = RequestMethod.GET)
    public String getUserTimeline(@RequestParam("id") String screenName, Model model) {

        logger.info("Loading Twitter timeline for :" + screenName);

        List<Tweet> results = queryForTweets(screenName);

        // Optional step - format the tweets into HTML
        formatTweets(results);

        model.addAttribute("tweets", results);
        model.addAttribute("id", screenName);
        return "timeline";
    }

    private List<Tweet> queryForTweets(String screenName) {

        TimelineOperations timelineOps = twitter.timelineOperations();
        List<Tweet> results = timelineOps.getUserTimeline(screenName);

        logger.info("Found Twitter timeline for :" + screenName + " adding " + results.size() + " tweets to model");
        return results;
    }

    private void formatTweets(List<Tweet> tweets) {

        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        StateMachine<TweetState> stateMachine = createStateMachine(bos);

        for (Tweet tweet : tweets) {
            bos.reset();
            String text = tweet.getText();
            stateMachine.processStream(new ByteArrayInputStream(text.getBytes()));
            String out = bos.toString();
            tweet.setText(out);
        }
    }

    private StateMachine<TweetState> createStateMachine(ByteArrayOutputStream bos) {

        StateMachine<TweetState> machine = new StateMachine<TweetState>(TweetState.OFF);

        // Add some actions to the state machine
        machine.addAction(TweetState.OFF, new DefaultAction(bos));
        machine.addAction(TweetState.RUNNING, new DefaultAction(bos));
        machine.addAction(TweetState.READY, new ReadyAction(bos));
        machine.addAction(TweetState.HASHTAG, new CaptureTag(bos, new HashTagStrategy()));
        machine.addAction(TweetState.NAMETAG, new CaptureTag(bos, new UserNameStrategy()));
        machine.addAction(TweetState.HTTPCHECK, new CheckHttpAction(bos));
        machine.addAction(TweetState.URL, new CaptureTag(bos, new UrlStrategy()));

        return machine;
    }
}
```

The getUserTimeline method contains three steps: first it gets hold of some tweets, then it does a bit of formatting, and finally it puts the results into the model. In terms of this blog, getting hold of the tweets is the most important point, and you can see that this is done in the List<Tweet> queryForTweets(String screenName) method. This method has two steps: use the Twitter object to get hold of a TimelineOperations instance, and then use that object to query a timeline with the screen name as the argument. If you look at the Twitter interface, it acts as a factory object returning other objects that deal with different Twitter features: timelines, direct messaging, searching etc.
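The state machine used by formatTweets lives in the author’s separate State Machine project, so its internals aren’t shown here. As a rough illustration of what tweet-to-HTML formatting does, here is a hypothetical, much simpler regex-based sketch; the class name, patterns and URLs are my own choices, not from the post, and it ignores the edge cases the state machine presumably handles:

```java
import java.util.regex.Pattern;

public class TweetFormatter {

    // Match #hashtags and @mentions made of word characters
    private static final Pattern HASHTAG = Pattern.compile("#(\\w+)");
    private static final Pattern MENTION = Pattern.compile("@(\\w+)");

    // Wrap hashtags and mentions in anchor tags pointing at Twitter
    public static String toHtml(String text) {
        text = HASHTAG.matcher(text)
                .replaceAll("<a href=\"https://twitter.com/search?q=%23$1\">#$1</a>");
        text = MENTION.matcher(text)
                .replaceAll("<a href=\"https://twitter.com/$1\">@$1</a>");
        return text;
    }

    public static void main(String[] args) {
        System.out.println(toHtml("Loving #spring thanks @roghughe"));
    }
}
```

A regex pass like this trades correctness at the margins (URLs containing ‘#’, punctuation inside tags) for simplicity, which is exactly the kind of case where the author’s state-machine approach earns its keep.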
I guess that this is because the developers realized that Twitter itself encompasses so much functionality that if all the required methods were in one class, they’d have a God Object on their hands. I’ve also included the optional step of converting the tweets into HTML. To do this I’ve used the JAR from my State Machine project and blog, and you can see how this is done in the formatTweets(...) method. After putting the list of tweets into the model as an attribute, the final thing to do is write a JSP to display the data:

```jsp
<ul>
  <c:forEach items="${tweets}" var="tweet">
    <li>
      <img src="${tweet.profileImageUrl}" align="middle"/>
      <c:out value="${tweet.createdAt}"/><br/>
      <c:out value="${tweet.text}" escapeXml="false"/>
    </li>
  </c:forEach>
</ul>
```

If you implement the optional anchor tag formatting, then the key thing to remember is to ensure that the formatted tweet’s HTML is picked up by the browser. This is achieved either by using the escapeXml="false" attribute of the c:out tag or by placing ${tweet.text} directly into the JSP. I haven’t included any styling or a fancy front end in this sample, so if you run the code[2] you should get something like this:

And that completes my simple introduction to Spring Social, but there’s still a lot of ground to cover. In my next blog, I’ll be taking a look at what’s going on in the background.

[1] I’m guessing that there are lots of privacy and data-protection legal issues to consider here, especially if you use this API to store your users’ data, and I’d welcome comments and observations on this.

[2] The code is available on GitHub at git:// in the social project.

Reference: Getting Started with Spring Social from our JCG partner Roger Hughes at the Captain Debug’s Blog blog....

Spring Framework 3.2 M1 Released

SpringSource just announced the first milestone release toward Spring 3.2. The new release is now available from the SpringSource repository; check out a quick tutorial on resolving the artifacts via Maven. This release includes:

- Initial support for asynchronous @Controller methods
- Early support for JCache-based cache providers
- Significant performance improvements in autowiring of non-singleton beans
- Initial delay support for @Scheduled
- Ability to choose between multiple executors with @Async
- Enhanced bean profile selection using the not (!) operator
- 48 bugs fixed, 8 new features and 36 improvements implemented

This is also the first release since Spring’s move to GitHub and the first using the new Gradle build.

Some Java Code Geeks articles to kick-start you with Spring:

- Set up a Spring 3 development environment
- Gradle archetype for Spring applications
- Transaction configuration with JPA and Spring 3.1
- Spring 3.1 Cache Abstraction Tutorial

Happy Spring coding..!!...

Rise above the Cloud hype with OpenShift

Are you tired of requesting a new development machine for your application? Are you sick of having to set up a new test environment for your application? Do you just want to focus on developing your application in peace, without ‘dorking with the stack’ all of the time? We hear you. We have been there too. Have no fear, OpenShift is here! In this article we will walk you through the simple steps it takes to set up not one, not two, not three, but up to five new machines in the Cloud with OpenShift. You will have your applications deployed for development, testing or to present them to the world at large in minutes. No more messing around.

We start with an overview of what OpenShift is, where it comes from and how you can get the client tooling set up on your workstation. You will then be taken on a tour of the client tooling as it applies to the entry level of OpenShift, called Express. In minutes you will be off and back to focusing on your application development, deploying to test it in OpenShift Express. When finished, you will just discard your test machine and move on. When you have mastered this, it will be time to ramp up to the next level with OpenShift Flex. This opens up your options a bit so you can do more with complex applications and deployments that might need a bit more firepower. After this you will be fully capable of ascending into the OpenShift Cloud when you choose, where you need it and at a moment’s notice. This is how development is supposed to be: development without stack distractions.

Introduction

There is a great amount of hype in the IT world right now about the Cloud. There is no shortage of acronyms for the various areas that have been carved out, like IaaS, PaaS and SaaS. OpenShift is a Platform as a Service (PaaS) from Red Hat which provides you with a platform to run your applications. As a developer, you want to look at the environment where you put your applications as just a service that is being provided.
You don’t want to bother with how that service is constructed from a set of components, how they are configured or where they are running. You just want to make use of the service on offer to deploy, develop, test and run your application. At this basic level, OpenShift provides a platform for your Java applications.

First let’s take a quick look at where OpenShift comes from. It started at a company called Makara, based in Redwood City, Calif., which provided solutions to enable organizations to deploy, manage, monitor and scale their applications on both private and public clouds. Red Hat acquired Makara in November of 2010, and in the following year merged Red Hat technologies into a new project called OpenShift[1]. The first launch initially provided two levels of service[2]: a shared hosting solution called Express and a dedicated hosting solution known as Flex. What makes this merging of technologies interesting for a Java developer is that Red Hat has included its next-generation application platform, based on JBoss AS 7, in OpenShift[3]. This brings a lightning-fast application platform for all your development needs.

OpenShift Express

The OpenShift website states, “Express is a free, cloud-based application platform for Java, Perl, PHP, Python, and Ruby applications. It’s super-simple—your development environment is also your deployment environment: git push, and you’re in the cloud.” This piques the interest, so let’s give it a try and see if we can raise our web application into the clouds. For this we have our jBPM Migration web application[4], which we will use as a running example for the rest of this exercise. Getting started in Express is well documented on the website as a quick start[5], which you can get to once you have signed up for a Red Hat Cloud (rhcloud) account.
This quick start provides the four steps you need to get our application online, starting with the installation of the necessary client tools. This is outlined for Red Hat Enterprise Linux (RHEL), Fedora Linux, generic Linux distributions, Mac OS X and Windows. For RHEL and Fedora it is a simple package installation; for the rest it is a Ruby-based gem installation, which we will leave for the reader to apply to her system. Once the client tooling is installed, there are several commands of the form rhc-<command>. There is an online interface available, but most developers prefer the control offered by the command-line client tools, so we will be making use of these. Here is an overview of what is available, with a brief description of each:

rhc-create-domain – used to bind a registered rhcloud user to a domain in rhcloud. You can have a maximum of one domain per registered rhcloud user.
rhc-create-app – used to create an application for a given rhcloud user, a given development environment (Java, Ruby, Python, Perl, PHP) and a given rhcloud domain. You can create up to five applications for a given domain. This will generate the full URI for your rhcloud instance, set up your rhcloud instance based on the environment you chose and, by default, create a local git project for your chosen development environment.
rhc-snapshot – used to create a local backup of a given rhcloud instance.
rhc-ctl-app – used to control a given rhcloud application. Here you can add a database, check the status of the instance, start, stop, etc.
rhc-tail-files – used to connect to a rhcloud application’s log files and dump them into your command shell.
rhc-user-info – used to look at a given rhcloud user, the defined domains and created applications.
rhc-chk – used to run a simple configuration check on your setup.

Create your domain

To get started with our demo application we need to do a few simple things to get an Express instance set up for hosting our Java application, beginning with a domain.

```shell
# We need to create the domain for Express to start setting up
# our URL with the client tooling using
#   rhc-create-domain -n domainname -l rhlogin
#
$ rhc-create-domain --help
Usage: /usr/bin/rhc-create-domain
Bind a registered rhcloud user to a domain in rhcloud.

NOTE: to change ssh key, please alter your ~/.ssh/libra_id_rsa and
~/.ssh/ key, then re-run with --alter

-n|--namespace  namespace  Namespace for your application(s) (alphanumeric - max 16 chars) (required)
-l|--rhlogin    rhlogin    Red Hat login (RHN or OpenShift login with OpenShift Express access) (required)
-p|--password   password   RHLogin password (optional, will prompt)
-a|--alter                 Alter namespace (will change urls) and/or ssh key
-d|--debug                 Print Debug info
-h|--help                  Show Usage info

# So we set up one for our Java application. Note that we have already
# set up my ssh keys for OpenShift; if you have not yet done that,
# then it will walk you through it.
#
$ rhc-create-domain -n inthe -l [rhcloud-user] -p [mypassword]
OpenShift Express key found at /home/[homedir]/.ssh/libra_id_rsa.  Reusing...
Contacting
Creation successful

You may now create an application.  Please make note of your local config file
in /home/[homedir]/.openshift/express.conf which has been created and populated for you.
```

Create your application

Next we want to create our application, which means we want to tell OpenShift Express which stack we need. This is done with the rhc-create-app client tool.

```shell
# Let's take a look at the options available before we set up a Java
# instance for our application.
#
$ rhc-create-app --help
Contacting to obtain list of cartridges... (please excuse the delay)
Usage: /usr/bin/rhc-create-app
Create an OpenShift Express app.

-a|--app       application  Application name (alphanumeric - max 16 chars) (required)
-t|--type      type         Type of app to create (perl-5.10, jbossas-7.0, wsgi-3.2, rack-1.1, php-5.3) (required)
-l|--rhlogin   rhlogin      Red Hat login (RHN or OpenShift login with OpenShift Express access) (Default: xxxxxxxxx)
-p|--password  password     RHLogin password (optional, will prompt)
-r|--repo      path         Git Repo path (defaults to ./$app_name)
-n|--nogit                  Only create remote space, don't pull it locally
-d|--debug                  Print Debug info
-h|--help                   Show Usage info

# It seems we can choose between several, but we want the jbossas-7.0
# stack (called a cartridge). Provide a user, password and location
# for the git repo to be created, called 'jbpmmigration'; see the
# documentation for the defaults. Let's watch the magic happen!
#
$ rhc-create-app -a jbpmmigration -t jbossas-7.0 -l [rhcloud-user] -p [mypassword] -r /home/[homedir]/git-projects/jbpmmigration

Found a bug? Post to the forum and we'll get right on it.
    IRC: #openshift on freenode
    Forums:

Attempting to create remote application space: jbpmmigration
Contacting
API version:    1.1.1
Broker version: 1.1.1

RESULT:
Successfully created application: jbpmmigration

Checking ~/.ssh/config
Contacting
Found in ~/.ssh/config... No need to adjust

Now your new domain name is being propagated worldwide (this might take a minute)...
Pulling new repo down
Warning: Permanently added ',' (RSA) to the list of known hosts.
Confirming application jbpmmigration is available
Attempt # 1

Success!  Your application is now published.
The remote repository is located here:

    ssh://

To make changes to your application, commit to jbpmmigration/.
Then run 'git push' to update your OpenShift Express space.
```

If we take a look at the given path for the repo, we find a git-projects/jbpmmigration git repository. Note that if you decide to alter your domain name, you will have to adjust the git repository config file to reflect where the remote repository is (see the line above starting with ‘ssh:’). Also, the application’s page is already live; it is just a splash screen to get you started, so now we move on to deploying our existing jBPM Migration project. First let’s look at the provided README in our git project, which gives some insight into the repository layout.

```
Repo layout
===========
deployments/       - location for built wars (Details below)
src/               - maven src structure
pom.xml            - maven build file
.openshift/        - location for openshift specific files
.openshift/config/ - location for configuration files such as standalone.xml
                     (used to modify jboss config such as datasources)
../data            - For persistent data (also in env var OPENSHIFT_DATA_DIR)
.openshift/action_hooks/build - Script that gets run every push,
                                just prior to starting your app
```

For this article we will only examine the deployments and src directories. You can just drop in your WAR files, remove the pom.xml file in the root of the project, and they will be automatically deployed. If you want to deploy exploded WAR files, then you just add a file called ‘.dodeploy’ as outlined in the README file. For real project development we want to push our code through the normal src directory structure, and this is also possible by working with the provided pom.xml file. The README file gives all the details needed to get you started. Our demo application, jbpmmigration, also comes with a README file that provides the instructions to add the project contents to our new git repository, so we will run these commands to pull the files into our local project.

```shell
# placing our application into our express git repo.
#
$ cd jbpmmigration
$ git remote add upstream -m master git://
$ git pull -s recursive -X theirs upstream master

# now we need to push the content.
#
$ git push origin

[jbpmmigration maven build log output removed]
...
remote: [INFO] ------------------------------------------------------------------------
remote: [INFO] BUILD SUCCESS
remote: [INFO] ------------------------------------------------------------------------
remote: [INFO] Total time: 3.114s
remote: [INFO] Finished at: Mon Nov 14 10:26:57 EST 2011
remote: [INFO] Final Memory: 5M/141M
remote: [INFO] ------------------------------------------------------------------------
remote: ~/git/jbpmmigration.git
remote: Running .openshift/action_hooks/build
remote: Running .openshift/action_hooks/deploy
remote: Starting application...
remote: Done
remote: Running .openshift/action_hooks/post_deploy
To ssh://
   410a1c9..7ea0003  master -> master
```

As you can see, we have now pushed our content to the rhcloud instance we created; it deployed the content and started our instance. Now we should be able to find our application online. The final step, when you are finished working on this application and want to free it up for a new one, is to make a backup with the rhc-snapshot client tool and then remove your instance with the rhc-ctl-app client tool.

```shell
# Ready to get rid of our application now.
#
$ rhc-ctl-app -a jbpmmigration -l eschabell -c destroy
Password: ********

Contacting
!!!! WARNING !!!! WARNING !!!! WARNING !!!!
You are about to destroy the jbpmmigration application.

This is NOT reversible, all remote data for this application will be removed.
Do you want to destroy this application (y/n): y

Contacting
API version:    1.1.1
Broker version: 1.1.1

RESULT:
Successfully destroyed application: jbpmmigration
```

As you can see, it is really easy to get started with the five free instances you have to play with for your application development. You might notice that there are limitations: there is no integrated monitoring tooling, auto-scaling features are missing, and control over the configuration is limited.
For those needing more access and features, take a look at the next step up, OpenShift Flex[6]. This completes our tour of the OpenShift Express project, where we provided you with a glimpse of the possibilities that await you and your applications. It was a breeze to create your domain, define your application’s needs and import your project into the provided git project. After pushing your changes to the new Express instance, you are off and testing your application development in the cloud. This is real. This is easy. Now get out there and raise your code above the cloud hype.

Related links:

[1] OpenShift, Project overview
[2] OpenShift
[3] JBoss AS7 in the Cloud
[4] jBPM Migration project web application
[5] OpenShift Express Quick Start
[6] OpenShift Flex Quick Start

Reference: Rise above the Cloud hype with OpenShift from our JCG partner Eric D. Schabell at the Thoughts on Middleware, Linux, software, cycling and other news… blog....

How I explained Dependency Injection to My Team

Recently our company started developing a new Java-based web application, and after some evaluation we decided to use Spring. But many of the team members were not aware of Spring and the Dependency Injection principles, so I was asked to give a crash course on what Dependency Injection is, plus the basics of Spring. Instead of presenting all the theory about IoC/DI, I thought of explaining it with an example.

Requirement: We will get a Customer Address and we need to validate it. After some evaluation we thought of using the Google Address Validation Service.

Legacy (bad) approach: Just create an AddressVerificationService class and implement the logic. Assume GoogleAddressVerificationService is a service provided by Google which takes an Address as a String and returns longitude/latitude.

```java
class AddressVerificationService {

    public String validateAddress(String address) {
        GoogleAddressVerificationService gavs = new GoogleAddressVerificationService();
        String result = gavs.validateAddress(address);
        return result;
    }
}
```

Issues with this approach:
1. If you want to change your Address Verification Service provider, you need to change the logic.
2. You can’t unit test with a dummy AddressVerificationService (using mock objects).

For some reason the client asks us to support multiple Address Verification Service providers, and we need to determine which service to use at runtime. To accommodate this you might think of changing the above class as below:

```java
class AddressVerificationService {

    // This method validates the given address and returns longitude/latitude details.
    public String validateAddress(String address) {
        String result = null;
        int serviceCode = 2; // read this code value from a config file
        if (serviceCode == 1) {
            GoogleAddressVerificationService googleAVS = new GoogleAddressVerificationService();
            result = googleAVS.validateAddress(address);
        } else if (serviceCode == 2) {
            YahooAddressVerificationService yahooAVS = new YahooAddressVerificationService();
            result = yahooAVS.validateAddress(address);
        }
        return result;
    }
}
```

Issues with this approach:
1. Whenever you need to support a new service provider, you need to add or change logic using if-else-if.
2. You can’t unit test with a dummy AddressVerificationService (using mock objects).

IoC/DI approach: In the above approaches, AddressVerificationService is taking control of creating its dependencies, so whenever there is a change in its dependencies, AddressVerificationService itself has to change. Now let us rewrite AddressVerificationService using the IoC/DI pattern.

```java
class AddressVerificationService {

    private AddressVerificationServiceProvider serviceProvider;

    public AddressVerificationService(AddressVerificationServiceProvider serviceProvider) {
        this.serviceProvider = serviceProvider;
    }

    public String validateAddress(String address) {
        return this.serviceProvider.validateAddress(address);
    }
}

interface AddressVerificationServiceProvider {
    public String validateAddress(String address);
}
```

Here we are injecting the AddressVerificationService dependency, AddressVerificationServiceProvider. Now let us implement AddressVerificationServiceProvider with multiple provider services.
```java
class YahooAVS implements AddressVerificationServiceProvider {
    @Override
    public String validateAddress(String address) {
        System.out.println("Verifying address using YAHOO AddressVerificationService");
        return yahooAVSAPI.validate(address);
    }
}

class GoogleAVS implements AddressVerificationServiceProvider {
    @Override
    public String validateAddress(String address) {
        System.out.println("Verifying address using Google AddressVerificationService");
        return googleAVSAPI.validate(address);
    }
}
```

Now the client can choose which service provider’s service to use, as follows:

```java
AddressVerificationService verificationService = null;
AddressVerificationServiceProvider provider = null;
provider = new YahooAVS();   // to use the Yahoo AVS
provider = new GoogleAVS();  // to use the Google AVS
verificationService = new AddressVerificationService(provider);
String lnl = verificationService.validateAddress("HitechCity, Hyderabad");
System.out.println(lnl);
```

For unit testing we can implement a mock AddressVerificationServiceProvider.

```java
class MockAVS implements AddressVerificationServiceProvider {
    @Override
    public String validateAddress(String address) {
        System.out.println("Verifying address using MOCK AddressVerificationService");
        return "<response><longitude>123</longitude><latitude>4567</latitude></response>";
    }
}

AddressVerificationServiceProvider provider = null;
provider = new MockAVS(); // to use the mock AVS
AddressVerificationService verificationService = new AddressVerificationService(provider);
String lnl = verificationService.validateAddress("Somajiguda, Hyderabad");
System.out.println(lnl);
```

With this approach we eliminated the issues of the non-IoC/DI-based approaches above:
1. We can provide support for as many providers as we wish: just implement AddressVerificationServiceProvider and inject it.
2. We can unit test using dummy data via a mock implementation.

So by following the Dependency Injection principle we can create interface-based, loosely coupled and easily testable services.
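The snippets above can be consolidated into a single runnable sketch, which is handy for trying the pattern end to end. The class and interface names follow the article; the nesting into one outer class and the trimmed method bodies are my own choices to keep it self-contained (the real Yahoo/Google API calls are replaced by the mock):

```java
public class DiDemo {

    // The abstraction the service depends on
    interface AddressVerificationServiceProvider {
        String validateAddress(String address);
    }

    // The service receives its dependency via the constructor
    // rather than creating it itself
    static class AddressVerificationService {
        private final AddressVerificationServiceProvider provider;

        AddressVerificationService(AddressVerificationServiceProvider provider) {
            this.provider = provider;
        }

        String validateAddress(String address) {
            return provider.validateAddress(address);
        }
    }

    // Stand-in for the article's MockAVS: returns canned data for tests
    static class MockAVS implements AddressVerificationServiceProvider {
        @Override
        public String validateAddress(String address) {
            return "<response><longitude>123</longitude><latitude>4567</latitude></response>";
        }
    }

    public static void main(String[] args) {
        AddressVerificationService service =
                new AddressVerificationService(new MockAVS());
        System.out.println(service.validateAddress("Somajiguda, Hyderabad"));
    }
}
```

Swapping in a real provider is then a one-line change at the injection point, which is precisely what a container such as Spring automates.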
Reference: How I explained Dependency Injection to My Team from our JCG partner Siva Reddy at the My Experiments on Technology blog....

Proof of Concept: Play! Framework

We are starting a new project and we have to choose the web framework. Our default choice is Grails, because the team already has experience with it, but I decided to give Play! and Scala a chance. Play! has a lot of cool things, for which it received many pluses in my evaluation, but in the end we decided to stick with Grails. It's not that Grails is perfect and meets all the requirements, but Play! is not sufficiently better to make us switch. Anyway, here's a list of areas where Play! failed my evaluation. Please correct me if I've got something wrong:

- template engine – UI developers were furious with the template engine used in the previous project – FreeMarker – because it wasn't null-safe: it blew up each time a chain of invocations encountered null. Play templates use Scala, and so they are not null-safe either. Scala has a different approach to nulls – Option – but third-party libraries and our core code will be in Java, and we'd have to introduce some null-to-Option conversion, and it will get ugly. This question shows a way to handle the case, but the comments make me hesitant to use it. That's only part of the story – with all my respect and awe for static typing, the UI layer must use a simple scripting language. EL/JSTL is a good example: it doesn't explode if it doesn't find some value.
- static assets – this is hard, and I couldn't find anything about using Play! with a CDN or how to merge multiple assets into one file. Is there an easy way to do that?
- IDE support – the only way to edit the templates is through the Scala editor, but it doesn't have HTML support. This is not a deal-breaker, but tooling around the framework is a good thing to have.
- community – there is a good community around Play!, but I viewed it compared to Grails. Play! is a newer framework, and it has 2.5k questions on Stack Overflow, while Grails has 7.5k.
- module fragmentation – some of the important modules I found were only for 1.x, without direct replacements in 2.0.

Other factors:

- I won't be working with it – UI developers will. Although I might be fine with all the type-safety and peculiar Scala concepts, UI developers probably will not be.
- Scala is ugly – now bash me for that. Yes, I'm not a Scala guy, but this being a highly upvoted answer kind of drove me off. It looks like a low-level programming language, and, relevant to the previous point, it definitely doesn't look OK to our UI developers.
- change of programming model – I mentioned Option vs null, but there are tons of other things. This is not a problem of Scala, of course; it is even part of what makes it the cool thing that has generated all the hype, but it's a problem that too many people would have to switch their perspective at the same time.
- We have been using Spring and Spring MVC a lot, and Play's integration with Spring isn't as smooth as that of Grails (which is built on top of Spring MVC).

As you can see, many of the problems are not universal – they are relevant to our experience and expectations. You may not need to use a CDN, and your UI developers may be Scala gurus instead of Groovy developers. And as I said in the beginning, Play! definitely looks good and has a lot of cool things that I omitted here (the list would be long).

Reference: Proof of Concept: Play! Framework from our JCG partner Bozhidar Bozhanov at the Bozho's tech blog blog....

Software Architects Need Not Apply

I saw an online job posting several years ago that listed a set of desired software development and programming skills and concluded with the statement, “Architects Need Not Apply.” Joe Winchester has written that Those Who Can, Code; Those Who Can’t, Architect (beware an extremely obnoxious Flash-based popup) and has stated that part of his proposed Hippocratic Oath for Programmers would be to “swear that my desire to enter the computing profession is not to become an architect.” Andriy Solovey has written the post Do We Need Software Architects? 10 Reasons Why Not and Sergey Mikhanov has proclaimed Why I don’t believe in software architects. More recent posts have talked of Frustration with the Role and Purpose of Architects on Software Projects and The frustrated architect. In this post, I look at some of the reasons software architects are often held in low esteem in the software development community.

I have been (and am) a software architect at times and a software developer at times. Often, I must move rapidly between the two roles. This has allowed me to see both sides of the issue, and I believe that the best software architects are those who do architecture work, design work, and lower-level implementation coding and testing.

In Chapter 5 (“The Second-System Effect”) of The Mythical Man-Month, Frederick P. Brooks, Jr., wrote of the qualities and characteristics of a successful architect.
These are listed next:

- An architect “suggests” (“not dictates”) implementation because the programmer/coder/builder has the “inventive and creative responsibility.”
- An architect should have an idea of how to implement his or her architecture, but should be “prepared to accept any other way that meets the objectives as well.”
- An architect should be “ready to forego credit for suggested improvements.”
- An architect should “listen to the builder’s suggestions for architecture improvements.”
- An architect should strive for work to be “spare and clean,” avoiding “functional ornamentation” and “extrapolation of functions that are obviated by changes in assumptions and purposes.”

Although the first edition of The Mythical Man-Month was published more than 35 years ago in 1975, violations of Brooks’s suggestions for being a successful architect remain, in my opinion, the primary reason why software architecture as a discipline has earned some disrespect in the software development community.

One of the problems developers often have with software architects is the feeling that the software architect is micromanaging their technical decisions. As Brooks suggests, successful architects need to listen to the developers’ alternative suggestions and recommendations for improvements. Indeed, in some cases, the open-minded architect might even be willing to go with a significant architectural change if the benefits outweigh the costs. In my opinion, good architects (like good software developers) should be willing to learn and even expect to learn from others (including developers).

A common complaint among software developers regarding architects is that architects are so high-level that they miss important details or ignore important concerns with their idealistic architectures. I have found that I’m a better architect when I have recently worked with low-level software implementation.
The farther and longer removed I am from design and implementation, the less successful I can be in helping architect the best solutions. Software developers are more confident in the architect’s vision when they know that the architect is capable of implementing the architecture himself or herself if needed. An architect needs to be working among the masses and not lounging in the ivory tower. Indeed, it would be nice if the title “software architect” were NOT frequently seen as a euphemism for “can no longer code.”

The longer I work in the software development industry, the more convinced I am that “spare and clean” should be the hallmarks of all good designs and architectures. Modern software principles seem to support this. Concepts like Don’t Repeat Yourself (DRY) and You Ain’t Gonna Need It (YAGNI) have become popular for good reason.

Some software architects have an inflated opinion of their own value due to their title or other recognition. For these types, it is very difficult to follow Brooks’s recommendation to “forego credit” for architecture and implementation improvements. Software developers are much more likely to embrace the architect who shares credit as appropriate and does not take credit for the developers’ ideas and work.

I think there is a place for software architecture, but a portion of our fellow software architects have harmed the reputation of the discipline. Following Brooks’s suggestions can begin to improve the reputation of software architects and their discipline but, more importantly, can lead to better and more efficient software solutions.

Reference: Software Architects Need Not Apply from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

Java Thread deadlock – Case Study

This article will describe the complete root cause analysis of a recent Java deadlock problem observed in a Weblogic 11g production system running on the IBM JVM 1.6. This case study will also demonstrate the importance of mastering Thread Dump analysis skills, including for the IBM JVM Thread Dump format.

Environment specifications
– Java EE server: Oracle Weblogic Server 11g & Spring 2.0.5
– OS: AIX 5.3
– Java VM: IBM JRE 1.6.0
– Platform type: Portal & ordering application

Monitoring and troubleshooting tools
– JVM Thread Dump (IBM JVM format)
– Compuware Server Vantage (Weblogic JMX monitoring & alerting)

Problem overview
A major stuck Threads problem was observed & reported from Compuware Server Vantage, affecting 2 of our Weblogic 11g production managed servers and causing application impact and timeout conditions for our end users.

Gathering and validation of facts
As usual, a Java EE problem investigation requires gathering of technical and non-technical facts so we can either derive other facts and/or conclude on the root cause. Before applying a corrective measure, the facts below were verified in order to conclude on the root cause:
· What is the client impact? MEDIUM (only 2 managed servers / JVM affected out of 16)
· Recent change of the affected platform? Yes (new JMS related asynchronous component)
· Any recent traffic increase to the affected platform? No
· How does this problem manifest itself? A sudden increase of Threads was observed leading to rapid Thread depletion
· Did a Weblogic managed server restart resolve the problem?
Yes, but the problem returned after a few hours (unpredictable & intermittent pattern)

– Conclusion #1: The problem is related to an intermittent stuck Threads behaviour affecting only a few Weblogic managed servers at a time
– Conclusion #2: Since the problem is intermittent, a global root cause such as a non-responsive downstream system is not likely

Thread Dump analysis – first pass
The first thing to do when dealing with stuck Thread problems is to generate a JVM Thread Dump. This is a golden rule regardless of your environment specifications & problem context. A JVM Thread Dump snapshot provides you with crucial information about the active Threads and what type of processing / tasks they are performing at that time. Now back to our case study, an IBM JVM Thread Dump ( format) was generated which did reveal the following Java Thread deadlock condition below:

1LKDEADLOCK    Deadlock detected !!!
NULL           ---------------------
NULL
2LKDEADLOCKTHR  Thread '[STUCK] ExecuteThread: '8' for queue: 'weblogic.kernel.Default (self-tuning)'' (0x000000012CC08B00)
3LKDEADLOCKWTR  is waiting for:
4LKDEADLOCKMON    sys_mon_t:0x0000000126171DF8 infl_mon_t: 0x0000000126171E38:
4LKDEADLOCKOBJ    weblogic/jms/frontend/FESession@0x07000000198048C0/0x07000000198048D8:
3LKDEADLOCKOWN  which is owned by:
2LKDEADLOCKTHR  Thread '[STUCK] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)'' (0x000000012E560500)
3LKDEADLOCKWTR  which is waiting for:
4LKDEADLOCKMON    sys_mon_t:0x000000012884CD60 infl_mon_t: 0x000000012884CDA0:
4LKDEADLOCKOBJ    weblogic/jms/frontend/FEConnection@0x0700000019822F08/0x0700000019822F20:
3LKDEADLOCKOWN  which is owned by:
2LKDEADLOCKTHR  Thread '[STUCK] ExecuteThread: '8' for queue: 'weblogic.kernel.Default (self-tuning)'' (0x000000012CC08B00)

This deadlock situation can be translated as per below:
– Weblogic Thread #8 is waiting to acquire an Object monitor lock owned by Weblogic Thread #10
– Weblogic Thread #10 is waiting to acquire an Object monitor lock owned by
Weblogic Thread #8

Conclusion: both Weblogic Threads #8 & #10 are waiting on each other, forever!

Now before going any deeper into this root cause analysis, let me provide you with a high-level overview of Java Thread deadlocks.

Java Thread deadlock overview
Most of you are probably familiar with Java Thread deadlock principles, but did you ever experience a true deadlock problem? From my experience, true Java deadlocks are rare and I have only seen ~5 occurrences over the last 10 years. The reason is that most stuck-Thread related problems are due to Thread hanging conditions (waiting on a remote IO call etc.) but are not involved in a true deadlock condition with other Thread(s). A Java Thread deadlock is a situation, for example, where Thread A is waiting to acquire an Object monitor lock held by Thread B, which is itself waiting to acquire an Object monitor lock held by Thread A. Both these Threads will wait for each other forever. This situation can be visualized as per the below diagram:

Thread deadlock is confirmed… now what can you do? Once the deadlock is confirmed (most JVM Thread Dump implementations will highlight it for you), the next step is to perform a deeper dive analysis by reviewing each Thread involved in the deadlock situation along with their current task & wait condition.
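The Thread A / Thread B cycle described above can be reproduced, and detected programmatically, with a short self-contained sketch. The class, lock, and thread names here are our own, not from the Weblogic case; the runtime detection uses the standard `ThreadMXBean`, which reports the same monitor deadlocks a thread dump would highlight:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) throws InterruptedException {
        // Thread 1 takes lockA then wants lockB; Thread 2 does the opposite.
        Thread t1 = new Thread(() -> acquire(lockA, lockB));
        Thread t2 = new Thread(() -> acquire(lockB, lockA));
        t1.setDaemon(true); // daemon threads let the JVM exit despite the deadlock
        t2.setDaemon(true);
        t1.start();
        t2.start();
        Thread.sleep(500); // give both threads time to block on each other

        // The same information a thread dump highlights is available at runtime.
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long[] deadlocked = mx.findDeadlockedThreads(); // null if no deadlock
        System.out.println("Deadlocked threads: "
                + (deadlocked == null ? 0 : deadlocked.length));
    }

    private static void acquire(Object first, Object second) {
        synchronized (first) {
            try { Thread.sleep(100); } catch (InterruptedException ignored) {}
            synchronized (second) { /* never reached once deadlocked */ }
        }
    }
}
```

Taking a thread dump of this process (e.g. with `kill -3` or `jstack`) shows the same "waiting to lock / locked" cycle seen in the IBM JVM dump above.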
Find below the partial Thread Stack Trace from our problem case for each Thread involved in the deadlock condition:

** Please note that the real application Java package name was renamed for confidentiality purposes **

Weblogic Thread #8
'[STUCK] ExecuteThread: '8' for queue: 'weblogic.kernel.Default (self-tuning)'' J9VMThread:0x000000012CC08B00, j9thread_t:0x00000001299E5100, java/lang/Thread:0x070000001D72EE00, state:B, prio=1
(native thread ID:0x111200F, native priority:0x1, native policy:UNKNOWN)
Java callstack:
at weblogic/jms/frontend/FEConnection.stop( Code))
at weblogic/jms/frontend/FEConnection.invoke( Code))
at weblogic/messaging/dispatcher/Request.wrappedFiniteStateMachine( Code))
at weblogic/messaging/dispatcher/DispatcherImpl.syncRequest( Code))
at weblogic/messaging/dispatcher/DispatcherImpl.dispatchSync( Code))
at weblogic/jms/dispatcher/DispatcherAdapter.dispatchSync( Code))
at weblogic/jms/client/JMSConnection.stop( Code))
at weblogic/jms/client/WLConnectionImpl.stop(
at org/springframework/jms/connection/SingleConnectionFactory.closeConnection(
at org/springframework/jms/connection/SingleConnectionFactory.resetConnection(
at org/app/JMSReceiver.receive()
……………………………………………………………………

Weblogic Thread #10
'[STUCK] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)'' J9VMThread:0x000000012E560500, j9thread_t:0x000000012E35BCE0, java/lang/Thread:0x070000001ECA9200, state:B, prio=1
(native thread ID:0x4FA027, native priority:0x1, native policy:UNKNOWN)
Java callstack:
at weblogic/jms/frontend/FEConnection.getPeerVersion( Code))
at weblogic/jms/frontend/FESession.setUpBackEndSession( Code))
at weblogic/jms/frontend/FESession.consumerCreate( Code))
at weblogic/jms/frontend/FESession.invoke( Code))
at weblogic/messaging/dispatcher/Request.wrappedFiniteStateMachine( Code))
at weblogic/messaging/dispatcher/DispatcherImpl.syncRequest( Code))
at weblogic/messaging/dispatcher/DispatcherImpl.dispatchSync( Code))
at weblogic/jms/dispatcher/DispatcherAdapter.dispatchSync( Code))
at weblogic/jms/client/JMSSession.consumerCreate( Code))
at weblogic/jms/client/JMSSession.setupConsumer( Code))
at weblogic/jms/client/JMSSession.createConsumer( Code))
at weblogic/jms/client/JMSSession.createReceiver( Code))
at weblogic/jms/client/WLSessionImpl.createReceiver( Code))
at org/springframework/jms/core/JmsTemplate102.createConsumer( Code))
at org/springframework/jms/core/JmsTemplate.doReceive( Code))
at org/springframework/jms/core/JmsTemplate$10.doInJms( Code))
at org/springframework/jms/core/JmsTemplate.execute( Code))
at org/springframework/jms/core/JmsTemplate.receiveSelected( Code))
at org/springframework/jms/core/JmsTemplate.receiveSelected( Code))
at org/app/JMSReceiver.receive()
……………………………………………………………

As you can see in the above Thread Stack Traces, the deadlock originated from our application code, which uses the Spring framework API for the JMS consumer implementation (very useful when not using MDBs). The Stack Traces are quite interesting and reveal that both Threads are in a race condition against the same Weblogic JMS consumer session / connection, leading to a deadlock situation:
– Weblogic Thread #8 is attempting to reset and close the current JMS connection
– Weblogic Thread #10 is attempting to use the same JMS Connection / Session in order to create a new JMS consumer
– Thread deadlock is triggered!
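The general way to remove this kind of race is to funnel all state transitions of the shared connection through a single monitor, so a reset and a concurrent consumer-create can never each hold one lock while waiting on the other. The sketch below illustrates the idea only; the class and field names are ours and this is not the actual Spring or Weblogic code:

```java
// Illustrative single-monitor guard for shared connection state.
public class SafeConnectionHolder {
    private final Object connectionMonitor = new Object();
    private boolean started = false;

    // Every transition takes the same single monitor, so two threads can
    // never deadlock on this object's state: one simply waits for the other.
    public void start() {
        synchronized (connectionMonitor) {
            if (!started) {
                started = true;
                // the real connection.start() call would go here
            }
        }
    }

    public void reset() {
        synchronized (connectionMonitor) {
            started = false;
            // the real connection.close() call would go here
        }
    }

    public boolean isStarted() {
        synchronized (connectionMonitor) {
            return started;
        }
    }
}
```

The alternative general fix, when more than one lock is genuinely required, is to define a global lock-acquisition order and have every thread take the locks in that same order.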
Root cause: non-Thread-safe Spring JMS SingleConnectionFactory implementation

A code review and a quick search of the Spring JIRA bug database revealed the following thread-safety defect, with a perfect correlation with the above analysis:

# SingleConnectionFactory’s resetConnection is causing deadlocks with underlying OracleAQ’s JMS connection

A patch for Spring SingleConnectionFactory was released back in 2009 which involved adding a proper synchronized{} block in order to prevent Thread deadlock in the event of a JMS Connection reset operation:

synchronized (connectionMonitor) {
    // if condition added to avoid possible deadlocks when trying to reset the target connection
    if (!started) {; started = true; }
}

Solution
Our team is currently planning to integrate this Spring patch into our production environment shortly. The initial tests performed in our test environment are positive.

Conclusion
I hope this case study has helped you understand a real-life Java Thread deadlock problem and how proper Thread Dump analysis skills can allow you to quickly pinpoint the root cause of stuck Thread related problems at the code level. Please don’t hesitate to post any comment or question.

Reference: Java Thread deadlock – Case Study from our JCG partner Pierre-Hugues Charbonneau at the Java EE Support Patterns & Java Tutorial blog....

JavaFX 2: Create Login Form

In this tutorial I will design a nice-looking Login Form with JavaFX 2 and CSS. It’s a classic login form with username, password, and a login button. In order to follow this tutorial I strongly recommend you check out these tutorials below:

Getting started with JavaFX 2 in Eclipse IDE
JavaFX 2: HBox
JavaFX 2: GridPane
JavaFX 2: Styling Buttons
JavaFX 2: Working with Text and Text Effects

Username: JavaFX2
Password: password

You can enter the information above and click on the Login button. It will tell you with a little message that login was successful; if you enter the wrong information, it will tell you with a little message that login wasn’t successful. The final output screenshot of this tutorial will be like the below image.

JavaFX 2 Login Form

Here is the Java code of our example:

import javafx.application.Application;
import javafx.event.ActionEvent;
import javafx.event.EventHandler;
import javafx.geometry.Insets;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.control.Label;
import javafx.scene.control.PasswordField;
import javafx.scene.control.TextField;
import javafx.scene.effect.DropShadow;
import javafx.scene.effect.Reflection;
import javafx.scene.layout.BorderPane;
import javafx.scene.layout.GridPane;
import javafx.scene.layout.HBox;
import javafx.scene.paint.Color;
import javafx.scene.text.Font;
import javafx.scene.text.FontWeight;
import javafx.scene.text.Text;
import javafx.stage.Stage;

/**
 *
 * @web
 */
public class Login extends Application {

    String user = "JavaFX2";
    String pw = "password";
    String checkUser, checkPw;

    public static void main(String[] args) {
        launch(args);
    }

    @Override
    public void start(Stage primaryStage) {
        primaryStage.setTitle("JavaFX 2 Login");

        BorderPane bp = new BorderPane();
        bp.setPadding(new Insets(10, 50, 50, 50));

        //Adding HBox
        HBox hb = new HBox();
        hb.setPadding(new Insets(20, 20, 20, 30));

        //Adding GridPane
        GridPane gridPane = new GridPane();
        gridPane.setPadding(new Insets(20, 20, 20, 20));
        gridPane.setHgap(5);
        gridPane.setVgap(5);

        //Implementing Nodes for GridPane
        Label lblUserName = new Label("Username");
        final TextField txtUserName = new TextField();
        Label lblPassword = new Label("Password");
        final PasswordField pf = new PasswordField();
        Button btnLogin = new Button("Login");
        final Label lblMessage = new Label();

        //Adding Nodes to GridPane layout
        gridPane.add(lblUserName, 0, 0);
        gridPane.add(txtUserName, 1, 0);
        gridPane.add(lblPassword, 0, 1);
        gridPane.add(pf, 1, 1);
        gridPane.add(btnLogin, 2, 1);
        gridPane.add(lblMessage, 1, 2);

        //Reflection for gridPane
        Reflection r = new Reflection();
        r.setFraction(0.7f);
        gridPane.setEffect(r);

        //DropShadow effect
        DropShadow dropShadow = new DropShadow();
        dropShadow.setOffsetX(5);
        dropShadow.setOffsetY(5);

        //Adding text and DropShadow effect to it
        Text text = new Text("JavaFX 2 Login");
        text.setFont(Font.font("Courier New", FontWeight.BOLD, 28));
        text.setEffect(dropShadow);

        //Adding text to HBox
        hb.getChildren().add(text);

        //Add ID's to Nodes
        bp.setId("bp");
        gridPane.setId("root");
        btnLogin.setId("btnLogin");
        text.setId("text");

        //Action for btnLogin
        btnLogin.setOnAction(new EventHandler<ActionEvent>() {
            public void handle(ActionEvent event) {
                checkUser = txtUserName.getText().toString();
                checkPw = pf.getText().toString();
                if (checkUser.equals(user) && checkPw.equals(pw)) {
                    lblMessage.setText("Congratulations!");
                    lblMessage.setTextFill(Color.GREEN);
                } else {
                    lblMessage.setText("Incorrect user or pw.");
                    lblMessage.setTextFill(Color.RED);
                }
                txtUserName.setText("");
                pf.setText("");
            }
        });

        //Add HBox and GridPane layout to BorderPane Layout
        bp.setTop(hb);
        bp.setCenter(gridPane);

        //Adding BorderPane to the scene and loading CSS
        Scene scene = new Scene(bp);
        scene.getStylesheets().add(getClass().getClassLoader().getResource("login.css").toExternalForm());
        primaryStage.setScene(scene);
        primaryStage.titleProperty().bind(
                scene.widthProperty().asString().
                concat(" : ").
                concat(scene.heightProperty().asString()));
        //primaryStage.setResizable(false);
        primaryStage.show();
    }
}

In order to style this application properly you’ll need to create a login.css file in the /src folder of your project. If you don’t know how to do that, please check out the JavaFX 2: Styling Buttons tutorial. Here is the CSS code of our example:

#root {
    -fx-background-color: linear-gradient(lightgray, gray);
    -fx-border-color: white;
    -fx-border-radius: 20;
    -fx-padding: 10 10 10 10;
    -fx-background-radius: 20;
}

#bp {
    -fx-background-color: linear-gradient(gray, DimGrey);
}

#btnLogin {
    -fx-background-radius: 30, 30, 29, 28;
    -fx-padding: 3px 10px 3px 10px;
    -fx-background-color: linear-gradient(orange, orangered);
}

#text {
    -fx-fill: linear-gradient(orange, orangered);
}

That’s all folks for this tutorial. If you have any comments or problems, feel free to comment. If you like this tutorial, you can check out more JavaFX 2 tutorials on this blog. You might want to take a look at these tutorials below:

JavaFX 2: Styling Buttons with CSS
JavaFX 2: Styling Text with CSS

Reference: JavaFX 2: Create Nice Login Form from our JCG partner Zoran Pavlovic at the Zoran Pavlovic blog blog....

Google Services Authentication in App Engine, Part 1

This post will illustrate how to build a simple Google App Engine (GAE) Java application that authenticates against Google as well as leverages Google’s OAuth for authorizing access to Google’s API services such as Google Docs. In addition, building on some of the examples already provided by Google, it will also illustrate how to persist data using the App Engine Datastore and Objectify.

Project Source Code

The motivation behind this post is that I previously struggled to find any examples that really tied these technologies together. Yet, these technologies really represent the building blocks for many web applications that want to leverage the vast array of Google API services. To keep things simple, the demo will simply allow the user to log in via a Google domain; authorize access to the user’s Google Docs services; and display a list of the user’s Google Docs Word and Spreadsheet documents. Throughout this tutorial I do make several assumptions about the reader’s expertise, such as a pretty deep familiarity with Java.

Overview of the Flow

Before we jump right into the tutorial/demo, let’s take a brief look at the navigation flow. While it may look rather complicated, the main flow can be summarized as:

The user requests access to listFiles.jsp (actually any of the JSP pages can be used).
A check is made to see if the user is logged into Google. If not, they are re-directed to a Google login page – once logged in, they are returned back.
A check is then made to determine whether the user is stored in the local datastore. If not, the user is added along with the user’s Google domain email address.
Next, we check to see if the user has granted OAuth credentials to the Google Docs API service. If not, the OAuth authentication process is initiated.
Once the OAuth credentials are granted, they are stored in the local user table (so we don’t have to ask each time the user attempts to access the services).
Finally, a list of Google Docs Spreadsheet or Word docs is displayed.

This same approach could be used to access other Google services, such as YouTube (you might display a list of the user’s favorite videos, for example).

Environment Setup

For this tutorial, I am using the following:

Eclipse Indigo Service Release 2 along with the Google Plugin for Eclipse (see setup instructions).
Google GData Java SDK Eclipse plugin version 1.47.1 (see setup instructions).
Google App Engine release 1.6.5. Some problems exist with earlier versions, so I’d recommend making sure you are using it. It should install automatically as part of the Google Plugin for Eclipse.
Objectify version 3.1. The required library is already installed in the project’s war/WEB-INF/lib directory.

After you have imported the project into Eclipse, your build path should resemble:

The App Engine settings should resemble:

You will need to set up your own GAE application, along with specifying your own Application ID (see the Google GAE developer docs). The best tutorial I’ve seen that describes how to use OAuth to access Google API services can be found here. One of the more confusing aspects I found was how to acquire the necessary consumer key and consumer secret values that are required when placing the OAuth request. The way I accomplished this was:

Create the GAE application using the GAE Admin Console. You will need to create your own Application ID (just a name for your webapp). Once you have it, you will update your Application ID in the Eclipse App Engine settings panel that is shown above.
Create a new Domain for the application. For example, since my Application ID was specified above as ‘tennis-coachrx’, I configured the target URL path prefix as: You will see how we configure that servlet to receive the credentials shortly. To complete the domain registration, Google will provide you an HTML file that you can upload.
Include that file in the root path under the /src/war directory and upload the application to GAE. This way, when Google runs its check, the file will be present and it will generate the necessary consumer credentials. Here’s a screenshot of what the setup looks like after it is completed:

Once you have the OAuth Consumer Key and OAuth Consumer Secret, you will then replace the following values in the com.zazarie.shared.Constant file:

final static String CONSUMER_KEY = ' ';
final static String CONSUMER_SECRET = ' ';

Whew, that seemed like a lot of work! However, it’s a one-time deal, and you shouldn’t have to fuss with it again.

Code Walkthrough

Now that we have the OAuth configuration/setup out of the way, we can dig into the code. Let’s begin by looking at the structure of the war directory, where your web assets reside:

The listFiles.jsp is the default JSP page that is displayed when you first enter the webapp. Let’s now look at the web.xml file to see how this is configured, along with the servlet filter, which is central to everything.
<?xml version='1.0' encoding='UTF-8'?>
<web-app xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
         xsi:schemaLocation='http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd'
         version='2.5' xmlns='http://java.sun.com/xml/ns/javaee'>

    <!-- Filters -->
    <filter>
        <filter-name>AuthorizationFilter</filter-name>
        <filter-class>com.zazarie.server.servlet.filter.AuthorizationFilter</filter-class>
    </filter>
    <filter-mapping>
        <filter-name>AuthorizationFilter</filter-name>
        <url-pattern>/html/*</url-pattern>
    </filter-mapping>

    <!-- Servlets -->
    <servlet>
        <servlet-name>Step2</servlet-name>
        <servlet-class>com.zazarie.server.servlet.RequestTokenCallbackServlet</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>Step2</servlet-name>
        <url-pattern>/authSub</url-pattern>
    </servlet-mapping>

    <!-- Default page to serve -->
    <welcome-file-list>
        <welcome-file>html/listFiles.jsp</welcome-file>
    </welcome-file-list>
</web-app>

The servlet filter called AuthorizationFilter is invoked whenever a JSP file located in the html directory is requested. The filter, as we’ll see in a moment, is responsible for ensuring that the user is logged into Google, and if so, then ensures that the OAuth credentials have been granted for that user (i.e., it will kick off the OAuth credentialing process, if required). The servlet named Step2 represents the servlet that is invoked by Google when the OAuth credentials have been granted – think of it as a callback. We will look at this in more detail in a bit. Let’s take a more detailed look at the AuthorizationFilter.

AuthorizationFilter Deep Dive

The doFilter method is where the work takes place in a servlet filter.
Here’s the implementation: @Override public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain) throws IOException, ServletException { HttpServletRequest request = (HttpServletRequest) req; HttpServletResponse response = (HttpServletResponse) res; HttpSession session = request.getSession();'Invoking Authorization Filter');'Destination URL is: ' + request.getRequestURI()); if (filterConfig == null) return; get the Google user AppUser appUser = LoginService.login(request, response); if (appUser != null) { session.setAttribute(Constant.AUTH_USER, appUser); } identify if user has an OAuth accessToken - it not, will set in motion oauth procedure if (appUser.getCredentials() == null) { need to save the target URI in session so we can forward to it when oauth is completed session.setAttribute(Constant.TARGET_URI, request.getRequestURI()); OAuthRequestService.requestOAuth(request, response, session); return; } else store DocService in the session so it can be reused session.setAttribute(Constant.DOC_SESSION_ID, LoginService.docServiceFactory(appUser)); chain.doFilter(request, response); } Besides the usual housekeeping stuff, the main logic begins with the line: AppUser appUser = LoginService.login(request, response); As we will see in a moment, the LoginService is responsible for logging the user into Google, and also will create the user in the local BigTable datastore. By storing the user locally, we can then store the user’s OAuth credentials, eliminating the need for the user to have to grant permissions every time they access a restricted/filtered page. 
After LoginService has returned the user (an AppUser object), we then store that user object in the session (NOTE: to enable sessions, you must set sessions-enabled in the appengine-web.xml file):

session.setAttribute(Constant.AUTH_USER, appUser);

We then check whether OAuth credentials are associated with that user:

if (appUser.getCredentials() == null) {
    session.setAttribute(Constant.TARGET_URI, request.getRequestURI());
    OAuthRequestService.requestOAuth(request, response, session);
    return;
} else
    session.setAttribute(Constant.DOC_SESSION_ID, LoginService.docServiceFactory(appUser));

If getCredentials() returns null, the OAuth credentials have not yet been assigned for the user. This means that the OAuth process needs to be kicked off. Since this involves a two-step process of posting the request to Google and then retrieving the results via the callback (the Step2 servlet mentioned above), we need to store the destination URL so that we can later redirect the user to it once the authorization process is completed. This is done by storing the requested URL in the session using the setAttribute method. We then kick off the OAuth process by calling the OAuthRequestService.requestOAuth() method (details discussed below). In the event that getCredentials() returns a non-null value, this indicates that we already have the user’s OAuth credentials from their local AppUser entry in the datastore, and we simply add them to the session so that we can use them later.

LoginService Deep Dive

The LoginService class has one main method called login, followed by a bunch of JPA helper methods for saving or updating the local user in the datastore. We will focus on login(), since that is where most of the business logic resides.
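The "store the credentials once, reuse them on every later request" decision made by the filter can be illustrated with a plain-Java sketch. The class and method names below are our own simplified stand-ins for the article's Objectify-backed AppUser entity, not part of the actual project:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative stand-in for the datastore-backed user/credential lookup.
public class CredentialCache {
    // email -> OAuth token; stands in for the AppUser entity's credentials field
    private final Map<String, String> credentialsByEmail = new ConcurrentHashMap<>();

    /** Returns the stored token, or null if the user must authorize first. */
    public String getCredentials(String email) {
        return credentialsByEmail.get(email);
    }

    /** Called from the OAuth callback once Google grants the token. */
    public void storeCredentials(String email, String token) {
        credentialsByEmail.put(email, token);
    }

    /** Mirrors the filter's decision: reuse the stored token, or start OAuth. */
    public boolean needsOAuth(String email) {
        return getCredentials(email) == null;
    }
}
```

The filter's null check maps onto `needsOAuth()`: the first request per user triggers the authorization round-trip, and every later request short-circuits straight to the Docs API call.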
public static AppUser login(HttpServletRequest req, HttpServletResponse res) {
    LOGGER.setLevel(Constant.LOG_LEVEL);
    LOGGER.info("Initializing LoginService");
    String URI = req.getRequestURI();
    UserService userService = UserServiceFactory.getUserService();
    User user = userService.getCurrentUser();
    if (user != null) {
        LOGGER.info("User id is: '" + userService.getCurrentUser().getUserId() + "'");
        String userEmail = userService.getCurrentUser().getEmail();
        AppUser appUser = (AppUser) req.getSession().getAttribute(Constant.AUTH_USER);
        if (appUser == null) {
            LOGGER.info("appUser not found in session");
            // see whether it is a new user
            appUser = findUser(userEmail);
            if (appUser == null) {
                LOGGER.info("User not found in datastore...creating");
                appUser = addUser(userEmail);
            } else {
                LOGGER.info("User found in datastore...updating");
                appUser = updateUserTimeStamp(appUser);
            }
        } else {
            appUser = updateUserTimeStamp(appUser);
        }
        return appUser;
    } else {
        LOGGER.info("Redirecting user to login page");
        try {
            res.sendRedirect(userService.createLoginURL(URI));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    return null;
}

The first substantive thing we do is use Google’s UserService class to determine whether the user is logged into Google:

    UserService userService = UserServiceFactory.getUserService();
    User user = userService.getCurrentUser();

If the User object returned by the call is null, the user isn’t logged into Google, and they are redirected to a login page using:

    res.sendRedirect(userService.createLoginURL(URI));

If the user is logged in (i.e., the User object is not null), the next thing we do is determine whether that user exists in the local datastore. This is done by looking up the user by their logged-in Google email address with appUser = findUser(userEmail). Since JPA/Objectify isn’t the primary discussion point of this tutorial, I won’t go into how that method works; however, the Objectify web site has some great tutorials/documentation.
If the user doesn’t exist locally, the object is populated with Google’s email address and created using appUser = addUser(userEmail). If the user does exist, we simply update the login timestamp for logging purposes.

OAuthRequestService Deep Dive

As you may recall from earlier, once the user is set up locally, the AuthorizationFilter checks whether the OAuth credentials have been granted by the user. If not, the OAuthRequestService.requestOAuth() method is invoked. It is shown below:

public static void requestOAuth(HttpServletRequest req, HttpServletResponse res, HttpSession session) {
    LOGGER.setLevel(Constant.LOG_LEVEL);
    LOGGER.info("Initializing OAuthRequestService");
    GoogleOAuthParameters oauthParameters = new GoogleOAuthParameters();
    oauthParameters.setOAuthConsumerKey(Constant.CONSUMER_KEY);
    oauthParameters.setOAuthConsumerSecret(Constant.CONSUMER_SECRET);
    // set the scope
    oauthParameters.setScope(Constant.GOOGLE_RESOURCE);
    // set the callback URL
    oauthParameters.setOAuthCallback(Constant.OATH_CALLBACK);
    GoogleOAuthHelper oauthHelper = new GoogleOAuthHelper(new OAuthHmacSha1Signer());
    try {
        // the request is still unauthorized at this point
        oauthHelper.getUnauthorizedRequestToken(oauthParameters);
        // generate the authorization URL
        String approvalPageUrl = oauthHelper.createUserAuthorizationUrl(oauthParameters);
        session.setAttribute(Constant.SESSION_OAUTH_TOKEN, oauthParameters.getOAuthTokenSecret());
        LOGGER.info("Session attributes are: " + session.getAttributeNames().hasMoreElements());
        res.getWriter().print("<a href=\"" + approvalPageUrl + "\">Request token for the Google Documents Scope</a>");
    } catch (OAuthException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}

To simplify working with OAuth, Google provides a set of Java helper classes that we are utilizing.
The first thing we need to do is set up the consumer credentials (acquiring those was discussed earlier):

    GoogleOAuthParameters oauthParameters = new GoogleOAuthParameters();
    oauthParameters.setOAuthConsumerKey(Constant.CONSUMER_KEY);
    oauthParameters.setOAuthConsumerSecret(Constant.CONSUMER_SECRET);

Then, we set the scope of the OAuth request using:

    oauthParameters.setScope(Constant.GOOGLE_RESOURCE);

where Constant.GOOGLE_RESOURCE resolves to the Google Docs scope URL. When you make an OAuth request, you specify the scope of the resources you are attempting to access; in this case, we are trying to access Google Docs (the GData APIs for each service publish the scope URL to use). Next, we establish where we want Google to return the reply:

    oauthParameters.setOAuthCallback(Constant.OATH_CALLBACK);

This value changes depending on whether we are running locally in dev mode or deployed to the Google App Engine. Here’s how the values are defined in the Constant interface:

    // use when running on GAE
    //final static String OATH_CALLBACK = "...";
    // use for local testing
    final static String OATH_CALLBACK = "...";

We then sign the request using Google’s helper:

    GoogleOAuthHelper oauthHelper = new GoogleOAuthHelper(new OAuthHmacSha1Signer());

We then generate the URL that the user will navigate to in order to authorize access to the resource. This is generated dynamically using:

    String approvalPageUrl = oauthHelper.createUserAuthorizationUrl(oauthParameters);

The last step is to provide a link so that the user can navigate to that URL and approve the request. This is done by constructing some simple HTML that is output using res.getWriter().print(). Once the user has granted access, Google calls back to the servlet identified by the URL parameter /authSub, which corresponds to the servlet class RequestTokenCallbackServlet. We will examine this next.
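Since the callback value flips between environments, it can help to see that choice as code. This is a sketch only; the two URLs are hypothetical stand-ins, since the tutorial’s real callback addresses are not reproduced here.

```java
public class CallbackConfig {
    // Hypothetical URLs standing in for the values in the Constant interface
    static final String PROD_CALLBACK = "https://myapp.appspot.com/authSub";
    static final String DEV_CALLBACK  = "http://localhost:8888/authSub";

    // Choose the OAuth callback based on the runtime environment: the same
    // decision the commented-out OATH_CALLBACK constants encode by hand
    public static String callbackFor(boolean runningOnGae) {
        return runningOnGae ? PROD_CALLBACK : DEV_CALLBACK;
    }
}
```

Selecting the value programmatically avoids having to comment and uncomment constants before each deployment.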
RequestTokenCallbackServlet Deep Dive

The servlet uses the Google OAuth helper classes to generate the access token and access token secret that will be required on subsequent calls to the Google Docs API service. Here is the doGet method that receives the callback response from Google:

public void doGet(HttpServletRequest req, HttpServletResponse resp)
        throws ServletException, IOException {
    // create an instance of GoogleOAuthParameters
    GoogleOAuthParameters oauthParameters = new GoogleOAuthParameters();
    oauthParameters.setOAuthConsumerKey(Constant.CONSUMER_KEY);
    oauthParameters.setOAuthConsumerSecret(Constant.CONSUMER_SECRET);
    GoogleOAuthHelper oauthHelper = new GoogleOAuthHelper(new OAuthHmacSha1Signer());
    String oauthTokenSecret = (String) req.getSession().getAttribute(Constant.SESSION_OAUTH_TOKEN);
    AppUser appUser = (AppUser) req.getSession().getAttribute(Constant.AUTH_USER);
    oauthParameters.setOAuthTokenSecret(oauthTokenSecret);
    oauthHelper.getOAuthParametersFromCallback(req.getQueryString(), oauthParameters);
    try {
        String accessToken = oauthHelper.getAccessToken(oauthParameters);
        String accessTokenSecret = oauthParameters.getOAuthTokenSecret();
        appUser = LoginService.getById(appUser.getId());
        appUser = LoginService.updateUserCredentials(appUser,
                new OauthCredentials(accessToken, accessTokenSecret));
        req.getSession().setAttribute(Constant.DOC_SESSION_ID,
                LoginService.docServiceFactory(appUser));
        RequestDispatcher dispatcher = req.getRequestDispatcher(
                (String) req.getSession().getAttribute(Constant.TARGET_URI));
        if (dispatcher != null)
            dispatcher.forward(req, resp);
    } catch (OAuthException e) {
        e.printStackTrace();
    }
}

The GoogleOAuthHelper performs the housekeeping tasks required to populate the two values we are interested in:

    String accessToken = oauthHelper.getAccessToken(oauthParameters);
    String accessTokenSecret = oauthParameters.getOAuthTokenSecret();

Once we have these values, we then requery the user object from the
datastore, and save those values into the AppUser.OauthCredentials subclass:

    appUser = LoginService.getById(appUser.getId());
    appUser = LoginService.updateUserCredentials(appUser,
            new OauthCredentials(accessToken, accessTokenSecret));
    req.getSession().setAttribute(Constant.DOC_SESSION_ID,
            LoginService.docServiceFactory(appUser));

In addition, you’ll see they are also stored in the session so that we have them readily available when the API request to Google Docs is placed. Now that we’ve got everything we need, we simply redirect the user back to the resource they originally requested:

    RequestDispatcher dispatcher = req.getRequestDispatcher(
            (String) req.getSession().getAttribute(Constant.TARGET_URI));
    dispatcher.forward(req, resp);

Now, when they access the JSP page listing their documents, everything should work! Hope you enjoyed the tutorial and demo; I look forward to your comments! Continue to the second part of this tutorial.

Reference: Authenticating for Google Services in Google App Engine from our JCG partner Jeff Davis at Jeff’s SOA Ruminations blog.

Google Services Authentication in App Engine, Part 2

In the first part of this tutorial I described how to use OAuth for access/authentication with Google’s API services. Unfortunately, as I discovered a bit later, the approach I used was OAuth 1.0, which has now been officially deprecated by Google in favor of OAuth 2.0. Obviously, I was a bit bummed to discover this, and promised I would create a new blog entry with instructions on how to use 2.0. The good news is that, with the 2.0 support, Google has added some additional helper classes that make things easier, especially if you are using Google App Engine, which is what I’m using for this tutorial. The Google Developers site now has a pretty good description of how to set up OAuth 2.0. However, it still turned out to be a challenge to configure a real-life example of how it’s done, so I figured I’d document what I’ve learned.

Tutorial Scenario

In the last tutorial, the project I created illustrated how to access a listing of a user’s Google Docs files. In this tutorial, I changed things up a bit and instead use the YouTube API to display a list of a user’s favorite videos. Accessing a user’s favorites does require authentication with OAuth, so this was a good test.

Getting Started

(The Eclipse project for this tutorial can be found here.) The first thing you must do is follow the steps outlined in Google’s official docs on using OAuth 2.0. Since I’m creating a web app, you’ll want to follow the section in those docs titled ‘Web Server Applications’. In addition, the steps I talked about previously for setting up a Google App Engine project are still relevant, so I’m going to jump right into the code and bypass those setup steps. (NOTE: I again elected not to use Maven in order to keep things simple for those who don’t have it installed or aren’t knowledgeable in Maven.)
The application flow is very simple (assuming a first-time user):

1. When the user accesses the webapp (assuming you are running it locally at http://localhost:8888 using the GAE developer emulator), they must first log in to Google using their gmail or Google domain account.
2. Once logged in, the user is redirected to a simple JSP page that has a link to their YouTube favorite videos.
3. When they click the link, a servlet initiates the OAuth process to acquire access to their YouTube account. The first part of this process redirects them to a Google page that asks whether they want to grant the application access.
4. Assuming the user responds affirmatively, a list of 10 favorites is displayed with links. If they click a link, the video loads.

Here’s a depiction of the first three pages of the flow, followed by the last two pages (assuming the user clicks a given link). While this example is specific to YouTube, the same general principles apply to accessing any of the Google-based cloud services, such as Google+, Google Drive, Docs, etc. The key enabler for creating such integrations is obviously OAuth, so let’s look at how that process works.

OAuth 2.0 Process Flow

Using OAuth can be a bit overwhelming for a developer first learning the technology. The main premise is to allow users to selectively identify which ‘private’ resources they want to make accessible to an external application, such as the one we are developing for this tutorial. By using OAuth, the user avoids having to share their login credentials with a third party; instead they simply grant that third party access to some of their information. To achieve this, the user is navigated to the source where their private data resides, in this case YouTube. They can then either allow or reject the access request. If they allow it, the source of the private data (YouTube) returns a single-use authorization code to the third-party application.
Since it’s rather tedious for the user to have to grant access every time it is needed, there is an additional call that can be made to ‘trade in’ the single-use authorization code for a longer-term one. The overall flow for the web application we’re developing in this tutorial can be seen below.

OAuth Flow

The first step is to determine whether the user is already logged into Google using either their gmail or Google Domain account. While not directly tied to the OAuth process, it’s very convenient to let users log in with their Google account as opposed to requiring them to sign up with your web site. That’s the first callout made to Google. Then, once logged in, the application determines whether the user has a local account set up with OAuth permissions granted. If they are logging in for the first time, they won’t. In that case, the OAuth process is initiated. The first step of that process is to specify to the OAuth provider, in this case Google/YouTube, what ‘scope’ of access is being requested. Since Google has a lot of services, they have a lot of scopes. You can determine these most easily using their OAuth 2.0 sandbox. When you kick off the OAuth process, you provide the scope(s) you want access to, along with the OAuth client credentials that Google has issued you (these steps are fairly generic to any provider that supports OAuth). For our purposes, we’re seeking access to the user’s YouTube account, so we use the scope URL that Google publishes for the YouTube service. If the end user grants access to the resource identified by the scope, Google then posts an authorization code back to the application, where it is captured in a servlet. Since the returned code is only a single-use code, it is exchanged for a longer-running access token (and related refresh token). That step is represented above by the activity box titled ‘Access & Refresh Token Requested’.
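The scopes mentioned above are plain URLs, and when an application requests more than one, OAuth expects them as a single space-delimited string. A tiny sketch (the scope values here are illustrative, not ones the tutorial specifies):

```java
public class Scopes {
    // Join several scope URLs into the single space-delimited string
    // that the 'scope' request parameter expects
    public static String join(String... scopes) {
        return String.join(" ", scopes);
    }
}
```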
Once armed with the access token, the application can access the user’s private data by placing an API call along with the token. If everything checks out, the API returns the results. It’s not a terribly complicated process; it just involves a few steps. Let’s look at some of the specific implementation details, beginning with the servlet filter that determines whether the user has already logged into Google and/or granted OAuth access.

AuthorizationFilter

Let’s take a look at the first few lines of the AuthorizationFilter (to see how it’s configured as a filter, see the web.xml file).

public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
        throws IOException, ServletException {
    HttpServletRequest request = (HttpServletRequest) req;
    HttpServletResponse response = (HttpServletResponse) res;
    HttpSession session = request.getSession();
    // if not present, add the credential store to the servlet context
    if (session.getServletContext().getAttribute(Constant.GOOG_CREDENTIAL_STORE) == null) {
        LOGGER.fine("Adding credential store to context " + credentialStore);
        session.getServletContext().setAttribute(Constant.GOOG_CREDENTIAL_STORE, credentialStore);
    }
    // if the Google user isn't in the session, add it
    if (session.getAttribute(Constant.AUTH_USER_ID) == null) {
        LOGGER.fine("Add user to session");
        UserService userService = UserServiceFactory.getUserService();
        User user = userService.getCurrentUser();
        session.setAttribute(Constant.AUTH_USER_ID, user.getUserId());
        session.setAttribute(Constant.AUTH_USER_NICKNAME, user.getNickname());
        // if not running on App Engine production, hard-code my email address for testing
        if (SystemProperty.environment.value() == SystemProperty.Environment.Value.Production) {
            session.setAttribute(Constant.AUTH_USER_EMAIL, user.getEmail());
        } else {
            session.setAttribute(Constant.AUTH_USER_EMAIL, "");
        }
    }

The first few lines simply cast the generic servlet request and response to their corresponding HTTP equivalents; this is necessary since we
want access to the HTTP session. The next step is to determine whether a CredentialStore is present in the servlet context. As we’ll see, this is used to store the user’s credentials, so it’s convenient to have it readily available in subsequent servlets. The guts of the matter begin when we check whether the user is already present in the session using:

    if (session.getAttribute(Constant.AUTH_USER_ID) == null) {

If not, we get their Google login credentials using Google’s UserService class. This is a helper class available to GAE users for fetching the user’s Google userid, email and nickname. Once we get this info from UserService, we store some of the user’s details in the session. At this point, we haven’t done anything with OAuth, but that changes in the next series of code lines:

    try {
        Utils.getActiveCredential(request, credentialStore);
    } catch (NoRefreshTokenException e1) {
        // if this catch block is entered, we need to perform the OAuth process
        LOGGER.info("No user found - authorization URL is: " + e1.getAuthorizationUrl());
        response.sendRedirect(e1.getAuthorizationUrl());
    }

A helper class called Utils is used for most of the OAuth processing. In this case, we’re calling the static method getActiveCredential(). As we will see in a moment, this method throws a NoRefreshTokenException if no OAuth credentials have been previously captured for the user. As a custom exception, it carries the URL value used to redirect the user to Google to seek OAuth approval. Let’s take a look at the getActiveCredential() method in more detail, as that’s where much of the OAuth handling is managed.
public static Credential getActiveCredential(HttpServletRequest request,
        CredentialStore credentialStore) throws NoRefreshTokenException {
    String userId = (String) request.getSession().getAttribute(Constant.AUTH_USER_ID);
    Credential credential = null;
    try {
        if (userId != null) {
            credential = getStoredCredential(userId, credentialStore);
        }
        if ((credential == null || credential.getRefreshToken() == null)
                && request.getParameter("code") != null) {
            credential = exchangeCode(request.getParameter("code"));
            LOGGER.fine("Credential access token is: " + credential.getAccessToken());
            if (credential != null) {
                if (credential.getRefreshToken() != null) {
                    credentialStore.store(userId, credential);
                }
            }
        }
        if (credential == null || credential.getRefreshToken() == null) {
            String email = (String) request.getSession().getAttribute(Constant.AUTH_USER_EMAIL);
            String authorizationUrl = getAuthorizationUrl(email, request);
            throw new NoRefreshTokenException(authorizationUrl);
        }
    } catch (CodeExchangeException e) {
        e.printStackTrace();
    }
    return credential;
}

The first thing we do is fetch the Google userId from the session (the user can’t get this far without it being populated). Next, we attempt to get the user’s OAuth credentials (held in the Google class of the same name, Credential) from the CredentialStore using the Utils static method getStoredCredential(). If no credentials are found for that user, the Utils method getAuthorizationUrl() is invoked. This method, shown below, constructs the URL to which the browser is redirected so the user can be prompted to authorize access to their private data (the URL is served up by Google, since Google asks the user for approval).
private static String getAuthorizationUrl(String emailAddress, HttpServletRequest request) {
    GoogleAuthorizationCodeRequestUrl urlBuilder = null;
    try {
        urlBuilder = new GoogleAuthorizationCodeRequestUrl(
                getClientCredential().getWeb().getClientId(),
                Constant.OATH_CALLBACK, Constant.SCOPES)
                .setAccessType("offline")
                .setApprovalPrompt("force");
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    urlBuilder.set("state", request.getRequestURI());
    if (emailAddress != null) {
        urlBuilder.set("user_id", emailAddress);
    }
    return urlBuilder.build();
}

As you can see, this method uses the Google class GoogleAuthorizationCodeRequestUrl. It constructs an HTTP call using the OAuth client credentials provided by Google when you sign up to use OAuth (those credentials, incidentally, are stored in a file called client_secrets.json). Other parameters include the scope of the OAuth request and the URL that the user will be redirected back to if approval is granted; that is the URL you specified when signing up for Google’s OAuth access. Now, if the user had already granted OAuth access, the getActiveCredential() method would instead grab the credentials from the CredentialStore. Turning back to the URL that receives the results of the OAuth request (in this case, http://localhost:8888/authSub), you may be wondering how Google can post to that internal-only address. Well, it’s the user’s browser that actually posts back the results, so localhost, in this case, resolves just fine. Let’s look at the servlet called OAuth2Callback that is used to process this callback (see web.xml for how the servlet mapping for authSub is done).
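Before examining the callback servlet, it can help to demystify the URL that getAuthorizationUrl() produces. Below is a hand-rolled sketch of an equivalent URL. The parameter names follow the standard OAuth 2.0 web-server flow; treat the exact endpoint as an assumption rather than something the tutorial specifies, since the real helper also adds validation and further parameters.

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class AuthUrlSketch {
    // Builds an authorization URL along the lines of what the Google helper
    // emits: response_type=code plus client id, callback, scope, state, and
    // the flags set via setAccessType("offline")/setApprovalPrompt("force")
    public static String build(String clientId, String redirectUri, String scope,
                               String state) throws UnsupportedEncodingException {
        String enc = "UTF-8";
        return "https://accounts.google.com/o/oauth2/auth"
                + "?response_type=code"
                + "&client_id=" + URLEncoder.encode(clientId, enc)
                + "&redirect_uri=" + URLEncoder.encode(redirectUri, enc)
                + "&scope=" + URLEncoder.encode(scope, enc)
                + "&state=" + URLEncoder.encode(state, enc)
                + "&access_type=offline"
                + "&approval_prompt=force";
    }
}
```

Note how the requested URI rides along in the state parameter; Google echoes it back untouched, which is what lets the callback servlet later redirect the user to where they originally wanted to go.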
public class OAuth2Callback extends HttpServlet {
    private static final long serialVersionUID = 1L;
    private final static Logger LOGGER = Logger.getLogger(OAuth2Callback.class.getName());

    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        StringBuffer fullUrlBuf = request.getRequestURL();
        Credential credential = null;
        if (request.getQueryString() != null) {
            fullUrlBuf.append('?').append(request.getQueryString());
        }
        LOGGER.info("requestURL is: " + fullUrlBuf);
        AuthorizationCodeResponseUrl authResponse =
                new AuthorizationCodeResponseUrl(fullUrlBuf.toString());
        // check for a user-denied error
        if (authResponse.getError() != null) {
            LOGGER.info("User denied access");
        } else {
            LOGGER.info("User granted OAuth access");
            String authCode = authResponse.getCode();
            request.getSession().setAttribute("code", authCode);
            response.sendRedirect(authResponse.getState());
        }
    }
}

The most important takeaway from this class is the line:

    AuthorizationCodeResponseUrl authResponse = new AuthorizationCodeResponseUrl(fullUrlBuf.toString());

The AuthorizationCodeResponseUrl class is provided as a convenience by Google to parse the results of the OAuth request. If the getError() method of that class isn’t null, the user rejected the request. If it is null, indicating the user approved the request, the getCode() method is used to retrieve the one-time authorization code.
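What AuthorizationCodeResponseUrl is doing for us can be illustrated with a minimal query-string parse. This is a sketch only: the real Google class also handles URL decoding and validation.

```java
import java.util.HashMap;
import java.util.Map;

public class CallbackParser {
    // Split a raw query string such as "state=/favs&code=4/abc" into a map
    public static Map<String, String> parse(String query) {
        Map<String, String> params = new HashMap<>();
        for (String pair : query.split("&")) {
            int eq = pair.indexOf('=');
            if (eq > 0) {
                params.put(pair.substring(0, eq), pair.substring(eq + 1));
            }
        }
        return params;
    }

    // The callback is an approval when a 'code' came back and no 'error' did,
    // mirroring the getError()/getCode() checks in the servlet above
    public static boolean approved(Map<String, String> params) {
        return params.containsKey("code") && !params.containsKey("error");
    }
}
```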
This code value is placed into the user’s session, and when Utils.getActiveCredential() is invoked following the redirect to the user’s target URL (via the filter), it exchanges that authorization code for a longer-term access and refresh token using the call:

    credential = exchangeCode((String) request.getSession().getAttribute("code"));

The Utils.exchangeCode() method is shown next:

public static Credential exchangeCode(String authorizationCode) throws CodeExchangeException {
    try {
        GoogleTokenResponse response = new GoogleAuthorizationCodeTokenRequest(
                new NetHttpTransport(), Constant.JSON_FACTORY,
                Utils.getClientCredential().getWeb().getClientId(),
                Utils.getClientCredential().getWeb().getClientSecret(),
                authorizationCode, Constant.OATH_CALLBACK).execute();
        return Utils.buildEmptyCredential().setFromTokenResponse(response);
    } catch (IOException e) {
        e.printStackTrace();
        throw new CodeExchangeException();
    }
}

This method uses a Google class called GoogleAuthorizationCodeTokenRequest to call Google and exchange the one-time OAuth authorization code for the longer-duration access token. Now that we’ve (finally) got the access token needed for the YouTube API, we’re ready to display 10 of the user’s video favorites.

Calling the YouTube API Services

With the access token in hand, we can now proceed to display the user’s list of favorites. In order to do this, a servlet called FavoritesServlet is invoked. It calls the YouTube API, parses the resulting JSON-C format into some local Java classes via Jackson, and then sends the results to the JSP page for processing.
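Stepping back to the exchange for a moment: under the hood, GoogleAuthorizationCodeTokenRequest POSTs a form-encoded body to Google’s token endpoint. A sketch of that body follows; the parameter names come from the OAuth 2.0 web-server flow, and the values shown in the test are illustrative.

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class TokenExchangeSketch {
    // Form-encoded body for trading the one-time authorization code for an
    // access token (and refresh token)
    public static String body(String code, String clientId, String clientSecret,
                              String redirectUri) throws UnsupportedEncodingException {
        String enc = "UTF-8";
        return "code=" + URLEncoder.encode(code, enc)
                + "&client_id=" + URLEncoder.encode(clientId, enc)
                + "&client_secret=" + URLEncoder.encode(clientSecret, enc)
                + "&redirect_uri=" + URLEncoder.encode(redirectUri, enc)
                + "&grant_type=authorization_code";
    }
}
```

The redirect_uri must match the one used in the authorization request, which is why Constant.OATH_CALLBACK shows up again in exchangeCode().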
Here’s the servlet:

public void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    LOGGER.fine("Running FavoritesServlet");
    Credential credential = Utils.getStoredCredential(
            (String) request.getSession().getAttribute(Constant.AUTH_USER_ID),
            (CredentialStore) request.getSession().getServletContext()
                    .getAttribute(Constant.GOOG_CREDENTIAL_STORE));
    VideoFeed feed = null;
    // if the request fails, it's likely because the access token has expired - we'll refresh
    try {
        LOGGER.fine("Using access token: " + credential.getAccessToken());
        feed = YouTube.fetchFavs(credential.getAccessToken());
    } catch (Exception e) {
        LOGGER.fine("Refreshing credentials");
        credential.refreshToken();
        credential = Utils.refreshToken(request, credential);
        GoogleCredential googleCredential = Utils.refreshCredentials(credential);
        LOGGER.fine("Using refreshed access token: " + credential.getAccessToken());
        // retry
        feed = YouTube.fetchFavs(credential.getAccessToken());
    }
    LOGGER.fine("Video feed results are: " + feed);
    request.setAttribute(Constant.VIDEO_FAVS, feed);
    RequestDispatcher dispatcher = getServletContext().getRequestDispatcher("htmllistVids.jsp");
    dispatcher.forward(request, response);
}

Since this post is mainly about the OAuth process, I won’t go into too much detail about how the API call is placed, but the most important line of code is:

    feed = YouTube.fetchFavs(credential.getAccessToken());

where feed is an instance of VideoFeed. As you can see, another helper class called YouTube is used to do the heavy lifting. Just to wrap things up, I’ll show the fetchFavs() method.
public static VideoFeed fetchFavs(String accessToken) throws IOException, HttpResponseException {
    HttpTransport transport = new NetHttpTransport();
    final JsonFactory jsonFactory = new JacksonFactory();
    HttpRequestFactory factory = transport.createRequestFactory(new HttpRequestInitializer() {
        @Override
        public void initialize(HttpRequest request) {
            // set the parser
            JsonCParser parser = new JsonCParser(jsonFactory);
            request.addParser(parser);
            // set up the Google headers
            GoogleHeaders headers = new GoogleHeaders();
            headers.setApplicationName("YouTube Favorites/1.0");
            headers.gdataVersion = "2";
            request.setHeaders(headers);
        }
    });
    // build the YouTube URL
    YouTubeUrl url = new YouTubeUrl(Constant.GOOGLE_YOUTUBE_FEED);
    url.maxResults = 10;
    url.access_token = accessToken;
    // build the HTTP GET request
    HttpRequest request = factory.buildGetRequest(url);
    // execute the request and parse the video feed
    HttpResponse response = request.execute();
    VideoFeed feed = response.parseAs(VideoFeed.class);
    return feed;
}

It uses the Google class HttpRequestFactory to construct an outbound HTTP API call to YouTube. Since we’re using GAE, we’re limited in which classes we can use to place such requests. Notice the line of code:

    url.access_token = accessToken;

That’s where we use the access token acquired through the OAuth process. So, while it took a fair amount of code to get the OAuth pieces working correctly, once they’re in place, you are ready to rock and roll with calling all sorts of Google API services!

Reference: Authenticating for Google Services, Part 2 from our JCG partner Jeff Davis at Jeff’s SOA Ruminations blog.
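The YouTubeUrl fields ultimately become ordinary query parameters on the GData feed URL. A sketch of that translation, with a made-up feed URL standing in for Constant.GOOGLE_YOUTUBE_FEED:

```java
public class FeedUrlSketch {
    // Append the paging and OAuth parameters that YouTubeUrl sets via its
    // maxResults and access_token fields (parameter names per the GData style)
    public static String withParams(String feedUrl, int maxResults, String accessToken) {
        return feedUrl + "?alt=jsonc&max-results=" + maxResults
                + "&access_token=" + accessToken;
    }
}
```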
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.