
How I explained Dependency Injection to My Team

Recently our company started developing a new Java-based web application, and after some evaluation we decided to use Spring. But many of the team members are not aware of Spring and Dependency Injection principles, so I was asked to give a crash course on what Dependency Injection is, plus the basics of Spring. Instead of presenting all the theory about IoC/DI, I thought of explaining it with an example.

Requirement: We will receive some customer address data and we need to validate the address. After some evaluation we thought of using the Google Address Validation Service.

Legacy (bad) approach: Just create an AddressVerificationService class and implement the logic. Assume GoogleAddressVerificationService is a service provided by Google which takes an address as a String and returns longitude/latitude.

class AddressVerificationService {
    public String validateAddress(String address) {
        GoogleAddressVerificationService gavs = new GoogleAddressVerificationService();
        String result = gavs.validateAddress(address);
        return result;
    }
}

Issues with this approach:
1. If you want to change your address verification service provider, you need to change the logic.
2. You can’t unit test with a dummy AddressVerificationService (using mock objects).

For some reason the client asks us to support multiple address verification service providers, and we need to determine which service to use at runtime. To accommodate this you might think of changing the above class as below:

class AddressVerificationService {
    // This method validates the given address and returns longitude/latitude details.
    public String validateAddress(String address) {
        String result = null;
        int serviceCode = 2; // read this code value from a config file
        if (serviceCode == 1) {
            GoogleAddressVerificationService googleAVS = new GoogleAddressVerificationService();
            result = googleAVS.validateAddress(address);
        } else if (serviceCode == 2) {
            YahooAddressVerificationService yahooAVS = new YahooAddressVerificationService();
            result = yahooAVS.validateAddress(address);
        }
        return result;
    }
}

Issues with this approach:
1. Whenever you need to support a new service provider, you need to add or change logic in the if-else-if chain.
2. You still can’t unit test with a dummy AddressVerificationService (using mock objects).

IoC/DI approach: In the above approaches, AddressVerificationService takes control of creating its dependencies. So whenever there is a change in its dependencies, AddressVerificationService must change as well. Now let us rewrite AddressVerificationService using the IoC/DI pattern.

class AddressVerificationService {
    private AddressVerificationServiceProvider serviceProvider;

    public AddressVerificationService(AddressVerificationServiceProvider serviceProvider) {
        this.serviceProvider = serviceProvider;
    }

    public String validateAddress(String address) {
        return this.serviceProvider.validateAddress(address);
    }
}

interface AddressVerificationServiceProvider {
    public String validateAddress(String address);
}

Here we are injecting the AddressVerificationService dependency, AddressVerificationServiceProvider. Now let us implement AddressVerificationServiceProvider with multiple provider services.

class YahooAVS implements AddressVerificationServiceProvider {
    @Override
    public String validateAddress(String address) {
        System.out.println("Verifying address using YAHOO AddressVerificationService");
        return yahooAVSAPI.validate(address);
    }
}

class GoogleAVS implements AddressVerificationServiceProvider {
    @Override
    public String validateAddress(String address) {
        System.out.println("Verifying address using Google AddressVerificationService");
        return googleAVSAPI.validate(address);
    }
}

Now the client can choose which service provider to use, as follows:

AddressVerificationService verificationService = null;
AddressVerificationServiceProvider provider = null;
provider = new YahooAVS();   // to use the Yahoo AVS
provider = new GoogleAVS();  // to use the Google AVS
verificationService = new AddressVerificationService(provider);
String lnl = verificationService.validateAddress("HitechCity, Hyderabad");
System.out.println(lnl);

For unit testing we can implement a mock AddressVerificationServiceProvider:

class MockAVS implements AddressVerificationServiceProvider {
    @Override
    public String validateAddress(String address) {
        System.out.println("Verifying address using MOCK AddressVerificationService");
        return "<response><longitude>123</longitude><latitude>4567</latitude></response>";
    }
}

AddressVerificationServiceProvider provider = null;
provider = new MockAVS(); // to use the mock AVS
AddressVerificationService verificationService = new AddressVerificationService(provider);
String lnl = verificationService.validateAddress("Somajiguda, Hyderabad");
System.out.println(lnl);

With this approach we eliminated the issues of the non-IoC/DI based approaches above:
1. We can provide support for as many providers as we wish: just implement AddressVerificationServiceProvider and inject it.
2. We can unit test using dummy data via a mock implementation.

So by following the Dependency Injection principle we can create interface-based, loosely coupled and easily testable services.
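The article mentions Spring but stops short of showing the Spring side of the wiring. As a hedged sketch (the bean definitions below are an illustration, not the project's actual configuration; bean ids are invented), the choice of provider could move out of the client code entirely and into Spring XML, using the constructor injection the class already supports:

```xml
<!-- Illustrative Spring wiring: swap the ref to switch providers -->
<bean id="avsProvider" class="YahooAVS"/>

<bean id="addressVerificationService" class="AddressVerificationService">
    <!-- constructor injection, matching the constructor shown in the example -->
    <constructor-arg ref="avsProvider"/>
</bean>
```

Switching to GoogleAVS, or to MockAVS in a test context, then becomes a configuration change rather than a code change, which is exactly the point of the pattern.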
Reference: How I explained Dependency Injection to My Team from our JCG partner Siva Reddy at the My Experiments on Technology blog....

Proof of Concept: Play! Framework

We are starting a new project and we have to choose the web framework. Our default choice is Grails, because the team already has experience with it, but I decided to give Play! and Scala a chance. Play! has a lot of cool things for which it received many pluses in my evaluation, but in the end we decided to stick with Grails. It’s not that Grails is perfect and meets all the requirements, but Play! is not sufficiently better to make us switch. Anyway, here’s a list of areas where Play! failed my evaluation. Please correct me if I’ve got something wrong:

- template engine – UI developers were furious with the template engine used in the previous project, FreeMarker, because it wasn’t null-safe: it blew up each time a chain of invocations hit a null. Play templates use Scala, and so they are not null-safe either. Scala has a different approach to nulls, Option, but third-party libraries and our core code will be in Java, and we’d have to introduce some null-to-Option conversion, and it will get ugly. This question shows a way to handle the case, but the comments make me hesitant to use it. That’s only part of the story: with all my respect and awe for static typing, the UI layer must use a simple scripting language. EL/JSTL is a good example; it doesn’t explode if it doesn’t find some value.
- static assets – this is hard, and I couldn’t find anything about using Play! with a CDN or how to merge multiple assets into one file. Is there an easy way to do that?
- IDE support – the only way to edit the templates is through the Scala editor, but it doesn’t have HTML support. This is not a deal-breaker, but tooling around the framework is a good thing to have.
- community – there is a good community around Play!, but I viewed it in comparison to Grails. Play! is a younger framework, and it has 2.5k questions on Stack Overflow, while Grails has 7.5k.
- module fragmentation – some of the important modules that I found were only for 1.x, without direct replacements in 2.0.

Other factors:

- I won’t be working with it – UI developers will. Although I might be fine with all the type-safety and peculiar Scala concepts, UI developers will probably not be.
- Scala is ugly – now bash me for that. Yes, I’m not a Scala guy, but this being a highly upvoted answer kind of drove me off. It looks like a low-level programming language, and, relevant to the previous point, it definitely doesn’t look OK to our UI developers.
- change of programming model – I mentioned Option vs null, but there are tons of other things. This is not a problem of Scala, of course; it is even what makes it the cool thing that has generated all the hype. But it’s a problem that too many people will have to switch their perspective at the same time.
- we have been using Spring and Spring MVC a lot, and Play’s integration with Spring isn’t as smooth as that of Grails (which is built on top of Spring MVC): http://zeroturnaround.com/blog/play-framework-unfeatures-that-irk-my-inner-geek/

As you can see, many of the problems are not universal – they are relevant to our experience and expectations. You may not need to use a CDN, and your UI developers may be Scala gurus instead of Groovy developers. And as I said in the beginning, Play! definitely looks good and has a lot of cool things that I omitted here (the list would be long).

Reference: Proof of Concept: Play! Framework from our JCG partner Bozhidar Bozhanov at the Bozho’s tech blog....

Software Architects Need Not Apply

I saw an online job posting several years ago that listed a set of desired software development and programming skills and concluded with the statement, “Architects Need Not Apply.” Joe Winchester has written that Those Who Can, Code; Those Who Can’t, Architect (beware an extremely obnoxious Flash-based popup) and has stated that part of his proposed Hippocratic Oath for Programmers would be to “swear that my desire to enter the computing profession is not to become an architect.” Andriy Solovey has written the post Do We Need Software Architects? 10 Reasons Why Not and Sergey Mikhanov has proclaimed Why I don’t believe in software architects. More recent posts have talked of Frustration with the Role and Purpose of Architects on Software Projects and The frustrated architect. In this post, I look at some of the reasons software architects are often held in low esteem in the software development community.

I have been (and am) a software architect at times and a software developer at times. Often, I must move rapidly between the two roles. This has allowed me to see both sides of the issue, and I believe that the best software architects are those who do architecture work, design work, and lower-level implementation coding and testing.

In Chapter 5 (“The Second-System Effect”) of The Mythical Man-Month, Frederick P. Brooks, Jr., wrote of the qualities and characteristics of a successful architect.
These are listed next:

- An architect “suggests” (“not dictates”) implementation, because the programmer/coder/builder has the “inventive and creative responsibility.”
- An architect should have an idea of how to implement his or her architecture, but should be “prepared to accept any other way that meets the objectives as well.”
- An architect should be “ready to forego credit for suggested improvements.”
- An architect should “listen to the builder’s suggestions for architecture improvements.”
- An architect should strive for work to be “spare and clean,” avoiding “functional ornamentation” and “extrapolation of functions that are obviated by changes in assumptions and purposes.”

Although the first edition of The Mythical Man-Month was published more than 35 years ago, in 1975, violations of Brooks’s suggestions for being a successful architect remain, in my opinion, the primary reason why software architecture as a discipline has earned some disrespect in the software development community.

One of the problems developers often have with software architects is the feeling that the software architect is micromanaging their technical decisions. As Brooks suggests, successful architects need to listen to the developers’ alternative suggestions and recommendations for improvements. Indeed, in some cases, the open-minded architect might even be willing to go with a significant architectural change if the benefits outweigh the costs. In my opinion, good architects (like good software developers) should be willing to learn, and even expect to learn, from others (including developers).

A common complaint among software developers regarding architects is that architects are so high-level that they miss important details or ignore important concerns with their idealistic architectures. I have found that I’m a better architect when I have recently worked with low-level software implementation.
The farther and longer removed I am from design and implementation, the less successful I can be in helping architect the best solutions. Software developers are more confident in the architect’s vision when they know that the architect is capable of implementing the architecture himself or herself if needed. An architect needs to be working among the masses and not lounging in the ivory tower. Indeed, it would be nice if the title “software architect” were not so frequently seen as a euphemism for “can no longer code.”

The longer I work in the software development industry, the more convinced I am that “spare and clean” should be the hallmarks of all good designs and architectures. Modern software principles seem to support this. Concepts like Don’t Repeat Yourself (DRY) and You Ain’t Gonna Need It (YAGNI) have become popular for good reason.

Some software architects have an inflated opinion of their own value due to their title or other recognition. For these types, it is very difficult to follow Brooks’s recommendation to “forego credit” for architecture and implementation improvements. Software developers are much more likely to embrace the architect who shares credit as appropriate and does not take credit for the developers’ ideas and work.

I think there is a place for software architecture, but a portion of our fellow software architects have harmed the reputation of the discipline. Following Brooks’s suggestions can begin to improve the reputation of software architects and their discipline but, more importantly, can lead to better and more efficient software solutions.

Reference: Software Architects Need Not Apply from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

Java Thread deadlock – Case Study

This article will describe the complete root cause analysis of a recent Java deadlock problem observed on a Weblogic 11g production system running on the IBM JVM 1.6. This case study will also demonstrate the importance of mastering Thread Dump analysis skills, including for the IBM JVM Thread Dump format.

Environment specifications
– Java EE server: Oracle Weblogic Server 11g & Spring 2.0.5
– OS: AIX 5.3
– Java VM: IBM JRE 1.6.0
– Platform type: Portal & ordering application

Monitoring and troubleshooting tools
– JVM Thread Dump (IBM JVM format)
– Compuware Server Vantage (Weblogic JMX monitoring & alerting)

Problem overview
A major stuck-Threads problem was observed and reported by Compuware Server Vantage, affecting 2 of our Weblogic 11g production managed servers and causing application impact and timeout conditions for our end users.

Gathering and validation of facts
As usual, a Java EE problem investigation requires gathering technical and non-technical facts so we can either derive other facts and/or conclude on the root cause. Before applying a corrective measure, the facts below were verified in order to conclude on the root cause:

· What is the client impact? MEDIUM (only 2 managed servers / JVMs affected out of 16)
· Recent change of the affected platform? Yes (new JMS-related asynchronous component)
· Any recent traffic increase to the affected platform? No
· How does this problem manifest itself? A sudden increase of Threads was observed, leading to rapid Thread depletion
· Did a Weblogic managed server restart resolve the problem? Yes, but the problem returns after a few hours (unpredictable & intermittent pattern)

– Conclusion #1: The problem is related to an intermittent stuck-Threads behaviour affecting only a few Weblogic managed servers at a time
– Conclusion #2: Since the problem is intermittent, a global root cause such as a non-responsive downstream system is not likely

Thread Dump analysis – first pass
The first thing to do when dealing with stuck Thread problems is to generate a JVM Thread Dump. This is a golden rule regardless of your environment specifications & problem context. A JVM Thread Dump snapshot provides you with crucial information about the active Threads and what type of processing / tasks they are performing at that time. Now back to our case study: an IBM JVM Thread Dump (javacore.xyz format) was generated, and it revealed the following Java Thread deadlock condition:

1LKDEADLOCK    Deadlock detected !!!
NULL           ---------------------
NULL
2LKDEADLOCKTHR  Thread '[STUCK] ExecuteThread: '8' for queue: 'weblogic.kernel.Default (self-tuning)'' (0x000000012CC08B00)
3LKDEADLOCKWTR  is waiting for:
4LKDEADLOCKMON    sys_mon_t:0x0000000126171DF8 infl_mon_t: 0x0000000126171E38:
4LKDEADLOCKOBJ    weblogic/jms/frontend/FESession@0x07000000198048C0/0x07000000198048D8:
3LKDEADLOCKOWN  which is owned by:
2LKDEADLOCKTHR  Thread '[STUCK] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)'' (0x000000012E560500)
3LKDEADLOCKWTR  which is waiting for:
4LKDEADLOCKMON    sys_mon_t:0x000000012884CD60 infl_mon_t: 0x000000012884CDA0:
4LKDEADLOCKOBJ    weblogic/jms/frontend/FEConnection@0x0700000019822F08/0x0700000019822F20:
3LKDEADLOCKOWN  which is owned by:
2LKDEADLOCKTHR  Thread '[STUCK] ExecuteThread: '8' for queue: 'weblogic.kernel.Default (self-tuning)'' (0x000000012CC08B00)

This deadlock situation can be translated as per below:
– Weblogic Thread #8 is waiting to acquire an Object monitor lock owned by Weblogic Thread #10
– Weblogic Thread #10 is waiting to acquire an Object monitor lock owned by Weblogic Thread #8

Conclusion: both Weblogic Threads #8 & #10 are waiting on each other, forever!

Now before going any deeper in this root cause analysis, let me provide you with a high-level overview of Java Thread deadlocks.

Java Thread deadlock overview
Most of you are probably familiar with Java Thread deadlock principles, but did you ever experience a true deadlock problem? From my experience, true Java deadlocks are rare, and I have only seen ~5 occurrences over the last 10 years. The reason is that most stuck-Thread related problems are due to Thread hanging conditions (waiting on a remote IO call etc.) but are not involved in a true deadlock condition with other Thread(s). A Java Thread deadlock is a situation, for example, where Thread A is waiting to acquire an Object monitor lock held by Thread B, which is itself waiting to acquire an Object monitor lock held by Thread A. Both these Threads will wait for each other forever. This situation can be visualized as per the diagram below.

Thread deadlock is confirmed… now what can you do? Once the deadlock is confirmed (most JVM Thread Dump implementations will highlight it for you), the next step is to perform a deeper-dive analysis by reviewing each Thread involved in the deadlock situation along with their current task & wait condition.
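The circular wait described above is easy to reproduce outside Weblogic. The following is a minimal, self-contained sketch (not the article's code; class, thread, and lock names are illustrative) in which two threads acquire the same two monitors in opposite order, and the JVM's own ThreadMXBean is then used to detect the resulting deadlock, much as the Thread Dump did:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import java.util.concurrent.CountDownLatch;

public class DeadlockDemo {

    private static final Object lockA = new Object();
    private static final Object lockB = new Object();
    // Ensures both threads hold their first lock before either tries its second,
    // so the deadlock forms deterministically.
    private static final CountDownLatch bothHoldFirstLock = new CountDownLatch(2);

    static void spawn(final String name, final Object first, final Object second) {
        Thread t = new Thread(new Runnable() {
            public void run() {
                synchronized (first) {
                    bothHoldFirstLock.countDown();
                    try { bothHoldFirstLock.await(); } catch (InterruptedException ignored) { }
                    // Both threads now hold their first lock; each blocks forever here.
                    synchronized (second) {
                        System.out.println(name + " acquired both locks (never happens)");
                    }
                }
            }
        }, name);
        t.setDaemon(true); // let the JVM exit even though the threads never finish
        t.start();
    }

    // Returns true once the JVM reports a monitor deadlock, as a Thread Dump would.
    public static boolean detect() throws InterruptedException {
        spawn("Thread-8", lockA, lockB);   // locks A, then B
        spawn("Thread-10", lockB, lockA);  // locks B, then A: opposite order
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        for (int i = 0; i < 100; i++) {    // poll for up to ~5 seconds
            if (mx.findDeadlockedThreads() != null) {
                return true;
            }
            Thread.sleep(50);
        }
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Deadlock detected: " + detect());
    }
}
```

findDeadlockedThreads() returns the IDs of the deadlocked threads; getThreadInfo(ids, true, true) can then report which monitor each thread owns and which it waits for, which is essentially the information in the 1LKDEADLOCK section above.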
Find below the partial Thread Stack Trace from our problem case for each Thread involved in the deadlock condition:

** Please note that the real application Java package name was renamed for confidentiality purposes **

Weblogic Thread #8
'[STUCK] ExecuteThread: '8' for queue: 'weblogic.kernel.Default (self-tuning)'' J9VMThread:0x000000012CC08B00, j9thread_t:0x00000001299E5100, java/lang/Thread:0x070000001D72EE00, state:B, prio=1
(native thread ID:0x111200F, native priority:0x1, native policy:UNKNOWN)
Java callstack:
at weblogic/jms/frontend/FEConnection.stop(FEConnection.java:671(Compiled Code))
at weblogic/jms/frontend/FEConnection.invoke(FEConnection.java:1685(Compiled Code))
at weblogic/messaging/dispatcher/Request.wrappedFiniteStateMachine(Request.java:961(Compiled Code))
at weblogic/messaging/dispatcher/DispatcherImpl.syncRequest(DispatcherImpl.java:184(Compiled Code))
at weblogic/messaging/dispatcher/DispatcherImpl.dispatchSync(DispatcherImpl.java:212(Compiled Code))
at weblogic/jms/dispatcher/DispatcherAdapter.dispatchSync(DispatcherAdapter.java:43(Compiled Code))
at weblogic/jms/client/JMSConnection.stop(JMSConnection.java:863(Compiled Code))
at weblogic/jms/client/WLConnectionImpl.stop(WLConnectionImpl.java:843)
at org/springframework/jms/connection/SingleConnectionFactory.closeConnection(SingleConnectionFactory.java:342)
at org/springframework/jms/connection/SingleConnectionFactory.resetConnection(SingleConnectionFactory.java:296)
at org/app/JMSReceiver.receive()
……………………………………………………………………

Weblogic Thread #10
'[STUCK] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)'' J9VMThread:0x000000012E560500, j9thread_t:0x000000012E35BCE0, java/lang/Thread:0x070000001ECA9200, state:B, prio=1
(native thread ID:0x4FA027, native priority:0x1, native policy:UNKNOWN)
Java callstack:
at weblogic/jms/frontend/FEConnection.getPeerVersion(FEConnection.java:1381(Compiled Code))
at weblogic/jms/frontend/FESession.setUpBackEndSession(FESession.java:755(Compiled Code))
at weblogic/jms/frontend/FESession.consumerCreate(FESession.java:1025(Compiled Code))
at weblogic/jms/frontend/FESession.invoke(FESession.java:2995(Compiled Code))
at weblogic/messaging/dispatcher/Request.wrappedFiniteStateMachine(Request.java:961(Compiled Code))
at weblogic/messaging/dispatcher/DispatcherImpl.syncRequest(DispatcherImpl.java:184(Compiled Code))
at weblogic/messaging/dispatcher/DispatcherImpl.dispatchSync(DispatcherImpl.java:212(Compiled Code))
at weblogic/jms/dispatcher/DispatcherAdapter.dispatchSync(DispatcherAdapter.java:43(Compiled Code))
at weblogic/jms/client/JMSSession.consumerCreate(JMSSession.java:2982(Compiled Code))
at weblogic/jms/client/JMSSession.setupConsumer(JMSSession.java:2749(Compiled Code))
at weblogic/jms/client/JMSSession.createConsumer(JMSSession.java:2691(Compiled Code))
at weblogic/jms/client/JMSSession.createReceiver(JMSSession.java:2596(Compiled Code))
at weblogic/jms/client/WLSessionImpl.createReceiver(WLSessionImpl.java:991(Compiled Code))
at org/springframework/jms/core/JmsTemplate102.createConsumer(JmsTemplate102.java:204(Compiled Code))
at org/springframework/jms/core/JmsTemplate.doReceive(JmsTemplate.java:676(Compiled Code))
at org/springframework/jms/core/JmsTemplate$10.doInJms(JmsTemplate.java:652(Compiled Code))
at org/springframework/jms/core/JmsTemplate.execute(JmsTemplate.java:412(Compiled Code))
at org/springframework/jms/core/JmsTemplate.receiveSelected(JmsTemplate.java:650(Compiled Code))
at org/springframework/jms/core/JmsTemplate.receiveSelected(JmsTemplate.java:641(Compiled Code))
at org/app/JMSReceiver.receive()
……………………………………………………………

As you can see in the above Thread Stack Traces, this deadlock originated from our application code, which uses the Spring framework API for the JMS consumer implementation (very useful when not using MDBs).

The Stack Traces are quite interesting and reveal that both Threads are in a race condition against the same Weblogic JMS consumer session / connection, leading to a deadlock situation:
– Weblogic Thread #8 is attempting to reset and close the current JMS connection
– Weblogic Thread #10 is attempting to use the same JMS Connection / Session in order to create a new JMS consumer
– Thread deadlock is triggered!

Root cause: non-thread-safe Spring JMS SingleConnectionFactory implementation
A code review and a quick search of the Spring JIRA bug database revealed the following thread-safety defect, with a perfect correlation to the above analysis:

# SingleConnectionFactory’s resetConnection is causing deadlocks with underlying OracleAQ’s JMS connection
https://jira.springsource.org/browse/SPR-5987

A patch for Spring SingleConnectionFactory was released back in 2009 which involved adding a proper synchronized{} block in order to prevent Thread deadlock in the event of a JMS Connection reset operation:

synchronized (connectionMonitor) {
    // if condition added to avoid possible deadlocks when trying to reset the target connection
    if (!started) {
        this.target.start();
        started = true;
    }
}

Solution
Our team is currently planning to integrate this Spring patch into our production environment shortly. The initial tests performed in our test environment are positive.

Conclusion
I hope this case study has helped you understand a real-life Java Thread deadlock problem and how proper Thread Dump analysis skills can allow you to quickly pinpoint the root cause of stuck-Thread related problems at the code level. Please don’t hesitate to post any comment or question.

Reference: Java Thread deadlock – Case Study from our JCG partner Pierre-Hugues Charbonneau at the Java EE Support Patterns & Java Tutorial blog....

JavaFX 2: Create Login Form

In this tutorial I will design a nice-looking login form with JavaFX 2 and CSS. It’s a classic login form with username, password, and a login button. In order to follow this tutorial I strongly recommend you check out these tutorials below:

Getting started with JavaFX 2 in Eclipse IDE
JavaFX 2: HBox
JavaFX 2: GridPane
JavaFX 2: Styling Buttons
JavaFX 2: Working with Text and Text Effects

Username: JavaFX2
Password: password

You can enter the information above and click on the Login button. A little message will tell you that the login was successful; if you enter the wrong information, it will tell you that the login wasn’t successful. The final output of this tutorial will look like the image below.

JavaFX 2 Login Form

Here is the Java code of our example:

import javafx.application.Application;
import javafx.event.ActionEvent;
import javafx.event.EventHandler;
import javafx.geometry.Insets;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.control.Label;
import javafx.scene.control.PasswordField;
import javafx.scene.control.TextField;
import javafx.scene.effect.DropShadow;
import javafx.scene.effect.Reflection;
import javafx.scene.layout.BorderPane;
import javafx.scene.layout.GridPane;
import javafx.scene.layout.HBox;
import javafx.scene.paint.Color;
import javafx.scene.text.Font;
import javafx.scene.text.FontWeight;
import javafx.scene.text.Text;
import javafx.stage.Stage;

/**
 * @web http://zoranpavlovic.blogspot.com/
 */
public class Login extends Application {

    String user = "JavaFX2";
    String pw = "password";
    String checkUser, checkPw;

    public static void main(String[] args) {
        launch(args);
    }

    @Override
    public void start(Stage primaryStage) {
        primaryStage.setTitle("JavaFX 2 Login");

        BorderPane bp = new BorderPane();
        bp.setPadding(new Insets(10, 50, 50, 50));

        // Adding HBox
        HBox hb = new HBox();
        hb.setPadding(new Insets(20, 20, 20, 30));

        // Adding GridPane
        GridPane gridPane = new GridPane();
        gridPane.setPadding(new Insets(20, 20, 20, 20));
        gridPane.setHgap(5);
        gridPane.setVgap(5);

        // Implementing Nodes for GridPane
        Label lblUserName = new Label("Username");
        final TextField txtUserName = new TextField();
        Label lblPassword = new Label("Password");
        final PasswordField pf = new PasswordField();
        Button btnLogin = new Button("Login");
        final Label lblMessage = new Label();

        // Adding Nodes to GridPane layout
        gridPane.add(lblUserName, 0, 0);
        gridPane.add(txtUserName, 1, 0);
        gridPane.add(lblPassword, 0, 1);
        gridPane.add(pf, 1, 1);
        gridPane.add(btnLogin, 2, 1);
        gridPane.add(lblMessage, 1, 2);

        // Reflection for gridPane
        Reflection r = new Reflection();
        r.setFraction(0.7f);
        gridPane.setEffect(r);

        // DropShadow effect
        DropShadow dropShadow = new DropShadow();
        dropShadow.setOffsetX(5);
        dropShadow.setOffsetY(5);

        // Adding text and DropShadow effect to it
        Text text = new Text("JavaFX 2 Login");
        text.setFont(Font.font("Courier New", FontWeight.BOLD, 28));
        text.setEffect(dropShadow);

        // Adding text to HBox
        hb.getChildren().add(text);

        // Add IDs to Nodes
        bp.setId("bp");
        gridPane.setId("root");
        btnLogin.setId("btnLogin");
        text.setId("text");

        // Action for btnLogin
        btnLogin.setOnAction(new EventHandler<ActionEvent>() {
            public void handle(ActionEvent event) {
                checkUser = txtUserName.getText().toString();
                checkPw = pf.getText().toString();
                if (checkUser.equals(user) && checkPw.equals(pw)) {
                    lblMessage.setText("Congratulations!");
                    lblMessage.setTextFill(Color.GREEN);
                } else {
                    lblMessage.setText("Incorrect user or pw.");
                    lblMessage.setTextFill(Color.RED);
                }
                txtUserName.setText("");
                pf.setText("");
            }
        });

        // Add HBox and GridPane layouts to BorderPane layout
        bp.setTop(hb);
        bp.setCenter(gridPane);

        // Adding BorderPane to the scene and loading CSS
        Scene scene = new Scene(bp);
        scene.getStylesheets().add(getClass().getClassLoader().getResource("login.css").toExternalForm());
        primaryStage.setScene(scene);
        primaryStage.titleProperty().bind(
                scene.widthProperty().asString()
                     .concat(" : ")
                     .concat(scene.heightProperty().asString()));
        // primaryStage.setResizable(false);
        primaryStage.show();
    }
}

In order to style this application properly you’ll need to create a login.css file in the /src folder of your project. If you don’t know how to do that, please check out the JavaFX 2: Styling Buttons tutorial. Here is the CSS code of our example:

#root {
    -fx-background-color: linear-gradient(lightgray, gray);
    -fx-border-color: white;
    -fx-border-radius: 20;
    -fx-padding: 10 10 10 10;
    -fx-background-radius: 20;
}

#bp {
    -fx-background-color: linear-gradient(gray, DimGrey);
}

#btnLogin {
    -fx-background-radius: 30, 30, 29, 28;
    -fx-padding: 3px 10px 3px 10px;
    -fx-background-color: linear-gradient(orange, orangered);
}

#text {
    -fx-fill: linear-gradient(orange, orangered);
}

That’s all folks for this tutorial. If you have any comments or problems, feel free to comment. If you liked this tutorial, you can check out more JavaFX 2 tutorials on this blog. You might want to take a look at these tutorials below:

JavaFX 2: Styling Buttons with CSS
JavaFX 2: Styling Text with CSS

Reference: JavaFX 2: Create Nice Login Form from our JCG partner Zoran Pavlovic at the Zoran Pavlovic blog....

Google Services Authentication in App Engine, Part 1

This post will illustrate how to build a simple Google App Engine (GAE) Java application that authenticates against Google as well as leverages Google’s OAuth for authorizing access to Google’s API services such as Google Docs. In addition, building on some of the examples already provided by Google, it will also illustrate how to persist data using the App Engine Datastore and Objectify. Project Source Code The motivation behind this post is that I struggled to previously find any examples that really tied these technologies together. Yet, these technologies really represent the building-blocks for many web applications that want to leverage the vast array of Google API services. To keep things simple, the demo will simply allow the user to login via a Google domain; authorize access to the user’s Google Docs services; and display a list of the user’s Google Docs Word and Spreadsheet documents. Throughout this tutorial I do make several assumptions about the reader’s expertise, such as a pretty deep familiarity with Java.Overview of the Flow Before we jump right into the tutorial/demo, let’s take a brief look at the navigation flow.While it may look rather complicated, the main flow can be summarized as:User requests access to listFiles.jsp (actually any of the JSP pages can be used). A check is make to see if the user is logged into Google. If not, they are re-directed to a Google login page — once logged in, they are returned back. A check is then made to determine whether the user is stored in the local datastore. If not, the user is added along with the user’s Google domain email address. Next, we check to see if the user has granted OAuth credentials to the Google Docs API service. If not, the OAuth authentication process is initiated. Once the OAuth credentials are granted, they are stored in the local user table (so we don’t have to ask each time the user attempts to access the services). 
Finally, a list of Google Docs Spreadsheet or Word docs is displayed.This same approach could be used to access other Google services, such as YouTube (you might display a list of the user’s favorite videos, for example). Environment Setup For this tutorial, I am using the following:Eclipse Indigo Service Release 2 along with the Google Plugin for Eclipse (see setup instructions). Google GData Java SDK Eclipse plugin version 1.47.1 (see setup instructions). Google App Engine release 1.6.5. Some problems exist with earlier versions, so I’d recommend making sure you are using it. It should install automatically as part of the Google Plugin for Eclipse. Objectify version 3.1. The required library is installed already in the project’s war/WEB-INF/lib directory.After you have imported the project into Eclipse, your build path should resemble:The App Engine settings should resemble:You will need to setup your own GAE application, along with specifying your own Application ID (see the Google GAE developer docs). The best tutorial I’ve seen that describes how to use OAuth to access Google API services can be found here. One of the more confusing aspects I found was how to acquire the necessary consumer key and consumer secret values that are required when placing the OAuth request. The way I accomplished this was:Create the GAE application using the GAE Admin Console. You will need to create your own Application ID (just a name for your webapp). Once you have it, you will update your Application ID in the Eclipse App Engine settings panel that is shown above. Create a new Domain for the application. For example, since my Application ID was specified above as ‘tennis-coachrx’, I configured the target URL path prefix as: http://tennis-coachrx.appspot.com/authSub. You will see how we configure that servlet to receive the credentials shortly. To complete the domain registration, Google will provide you an HTML file that you can upload. 
Include that file in the root path under the /src/war directory and upload the application to GAE. This way, when Google runs its check, the file will be present and it will generate the necessary consumer credentials. Here's a screenshot of what the setup looks like after it is completed:

Once you have the OAuth Consumer Key and OAuth Consumer Secret, replace the following values in the com.zazarie.shared.Constant file:

final static String CONSUMER_KEY = " ";
final static String CONSUMER_SECRET = " ";

Whew, that seemed like a lot of work! However, it's a one-time deal, and you shouldn't have to fuss with it again.

Code Walkthrough

Now that we have the OAuth configuration/setup out of the way, we can dig into the code. Let's begin by looking at the structure of the war directory, where your web assets reside:

The listFiles.jsp is the default JSP page displayed when you first enter the webapp. Let's now look at the web.xml file to see how this is configured, along with the servlet filter, which is central to everything.
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
         version="2.5" xmlns="http://java.sun.com/xml/ns/javaee">
  <!-- Filters -->
  <filter>
    <filter-name>AuthorizationFilter</filter-name>
    <filter-class>com.zazarie.server.servlet.filter.AuthorizationFilter</filter-class>
  </filter>
  <filter-mapping>
    <filter-name>AuthorizationFilter</filter-name>
    <url-pattern>/html/*</url-pattern>
  </filter-mapping>
  <!-- Servlets -->
  <servlet>
    <servlet-name>Step2</servlet-name>
    <servlet-class>com.zazarie.server.servlet.RequestTokenCallbackServlet</servlet-class>
  </servlet>
  <servlet-mapping>
    <servlet-name>Step2</servlet-name>
    <url-pattern>/authSub</url-pattern>
  </servlet-mapping>
  <!-- Default page to serve -->
  <welcome-file-list>
    <welcome-file>html/listFiles.jsp</welcome-file>
  </welcome-file-list>
</web-app>

The servlet filter called AuthorizationFilter is invoked whenever a JSP file located in the html directory is requested. The filter, as we'll look at in a moment, is responsible for ensuring that the user is logged into Google, and if so, then ensures that the OAuth credentials have been granted for that user (i.e., it will kick off the OAuth credentialing process, if required). The servlet name of Step2 represents the servlet that is invoked by Google when the OAuth credentials have been granted — think of it as a callback. We will look at this in more detail in a bit. Let's take a more detailed look at the AuthorizationFilter.

AuthorizationFilter Deep Dive

The doFilter method is where the work takes place in a servlet filter.
Here’s the implementation: @Override public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain) throws IOException, ServletException { HttpServletRequest request = (HttpServletRequest) req; HttpServletResponse response = (HttpServletResponse) res; HttpSession session = request.getSession(); LOGGER.info('Invoking Authorization Filter'); LOGGER.info('Destination URL is: ' + request.getRequestURI()); if (filterConfig == null) return; get the Google user AppUser appUser = LoginService.login(request, response); if (appUser != null) { session.setAttribute(Constant.AUTH_USER, appUser); } identify if user has an OAuth accessToken - it not, will set in motion oauth procedure if (appUser.getCredentials() == null) { need to save the target URI in session so we can forward to it when oauth is completed session.setAttribute(Constant.TARGET_URI, request.getRequestURI()); OAuthRequestService.requestOAuth(request, response, session); return; } else store DocService in the session so it can be reused session.setAttribute(Constant.DOC_SESSION_ID, LoginService.docServiceFactory(appUser)); chain.doFilter(request, response); } Besides the usual housekeeping stuff, the main logic begins with the line: AppUser appUser = LoginService.login(request, response); As we will see in a moment, the LoginService is responsible for logging the user into Google, and also will create the user in the local BigTable datastore. By storing the user locally, we can then store the user’s OAuth credentials, eliminating the need for the user to have to grant permissions every time they access a restricted/filtered page. 
After LoginService has returned the user (an AppUser object), we store that user object in the session (NOTE: to enable sessions, you must set sessions-enabled in the appengine-web.xml file):

session.setAttribute(Constant.AUTH_USER, appUser);

We then check whether OAuth credentials are associated with that user:

if (appUser.getCredentials() == null) {
    session.setAttribute(Constant.TARGET_URI, request.getRequestURI());
    OAuthRequestService.requestOAuth(request, response, session);
    return;
} else
    session.setAttribute(Constant.DOC_SESSION_ID, LoginService.docServiceFactory(appUser));

If getCredentials() returns null, the OAuth credentials have not yet been assigned for the user. This means the OAuth process needs to be kicked off. Since this involves a two-step process of posting the request to Google and then retrieving the results via the callback (the Step2 servlet mentioned above), we need to store the destination URL so that we can later redirect the user to it once the authorization process is completed. This is done by storing the requested URL in the session using the setAttribute method. We then kick off the OAuth process by calling the OAuthRequestService.requestOAuth() method (details discussed below). If getCredentials() returns a non-null value, we already have the user's OAuth credentials from their local AppUser entry in the datastore, and we simply add them to the session so that we can use them later.

LoginService Deep Dive

The LoginService class has one main method called login, followed by a bunch of JPA helper methods for saving or updating the local user in the datastore. We will focus on login(), since that is where most of the business logic resides.
public static AppUser login(HttpServletRequest req, HttpServletResponse res) {
    LOGGER.setLevel(Constant.LOG_LEVEL);
    LOGGER.info("Initializing LoginService");
    String URI = req.getRequestURI();
    UserService userService = UserServiceFactory.getUserService();
    User user = userService.getCurrentUser();
    if (user != null) {
        LOGGER.info("User id is: \"" + userService.getCurrentUser().getUserId() + "\"");
        String userEmail = userService.getCurrentUser().getEmail();
        AppUser appUser = (AppUser) req.getSession().getAttribute(Constant.AUTH_USER);
        if (appUser == null) {
            LOGGER.info("appUser not found in session");
            // see if it is a new user
            appUser = findUser(userEmail);
            if (appUser == null) {
                LOGGER.info("User not found in datastore...creating");
                appUser = addUser(userEmail);
            } else {
                LOGGER.info("User found in datastore...updating");
                appUser = updateUserTimeStamp(appUser);
            }
        } else {
            appUser = updateUserTimeStamp(appUser);
        }
        return appUser;
    } else {
        LOGGER.info("Redirecting user to login page");
        try {
            res.sendRedirect(userService.createLoginURL(URI));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    return null;
}

The first substantive thing we do is use Google's UserService class to determine whether the user is logged into Google:

UserService userService = UserServiceFactory.getUserService();
User user = userService.getCurrentUser();

If the User object returned by Google's call is null, the user isn't logged into Google, and they are redirected to a login page using:

res.sendRedirect(userService.createLoginURL(URI));

If the user is logged in (i.e., not null), the next thing we do is determine whether that user exists in the local datastore. This is done by looking up the user with their logged-in Google email address with appUser = findUser(userEmail). Since JPA/Objectify isn't the primary discussion point for this tutorial, I won't go into how that method works. However, the Objectify web site has some great tutorials/documentation.
If the user doesn’t exist locally, the object is populated with Google’s email address and created using appUser = addUser(userEmail). If the user does exist, we simply update the login timestamp for logging purposes.OAuthRequestService Deep DiveAs you may recall from earlier, once the user is setup locally, the AuthorizationFilter will then check to see whether the OAuth credentials have been granted by the user. If not, the OAuthRequestService.requestOAuth() method is invoked. It is shown below: public static void requestOAuth(HttpServletRequest req, HttpServletResponse res, HttpSession session) { LOGGER.setLevel(Constant.LOG_LEVEL); LOGGER.info('Initializing OAuthRequestService'); GoogleOAuthParameters oauthParameters = new GoogleOAuthParameters(); oauthParameters.setOAuthConsumerKey(Constant.CONSUMER_KEY); oauthParameters.setOAuthConsumerSecret(Constant.CONSUMER_SECRET); Set the scope. oauthParameters.setScope(Constant.GOOGLE_RESOURCE); Sets the callback URL. oauthParameters.setOAuthCallback(Constant.OATH_CALLBACK); GoogleOAuthHelper oauthHelper = new GoogleOAuthHelper( new OAuthHmacSha1Signer()); try { Request is still unauthorized at this point oauthHelper.getUnauthorizedRequestToken(oauthParameters); Generate the authorization URL String approvalPageUrl = oauthHelper .createUserAuthorizationUrl(oauthParameters); session.setAttribute(Constant.SESSION_OAUTH_TOKEN, oauthParameters.getOAuthTokenSecret()); LOGGER.info('Session attributes are: ' + session.getAttributeNames().hasMoreElements()); res.getWriter().print( '<a href='' + approvalPageUrl + ''>Request token for the Google Documents Scope'); } catch (OAuthException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } } To simplify working with OAuth, Google has a set of Java helper classes that we are utilizing. 
The first thing we need to do is set up the consumer credentials (acquiring those was discussed earlier):

GoogleOAuthParameters oauthParameters = new GoogleOAuthParameters();
oauthParameters.setOAuthConsumerKey(Constant.CONSUMER_KEY);
oauthParameters.setOAuthConsumerSecret(Constant.CONSUMER_SECRET);

Then, we set the scope of the OAuth request using:

oauthParameters.setScope(Constant.GOOGLE_RESOURCE);

where Constant.GOOGLE_RESOURCE resolves to https://docs.google.com/feeds/. When you make an OAuth request, you specify the scope of the resources you are attempting to gain access to. In this case, we are trying to access Google Docs (the GData APIs for each service have the scope URL provided). Next, we establish where we want Google to return the reply:

oauthParameters.setOAuthCallback(Constant.OATH_CALLBACK);

This value changes depending on whether we are running locally in dev mode or deployed to the Google App Engine. Here's how the values are defined in the Constant interface:

// Use for running on GAE
//final static String OATH_CALLBACK = "http://tennis-coachrx.appspot.com/authSub";
// Use for local testing
final static String OATH_CALLBACK = "";

We then sign the request using Google's helper:

GoogleOAuthHelper oauthHelper = new GoogleOAuthHelper(new OAuthHmacSha1Signer());

We then generate the URL that the user will navigate to in order to authorize access to the resource. This is generated dynamically using:

String approvalPageUrl = oauthHelper.createUserAuthorizationUrl(oauthParameters);

The last step is to provide a link to the user so that they can navigate to that URL to approve the request. This is done by constructing some simple HTML that is output using res.getWriter().print(). Once the user has granted access, Google calls back to the servlet identified by the URL parameter /authSub, which corresponds to the servlet class RequestTokenCallbackServlet. We will examine this next.
RequestTokenCallbackServlet Deep Dive

The servlet uses the Google OAuth helper classes to generate the access token and token secret that will be required on subsequent calls to the Google Docs API service. Here is the doGet method that receives the callback response from Google:

public void doGet(HttpServletRequest req, HttpServletResponse resp)
        throws ServletException, IOException {
    // Create an instance of GoogleOAuthParameters
    GoogleOAuthParameters oauthParameters = new GoogleOAuthParameters();
    oauthParameters.setOAuthConsumerKey(Constant.CONSUMER_KEY);
    oauthParameters.setOAuthConsumerSecret(Constant.CONSUMER_SECRET);
    GoogleOAuthHelper oauthHelper = new GoogleOAuthHelper(new OAuthHmacSha1Signer());
    String oauthTokenSecret = (String) req.getSession().getAttribute(Constant.SESSION_OAUTH_TOKEN);
    AppUser appUser = (AppUser) req.getSession().getAttribute(Constant.AUTH_USER);
    oauthParameters.setOAuthTokenSecret(oauthTokenSecret);
    oauthHelper.getOAuthParametersFromCallback(req.getQueryString(), oauthParameters);
    try {
        String accessToken = oauthHelper.getAccessToken(oauthParameters);
        String accessTokenSecret = oauthParameters.getOAuthTokenSecret();
        appUser = LoginService.getById(appUser.getId());
        appUser = LoginService.updateUserCredentials(appUser,
                new OauthCredentials(accessToken, accessTokenSecret));
        req.getSession().setAttribute(Constant.DOC_SESSION_ID,
                LoginService.docServiceFactory(appUser));
        RequestDispatcher dispatcher = req.getRequestDispatcher(
                (String) req.getSession().getAttribute(Constant.TARGET_URI));
        if (dispatcher != null)
            dispatcher.forward(req, resp);
    } catch (OAuthException e) {
        e.printStackTrace();
    }
}

The Google GoogleOAuthHelper is used to perform the housekeeping tasks required to populate the two values we are interested in:

String accessToken = oauthHelper.getAccessToken(oauthParameters);
String accessTokenSecret = oauthParameters.getOAuthTokenSecret();

Once we have these values, we then requery the user object from the
datastore, and save those values into the AppUser.OauthCredentials subclass:

appUser = LoginService.getById(appUser.getId());
appUser = LoginService.updateUserCredentials(appUser, new OauthCredentials(accessToken, accessTokenSecret));
req.getSession().setAttribute(Constant.DOC_SESSION_ID, LoginService.docServiceFactory(appUser));

In addition, you'll see they are also stored in the session so we have them readily available when the API request to Google Docs is placed. Now that we've got everything we need, we simply forward the user back to the resource they originally requested:

RequestDispatcher dispatcher = req.getRequestDispatcher((String) req
        .getSession().getAttribute(Constant.TARGET_URI));
dispatcher.forward(req, resp);

Now, when they access the JSP page listing their documents, everything should work! Here's a screencast demo of the final product: Hope you enjoyed the tutorial and demo — look forward to your comments! Continue to the second part of this tutorial. Reference: Authenticating for Google Services in Google App Engine from our JCG partner Jeff Davis at the Jeff’s SOA Ruminations blog....

Google Services Authentication in App Engine, Part 2

In the first part of this tutorial I described how to use OAuth for access/authentication for Google's API services. Unfortunately, as I discovered a bit later, the approach I used was OAuth 1.0, which has now been officially deprecated by Google in favor of version 2.0 of OAuth. Obviously, I was a bit bummed to discover this, and promised I would create a new blog entry with instructions on how to use 2.0. The good news is that, with the 2.0 support, Google has added some additional helper classes that make things easier, especially if you are using Google App Engine, which is what I'm using for this tutorial. The Google Developers site now has a pretty good description of how to set up OAuth 2.0. However, it still turned out to be a challenge to configure a real-life example of how it's done, so I figured I'd document what I've learned.

Tutorial Scenario

In the last tutorial, the project I created illustrated how to access a listing of a user's Google Docs files. In this tutorial, I changed things up a bit, and instead use YouTube's API to display a list of a user's favorite videos. Accessing a user's favorites does require authentication with OAuth, so this was a good test.

Getting Started

The first thing you must do is follow the steps outlined in Google's official docs on using OAuth 2.0. Since I'm creating a web app, you'll want to follow the section of those docs titled 'Web Server Applications'. In addition, the steps I talked about previously for setting up a Google App Engine application are still relevant, so I'm going to jump right into the code and bypass those setup steps. (NOTE: The Eclipse project can be found here — I again elected not to use Maven, in order to keep things simple for those who don't have it installed or aren't knowledgeable in Maven.)
The application flow is very simple (assuming a first-time user):

1. When the user accesses the webapp (assuming you are running it locally at http://localhost:8888 using the GAE developer emulator), they must first log in to Google using their gmail or Google domain account.
2. Once logged in, the user is redirected to a simple JSP page that has a link to their YouTube favorite videos.
3. When they click on the link, a servlet will initiate the OAuth process to acquire access to their YouTube account. The first part of this process is being redirected to a Google page that prompts them whether they want to grant the application access.
4. Assuming the user responds affirmatively, a list of 10 favorites will be displayed with links. If they click on a link, the video will load.

Here's a depiction of the first three pages of the flow:

And here are the last two pages (assuming that the user clicks on a given link):

While this example is specific to YouTube, the same general principles apply for accessing any of the Google-based cloud services, such as Google+, Google Drive, Docs, etc. The key enabler for creating such integrations is obviously OAuth, so let's look at how that process works.

OAuth 2.0 Process Flow

Using OAuth can be a bit overwhelming for the developer first learning the technology. The main premise behind it is to allow users to selectively identify which 'private' resources they want to make accessible to an external application, such as the one we are developing for this tutorial. By using OAuth, the user can avoid having to share their login credentials with a third party, and instead can simply grant that third party access to some of their information. To achieve this, the user is navigated to the source where their private data resides, in this case YouTube. They can then either allow or reject the access request. If they allow it, the source of the private data (YouTube) returns a single-use authorization code to the third-party application.
Since it’s rather tedious for the user to have to grant access every time access is desired, there is an additional call that can be played that will ‘trade-in’ their single use authorization for a longer term one. The overall flow for the web application we’re developing for this tutorial can be seen below. OAuth FlowThe first step that takes place is to determine whether the user is already logged into Google using either their gmail or Google Domain account. While not directly tied to the OAuth process, it’s very convenient to enable users to login with their Google account as opposed to requiring them to sign up with your web site. That’s the first callout that is made to Google. Then, once logged in, the application determines whether the user has a local account setup with OAuth permissions granted. If they are logging in for the first time, they won’t. In that case, the OAuth process is initiated. The first step of that process is to specify to the OAuth provider, in this case Google YouTube, what ‘scope’ of access is being requested. Since Google has a lot of services, they have a lot of scopes. You can determine this most easily using their OAuth 2.0 sandbox. When you kickoff the OAuth process, you provide them the scope(s) you want access to, along with the OAuth client credentials that Google has provided you (these steps are actually rather generic to any provider that supports OAuth). For our purposes, we’re seeking access to the user’s YouTube account, so the scope provided by Google is: https://gdata.youtube.com/. If the end-user grants access to the resource identify by the scope, Google will then post back an authorization code to the application. This is captured in a servlet. Since the returned code is only a ‘single-use’ code, it is exchanged for a longer running access token (and related refresh token). That step is represented above by the activity/box titled ‘Access & Refresh Token Requested’. 
Once armed with the access token, the application can then access the user's private data by placing an API call along with the token. If everything checks out, the API will return the results. It's not a terribly complicated process — it just involves a few steps. Let's look at some of the specific implementation details, beginning with the servlet filter that determines whether the user has already logged into Google and/or granted OAuth access.

AuthorizationFilter

Let's take a look at the first few lines of the AuthorizationFilter (to see how it's configured as a filter, see the web.xml file).

public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
        throws IOException, ServletException {
    HttpServletRequest request = (HttpServletRequest) req;
    HttpServletResponse response = (HttpServletResponse) res;
    HttpSession session = request.getSession();
    // if not present, add credential store to servlet context
    if (session.getServletContext().getAttribute(Constant.GOOG_CREDENTIAL_STORE) == null) {
        LOGGER.fine("Adding credential store to context " + credentialStore);
        session.getServletContext().setAttribute(Constant.GOOG_CREDENTIAL_STORE, credentialStore);
    }
    // if google user isn't in session, add it
    if (session.getAttribute(Constant.AUTH_USER_ID) == null) {
        LOGGER.fine("Add user to session");
        UserService userService = UserServiceFactory.getUserService();
        User user = userService.getCurrentUser();
        session.setAttribute(Constant.AUTH_USER_ID, user.getUserId());
        session.setAttribute(Constant.AUTH_USER_NICKNAME, user.getNickname());
        // if not running on app engine prod, hard-code my email address for testing
        if (SystemProperty.environment.value() == SystemProperty.Environment.Value.Production) {
            session.setAttribute(Constant.AUTH_USER_EMAIL, user.getEmail());
        } else {
            session.setAttribute(Constant.AUTH_USER_EMAIL, "jeffdavisco@gmail.com");
        }
    }

The first few lines simply cast the generic servlet request and response to their corresponding HTTP equivalents — this is
necessary since we want access to the HTTP session. The next step is to determine whether a CredentialStore is present in the servlet context. As we'll see, this is used to store the user's credentials, so it's convenient to have it readily available in subsequent servlets. The guts of the matter begin when we check whether the user is already present in the session using:

if (session.getAttribute(Constant.AUTH_USER_ID) == null) {

If not, we get their Google login credentials using Google's UserService class. This is a helper class available to GAE users to fetch the user's Google userid, email and nickname. Once we get this info from UserService, we store some of the user's details in the session. At this point, we haven't done anything with OAuth, but that will change in the next few lines of code:

try {
    Utils.getActiveCredential(request, credentialStore);
} catch (NoRefreshTokenException e1) {
    // if this catch block is entered, we need to perform the oauth process
    LOGGER.info("No user found - authorization URL is: " + e1.getAuthorizationUrl());
    response.sendRedirect(e1.getAuthorizationUrl());
}

A helper class called Utils is used for most of the OAuth processing. In this case, we're calling the static method getActiveCredential(). As we will see in a moment, this method throws a NoRefreshTokenException if no OAuth credentials have been previously captured for the user. As a custom exception, it carries the URL value used for redirecting the user to Google to seek OAuth approval. Let's take a look at the getActiveCredential() method in more detail, as that's where much of the OAuth handling is managed.
public static Credential getActiveCredential(HttpServletRequest request, CredentialStore credentialStore)
        throws NoRefreshTokenException {
    String userId = (String) request.getSession().getAttribute(Constant.AUTH_USER_ID);
    Credential credential = null;
    try {
        if (userId != null) {
            credential = getStoredCredential(userId, credentialStore);
        }
        if ((credential == null || credential.getRefreshToken() == null)
                && request.getParameter("code") != null) {
            credential = exchangeCode(request.getParameter("code"));
            LOGGER.fine("Credential access token is: " + credential.getAccessToken());
            if (credential != null) {
                if (credential.getRefreshToken() != null) {
                    credentialStore.store(userId, credential);
                }
            }
        }
        if (credential == null || credential.getRefreshToken() == null) {
            String email = (String) request.getSession().getAttribute(Constant.AUTH_USER_EMAIL);
            String authorizationUrl = getAuthorizationUrl(email, request);
            throw new NoRefreshTokenException(authorizationUrl);
        }
    } catch (CodeExchangeException e) {
        e.printStackTrace();
    }
    return credential;
}

The first thing we do is fetch the Google userId from the session (the user can't get this far without it being populated). Next, we attempt to get the user's OAuth credentials (stored in the Google class of the same name) from the CredentialStore using the Utils static method getStoredCredential(). If no credentials are found for that user, the Utils method called getAuthorizationUrl() is invoked. This method, shown below, constructs the URL that the browser is redirected to in order to prompt the user to authorize access to their private data (the URL is served up by Google, since it will ask the user for approval).
private static String getAuthorizationUrl(String emailAddress, HttpServletRequest request) {
    GoogleAuthorizationCodeRequestUrl urlBuilder = null;
    try {
        urlBuilder = new GoogleAuthorizationCodeRequestUrl(
                getClientCredential().getWeb().getClientId(),
                Constant.OATH_CALLBACK, Constant.SCOPES)
                .setAccessType("offline")
                .setApprovalPrompt("force");
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    urlBuilder.set("state", request.getRequestURI());
    if (emailAddress != null) {
        urlBuilder.set("user_id", emailAddress);
    }
    return urlBuilder.build();
}

As you can see, this method uses the class (from Google) called GoogleAuthorizationCodeRequestUrl. It constructs an HTTP call using the OAuth client credentials provided by Google when you sign up for OAuth access (those credentials, coincidentally, are stored in a file called client_secrets.json). Other parameters include the scope of the OAuth request and the URL that the user will be redirected back to if approval is granted. That URL is the one you specified when signing up for Google's OAuth access.

Now, if the user had already granted OAuth access, the getActiveCredential() method would instead grab the credentials from the CredentialStore. Turning back to the URL that receives the results of the OAuth request, in this case http://localhost:8888/authSub, you may be wondering how Google can post to that internal-only address. Well, it's the user's browser that is actually posting back the results, so localhost, in this case, resolves just fine. Let's look at the servlet called OAuth2Callback that is used to process this callback (see the web.xml for how the servlet mapping for authSub is done).
public class OAuth2Callback extends HttpServlet {
    private static final long serialVersionUID = 1L;
    private final static Logger LOGGER = Logger.getLogger(OAuth2Callback.class.getName());

    public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
        StringBuffer fullUrlBuf = request.getRequestURL();
        Credential credential = null;
        if (request.getQueryString() != null) {
            fullUrlBuf.append('?').append(request.getQueryString());
        }
        LOGGER.info("requestURL is: " + fullUrlBuf);
        AuthorizationCodeResponseUrl authResponse = new AuthorizationCodeResponseUrl(fullUrlBuf.toString());
        // check for user-denied error
        if (authResponse.getError() != null) {
            LOGGER.info("User-denied access");
        } else {
            LOGGER.info("User granted oauth access");
            String authCode = authResponse.getCode();
            request.getSession().setAttribute("code", authCode);
            response.sendRedirect(authResponse.getState());
        }
    }
}

The most important take-away from this class is the line:

AuthorizationCodeResponseUrl authResponse = new AuthorizationCodeResponseUrl(fullUrlBuf.toString());

The AuthorizationCodeResponseUrl class is provided as a convenience by Google to parse the results of the OAuth request. If the getError() method of that class isn't null, the user rejected the request. If it is null, indicating the user approved the request, the getCode() method is used to retrieve the one-time authorization code.
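In essence, AuthorizationCodeResponseUrl is pulling the standard code, state and error parameters out of the callback's query string. A rough, JDK-only equivalent of that parsing step (the query string below is a made-up example, not one captured from the app):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.util.HashMap;
import java.util.Map;

public class CallbackParseSketch {

    // Splits a query string such as "code=4%2Fabc&state=%2Fhtml%2FlistVids.jsp"
    // into a decoded parameter map - roughly what AuthorizationCodeResponseUrl does.
    public static Map<String, String> parseQuery(String query) throws UnsupportedEncodingException {
        Map<String, String> params = new HashMap<>();
        for (String pair : query.split("&")) {
            int eq = pair.indexOf('=');
            if (eq > 0) {
                params.put(URLDecoder.decode(pair.substring(0, eq), "UTF-8"),
                           URLDecoder.decode(pair.substring(eq + 1), "UTF-8"));
            }
        }
        return params;
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> p = parseQuery("code=4%2Fabc123&state=%2Fhtml%2FlistVids.jsp");
        // An "error" parameter instead of "code" would mean the user denied access.
        System.out.println(p.get("code") + " -> " + p.get("state"));
    }
}
```

The real helper additionally validates the URL as a whole, which this sketch skips.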
This code value is placed into the user's session, and when Utils.getActiveCredential() is invoked following the redirect to the user's target URL (via the filter), it will exchange that authorization code for longer-term access and refresh tokens using the call:

credential = exchangeCode((String) request.getSession().getAttribute("code"));

The Utils.exchangeCode() method is shown next:

public static Credential exchangeCode(String authorizationCode) throws CodeExchangeException {
    try {
        GoogleTokenResponse response = new GoogleAuthorizationCodeTokenRequest(
                new NetHttpTransport(), Constant.JSON_FACTORY,
                Utils.getClientCredential().getWeb().getClientId(),
                Utils.getClientCredential().getWeb().getClientSecret(),
                authorizationCode, Constant.OATH_CALLBACK).execute();
        return Utils.buildEmptyCredential().setFromTokenResponse(response);
    } catch (IOException e) {
        e.printStackTrace();
        throw new CodeExchangeException();
    }
}

This method uses a Google class called GoogleAuthorizationCodeTokenRequest to call Google and exchange the one-time OAuth authorization code for the longer-duration access token. Now that we've (finally) got the access token needed for the YouTube API, we're ready to display to the user 10 of their video favorites.

Calling the YouTube API Services

With the access token in hand, we can now proceed to display the user's list of favorites. In order to do this, a servlet called FavoritesServlet is invoked. It calls the YouTube API, parses the resulting JSON-C format into some local Java classes via Jackson, and then sends the results to the JSP page for processing.
Here’s the servlet: public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { LOGGER.fine('Running FavoritesServlet'); Credential credential = Utils.getStoredCredential((String) request.getSession().getAttribute(Constant.AUTH_USER_ID), (CredentialStore) request.getSession().getServletContext().getAttribute(Constant.GOOG_CREDENTIAL_STORE)); VideoFeed feed = null; if the request fails, it's likely because access token is expired - we'll refresh try { LOGGER.fine('Using access token: ' + credential.getAccessToken()); feed = YouTube.fetchFavs(credential.getAccessToken()); } catch (Exception e) { LOGGER.fine('Refreshing credentials'); credential.refreshToken(); credential = Utils.refreshToken(request, credential); GoogleCredential googleCredential = Utils.refreshCredentials(credential); LOGGER.fine('Using refreshed access token: ' + credential.getAccessToken()); retry feed = YouTube.fetchFavs(credential.getAccessToken()); } LOGGER.fine('Video feed results are: ' + feed); request.setAttribute(Constant.VIDEO_FAVS, feed); RequestDispatcher dispatcher = getServletContext().getRequestDispatcher('htmllistVids.jsp'); dispatcher.forward(request, response); } Since this post is mainly about the OAuth process, I won’t go into too much detail how the API call is placed, but the most important line of code is: feed = YouTube.fetchFavs(credential.getAccessToken()); Where feed is an instance of VideoFeed. As you can see, another helper class called YouTube is used for doing the heavy-lifting. Just to wrap things up, I’ll show the fetchFavs() method. 
public static VideoFeed fetchFavs(String accessToken) throws IOException, HttpResponseException {
    HttpTransport transport = new NetHttpTransport();
    final JsonFactory jsonFactory = new JacksonFactory();
    HttpRequestFactory factory = transport.createRequestFactory(new HttpRequestInitializer() {
        @Override
        public void initialize(HttpRequest request) {
            // set the parser
            JsonCParser parser = new JsonCParser(jsonFactory);
            request.addParser(parser);
            // set up the Google headers
            GoogleHeaders headers = new GoogleHeaders();
            headers.setApplicationName("YouTube Favorites1.0");
            headers.gdataVersion = "2";
            request.setHeaders(headers);
        }
    });
    // build the YouTube URL
    YouTubeUrl url = new YouTubeUrl(Constant.GOOGLE_YOUTUBE_FEED);
    url.maxResults = 10;
    url.access_token = accessToken;
    // build the HTTP GET request
    HttpRequest request = factory.buildGetRequest(url);
    // execute the request and parse the video feed
    HttpResponse response = request.execute();
    VideoFeed feed = response.parseAs(VideoFeed.class);
    return feed;
}

It uses the Google class HttpRequestFactory to construct an outbound HTTP API call to YouTube. Since we’re using GAE, we’re limited in which classes we can use to place such requests. Notice the line of code:

url.access_token = accessToken;

That’s where we use the access token that was acquired through the OAuth process. So, while it took a fair amount of code to get the OAuth stuff working correctly, once it’s in place, you are ready to rock and roll with calling all sorts of Google API services!

Reference: Authenticating for Google Services, Part 2 from our JCG partner Jeff Davis at the Jeff’s SOA Ruminations blog....
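A short postscript on the URL handling above: the YouTubeUrl class ultimately just appends query parameters such as max-results and access_token to the feed URL. A hand-rolled equivalent, with a hypothetical helper (the parameter names follow the GData conventions in the article; the exact query string Google expects may differ):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class FeedUrlBuilder {

    /** Appends max-results and access_token query parameters to a feed URL. */
    public static String buildFeedUrl(String baseUrl, int maxResults, String accessToken)
            throws UnsupportedEncodingException {
        return baseUrl
                + "?max-results=" + maxResults
                + "&access_token=" + URLEncoder.encode(accessToken, "UTF-8");
    }
}
```

This makes it clear why setting url.access_token is all the authentication plumbing the request itself needs: the token simply rides along as a query parameter.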

Fast, Predictable & Highly-Available @ 1 TB/Node

The world is pushing huge amounts of data to applications every second, from mobiles, the web, and various gadgets. More applications these days have to deal with this data, and to preserve performance they need fast access to the data tier.

RAM prices have crumbled over the past few years, and we can now get hardware with a terabyte of RAM much more cheaply. OK, got the hardware, now what? We generally use virtualization to create smaller virtual machines to meet applications’ scale-out requirements, since a Java application with a terabyte of heap is impractical: JVM garbage collection will slaughter your application right away. Ever imagined how much time it would take to do a single full garbage collection of a terabyte of heap? It can pause an application for hours, making it unusable.

BigMemory is the key to accessing terabytes of data with milliseconds of latency, with no maintenance of disk/RAID configurations or databases.

BigMemory = Big Data + In-memory

BigMemory can utilize your hardware down to the last byte of RAM, storing up to a terabyte of data in a single Java process. BigMemory provides “fast”, “predictable” and “highly-available” data at 1 terabyte per node.

The following test uses two boxes, each with a terabyte of RAM. Leaving enough room for the OS, we were able to allocate 2 x 960 GB of BigMemory, for a total of 1.8+ TB of data, without facing the problems of high latencies or huge scale-out architectures, just using the hardware as it is.

Test results: 23K read-only transactions per second with 20 ms latency. Graphs for test throughput and periodic latency over time.

Read-only Periodic Throughput Graph

Read-only Periodic Latency Graph

Reference: Fast, Predictable & Highly-Available @ 1 TB/Node from our JCG partner Himadri Singh at the Billions & Terabytes blog....
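A postscript on the GC point above: off-heap stores like BigMemory avoid collector pauses by keeping data in memory the garbage collector never scans. Plain Java exposes the underlying mechanism, direct allocation, through NIO. This tiny sketch is not BigMemory code (BigMemory layers its own allocator and serialization on top), just an illustration of the idea:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class OffHeapSketch {

    /** Writes a string into direct (off-heap) memory and reads it back. */
    public static String roundTrip(String value) {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        // Allocated outside the Java heap: the GC never scans this memory,
        // so data stored here contributes nothing to collection pauses.
        ByteBuffer offHeap = ByteBuffer.allocateDirect(4 + bytes.length);
        offHeap.putInt(bytes.length);
        offHeap.put(bytes);
        // Rewind and read the value back out of off-heap memory.
        offHeap.flip();
        byte[] read = new byte[offHeap.getInt()];
        offHeap.get(read);
        return new String(read, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("stored off the heap"));
    }
}
```

The trade-off is that off-heap data must be serialized in and out by hand (or by a product like BigMemory), which is the price paid for escaping the collector.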

JavaOne 2012 – 2400 hours to go! Some recommendations

As you might have seen, the JavaOne 2012 Content Catalog is online. The Program Committee had some very intense weeks of sorting, reviewing, rating and discussing every single proposal, and we finally managed to put together a (hopefully) interesting mix for you. With exactly 105 days or 2400 hours to go, I thought it would be a good day to offer you a list of my favorites to come.

I had the pleasure of working with two teams on the program tracks this year, so you will get recommendations for both the Java EE Web Profile and Platform Technologies track and the Enterprise Service Architectures and the Cloud track. Let’s start with the first one.

Java EE Web Profile and Platform Technologies

This track has 64 sessions, 31 BoFs and 6 tutorials overall. It’s hard to highlight the right ones here without being unfair to anybody. All the speakers did a great job in submitting their proposals and it was amazingly hard to pick the final ones. Even if I have my favorites, I highly encourage you to look at all the content in this track and make your own decisions!

50 Tips in 50 Minutes for GlassFish Fans – Arun Gupta, Christopher Kasso – Oracle (CON4701)
This fast-paced session presents 50 tips and tricks for using GlassFish Server technology. Presented by two GlassFish experts, the session offers tips to help novice users as well as seasoned developers get the most out of GlassFish.

Apache TomEE, Java EE 6 Web Profile on Tomcat – David Blevins – IBM (CON7469)
Making its Java EE 6 Web Profile certification debut at JavaOne 2011, Apache TomEE combines the simplicity of Tomcat with the power of Java EE. If you’re a Tomcat lover or a TomEE enthusiast, this is the session you don’t want to miss!

Building HTML5 Web Applications with Avatar – Bryan Atsatt and Santiago Pericas-Geertsen – Oracle (CON7042)
This session focuses on how to build HTML5, thin-server Web applications with the Avatar framework.
It introduces the notion of thin-server architectures as well as the major features in the Avatar framework for building rich UI applications.

Standardizing Web Flow Technology with JSF Faces Flows – Edward Burns and David Schneider – Oracle (CON4627)
Faces Flows is a flow technology based on Oracle Application Development Framework (Oracle ADF) task flows and Spring Web Flow. This session provides an overview of the Faces Flows technology and how it can be used to increase application modularity and code reuse.

Real-World Java EE 6 Tutorial – Paul Bakker and Bert Ertman – Luminis Technologies (TUT5064)
This tutorial demonstrates how to use the Java EE 6 APIs together to build a portable, full-stack enterprise application and solve real-world problems. It not only focuses on the APIs but also shows you how to set up a vanilla Maven build from scratch and do unit and integration testing, going into almost all parts of the Java EE 6 specs.

GlassFish Community BOF – Anil Gaur and Arun Gupta – Oracle (BOF4670)
The GlassFish community is large and vibrant and has had a tradition of getting together at JavaOne for the past few years. Attend this BOF to meet the key members of the Oracle GlassFish team. They will share the roadmap for how Java EE 7 will provide a standards-based PaaS platform for running your enterprise Java applications in the cloud.

Nashorn, Node, and Java Persistence – Douglas Clarke and Akhil Arora – Oracle (BOF6661)
With Project Nashorn, developers will have a full and modern JavaScript engine available on the JVM. In addition, they will have support for running Node applications with Node.jar. This unique combination of capabilities opens the door for best-of-breed applications combining Node with Java SE and Java EE.

Meet the Java EE 7 Specification Leads – Linda DeMichiel and William Shannon – Oracle (BOF4213)
This is your chance to meet face-to-face with the engineers who are developing the next version of the Java EE platform.
In this session, the specification leads for the leading technologies that are part of the Java EE 7 platform discuss new and upcoming features and answer your questions. Come prepared with your questions, your feedback, and your suggestions for new features in Java EE 7 and beyond.

The Arquillian Universe: A Tour Around the Astrophysics Lab – Dan Allen and Aslak Knutsen – Red Hat (CON6918)
This presentation guides you through the Arquillian extensions by demonstrating how specific extensions solve common problematic testing scenarios faced by enterprise developers. You will get an overview of what is available and possible today as well as what is brewing in the community.

Enterprise Service Architectures and the Cloud

This track has 61 sessions, 24 BoFs and 4 tutorials. The same caveat as for the previous track applies here: that is far too much content to single out only a few favorites. So please see this as a good excuse to make your own decisions ;)

GlassFish 4: From Clustering to the Cloud – Fabien Leroy – SERLI (CON4930)
Expected by the end of 2012, GlassFish 4 leverages the 3.1 clustering functionality to enter the cloud computing era. The session takes a look under the hood to show you what makes GlassFish 4 a PaaS solution able to dynamically allocate all the services needed by an application. See a live demo of GlassFish cloud features already running on a VMware virtual cluster.

Making Apps Scale with CDI and Data Grids – Manik Surtani – Red Hat (CON5875)
This session walks through a live demo of building a Website with CDI, clustering it with Java EE clustering capabilities, and then introducing a data grid into the mix to dramatically boost performance and load-handling capacity.

Utilize the Full Power of GlassFish Server and Java EE Security – Masoud Kalali – Oracle (CON3964)
In this session, learn how to utilize Java EE security and what GlassFish Server technology provides to address your security requirements.
The presentation explains a two-phase authentication mechanism.

Other News and Noteworthy Things

You might have heard that the Java Strategy, Partner, and Technical keynotes will be held on the Sunday of conference week, beginning at 4:00 p.m. at the historic Masonic Auditorium on Nob Hill. After the keynotes, attendees can go to the official JavaOne Open House at the Taylor Street Café @ the Zone. As in years past, Sunday will feature User Group meetings (at Moscone West) and Java University courses (Hilton San Francisco Union Square). On Thursday, the Java Community keynote will return. More information should flow out in the next couple of weeks. If you have not already done so, register.

Planning to go? Have a look at my post 10 Ways to make the Best out of a Conference.

Reference: JavaOne 2012 – 2400 hours to go! Some recommendations from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog....

Moving Beyond Core Hamcrest in JUnit

In the post Improving On assertEquals with JUnit and Hamcrest I introduced the use of Hamcrest with JUnit. I then looked at JUnit’s Built-in Hamcrest Core Matcher Support. In this post, I look at how to apply Hamcrest’s non-core matchers with JUnit. These non-core matchers are NOT included with JUnit by default, but become available by adding a Hamcrest JAR to the classpath.

Although JUnit’s inclusion of the Hamcrest core matchers makes those matchers easier to use, it can make using the non-core matchers more difficult, and this is a well-known issue.

Because the non-core Hamcrest matchers are not included with JUnit, the Hamcrest JAR needs to be downloaded separately. For the examples in this post, I am using hamcrest-all-1.2.jar.

The next screen snapshot shows the problems with combining the hamcrest-all JAR with the normal JUnit library (JUnit 4.10 as provided by NetBeans 7.2 beta in my example). As the screen snapshot indicates, when junit-4.10.jar is included in the NetBeans libraries BEFORE hamcrest-all-1.2.jar, the previously working code (from my previous post) breaks. Both NetBeans and the command-line compiler show this breakage in the screen snapshot.

Switching the order of the test libraries, so that the Hamcrest library is listed first and the JUnit JAR after it, makes the compilation error in the test code go away. This is shown in the next screen snapshot.

Although including the Hamcrest JAR before the JUnit JAR does prevent the build problem, relying on a specific library order is too fragile for long-term maintainability. Fortunately, JUnit directly supports a better approach to this issue: a special Hamcrest-less JUnit JAR can be downloaded. The next screen snapshot shows the one I use in this example: junit-dep-4.10.jar. The -dep in the JAR name is the clue that it’s Hamcrest-free.
The notation next to the JAR on the download page (screen snapshot shown next) points this out as well (“Jar without hamcrest”). With the Hamcrest-free “dep” version of the JUnit JAR, I can include it in the test libraries at any point relative to the Hamcrest JAR and still build the test code. This is a much more robust approach than relying on a specific order of test libraries. The next image shows NetBeans and the command-line build succeeding even with the JUnit JAR listed first.

With the appropriate libraries in use (the JUnit “dep” JAR and the Hamcrest “all” JAR), all of Hamcrest’s matchers can be used in JUnit-based tests. Hamcrest provides numerous matchers beyond the core matchers that are now bundled with JUnit. One way to get an idea of the additional matchers available is to look at the classes in the Hamcrest JAR. The following is output from running a jar tvf command against the Hamcrest JAR, with many of the entries removed to leave some of the most interesting ones. The “core” matchers tend to be based on the classes in the “core” package, and the non-core matchers on the classes in the other packages, those without “core” in their name.
4029 Thu May 21 23:21:20 MDT 2009 org/hamcrest/core/AllOf.java
3592 Thu May 21 23:21:20 MDT 2009 org/hamcrest/core/AnyOf.java
1774 Thu May 21 23:21:20 MDT 2009 org/hamcrest/core/CombinableMatcher.java
1754 Thu May 21 23:21:20 MDT 2009 org/hamcrest/core/DescribedAs.java
1104 Thu May 21 23:21:20 MDT 2009 org/hamcrest/core/Every.java
2088 Thu May 21 23:21:20 MDT 2009 org/hamcrest/core/Is.java
1094 Thu May 21 23:21:20 MDT 2009 org/hamcrest/core/IsAnything.java
2538 Thu May 21 23:21:20 MDT 2009 org/hamcrest/core/IsCollectionContaining.java
1862 Thu May 21 23:21:20 MDT 2009 org/hamcrest/core/IsEqual.java
2882 Thu May 21 23:21:20 MDT 2009 org/hamcrest/core/IsInstanceOf.java
1175 Thu May 21 23:21:20 MDT 2009 org/hamcrest/core/IsNot.java
1230 Thu May 21 23:21:20 MDT 2009 org/hamcrest/core/IsNull.java
 960 Thu May 21 23:21:20 MDT 2009 org/hamcrest/core/IsSame.java
 675 Thu May 21 23:21:20 MDT 2009 org/hamcrest/core/StringContains.java
 667 Thu May 21 23:21:20 MDT 2009 org/hamcrest/core/StringEndsWith.java
 678 Thu May 21 23:21:20 MDT 2009 org/hamcrest/core/StringStartsWith.java
2557 Thu May 21 23:21:20 MDT 2009 org/hamcrest/collection/IsArray.java
1805 Thu May 21 23:21:20 MDT 2009 org/hamcrest/collection/IsArrayContaining.java
1883 Thu May 21 23:21:20 MDT 2009 org/hamcrest/collection/IsArrayContainingInAnyOrder.java
1765 Thu May 21 23:21:20 MDT 2009 org/hamcrest/collection/IsArrayContainingInOrder.java
1388 Thu May 21 23:21:20 MDT 2009 org/hamcrest/collection/IsArrayWithSize.java
1296 Thu May 21 23:21:20 MDT 2009 org/hamcrest/collection/IsCollectionWithSize.java
 812 Thu May 21 23:21:20 MDT 2009 org/hamcrest/collection/IsEmptyCollection.java
 866 Thu May 21 23:21:20 MDT 2009 org/hamcrest/collection/IsEmptyIterable.java
1086 Thu May 21 23:21:20 MDT 2009 org/hamcrest/collection/IsIn.java
3426 Thu May 21 23:21:20 MDT 2009 org/hamcrest/collection/IsIterableContainingInAnyOrder.java
3479 Thu May 21 23:21:20 MDT 2009 org/hamcrest/collection/IsIterableContainingInOrder.java
 993 Thu May 21 23:21:20 MDT 2009 org/hamcrest/collection/IsIterableWithSize.java
1899 Thu May 21 23:21:20 MDT 2009 org/hamcrest/collection/IsMapContaining.java
1493 Thu May 21 23:21:20 MDT 2009 org/hamcrest/collection/IsMapContainingKey.java
1421 Thu May 21 23:21:20 MDT 2009 org/hamcrest/collection/IsMapContainingValue.java
1380 Thu May 21 23:21:20 MDT 2009 org/hamcrest/number/IsCloseTo.java
2878 Thu May 21 23:21:20 MDT 2009 org/hamcrest/number/OrderingComparison.java
1082 Thu May 21 23:21:20 MDT 2009 org/hamcrest/object/HasToString.java
 918 Thu May 21 23:21:20 MDT 2009 org/hamcrest/object/IsCompatibleType.java
2080 Thu May 21 23:21:20 MDT 2009 org/hamcrest/object/IsEventFrom.java
1164 Thu May 21 23:21:20 MDT 2009 org/hamcrest/text/IsEmptyString.java
1389 Thu May 21 23:21:20 MDT 2009 org/hamcrest/text/IsEqualIgnoringCase.java
2058 Thu May 21 23:21:20 MDT 2009 org/hamcrest/text/IsEqualIgnoringWhiteSpace.java
1300 Thu May 21 23:21:20 MDT 2009 org/hamcrest/text/StringContainsInOrder.java
4296 Thu May 21 23:21:20 MDT 2009 org/hamcrest/xml/HasXPath.java

JUnit’s offering of a JAR without Hamcrest built in (the “dep” JAR) lets developers build up their classpaths more deliberately when Hamcrest matchers beyond the “core” set are desired for use with JUnit.

Reference: Moving Beyond Core Hamcrest in JUnit from our JCG partner Dustin Marx at the Inspired by Actual Events blog....
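As a concrete postscript (an illustrative example, not code from the original post): once junit-dep-4.10.jar and hamcrest-all-1.2.jar are on the classpath, the non-core matchers in the listing above read just like the core ones. The values here are made up for the example:

```java
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.closeTo;
import static org.hamcrest.Matchers.equalToIgnoringCase;
import static org.hamcrest.Matchers.hasSize;

import java.util.Arrays;
import java.util.List;
import org.junit.Test;

public class NonCoreMatchersTest {

    @Test
    public void nonCoreMatchers() {
        // number: IsCloseTo
        assertThat(3.14, closeTo(Math.PI, 0.01));

        // text: IsEqualIgnoringCase
        assertThat("Hamcrest", equalToIgnoringCase("HAMCREST"));

        // collection: IsCollectionWithSize
        List<String> jars = Arrays.asList("junit-dep-4.10.jar", "hamcrest-all-1.2.jar");
        assertThat(jars, hasSize(2));
    }
}
```

None of these three matchers is available from JUnit’s bundled core set, which is exactly why the classpath arrangement described above matters.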
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.