Using Twitter4j with Scala to perform user actions

Introduction

My previous post showed how to use Twitter4j in Scala to access Twitter streams. This post shows how to control a Twitter user's actions using Twitter4j. The primary purpose of this functionality is perhaps to create interfaces for Twitter like TweetDeck, but it can also be used to create bots that take automated actions on Twitter (one bot I'm playing around with is @tshrdlu, using the code in this tutorial and the code in the tshrdlu repository). This post covers only a small portion of the things you can do, but they are some of the more common ones, and I include a couple of simple but interesting use cases. Once you have these things in place, it is straightforward to figure out how to use the Twitter4j API docs (and Stack Overflow) to do the rest.

Getting set up: code and authorization

Rather than having the reader build the code up while going through the tutorial, I've set up the code in the repository twitter4j-tutorial. The version needed for this tutorial is v0.2.0. You can download a tarball of that version, which may be easier to work with if there have been further developments to the repository since the writing of this tutorial. Check out or download that code now.

The main file of interest is:

src/main/scala/TwitterUser.scala

This tutorial is mainly a walk-through of that file in blog form, with some additional pointers and explanations here and there.

You also need to set up the authorization details. See the "Setting up authorization" section of the previous post if you haven't done this already. IMPORTANT: For this tutorial you must set the permissions for your application to "Read and Write".

In the previous tutorial, authorization details were put into code. This time, we'll use a twitter4j.properties file. This is easy: just add a file with that name to the twitter4j-tutorial directory with the following contents, substituting your details as appropriate.
oauth.consumerKey=[your consumer key here]
oauth.consumerSecret=[your consumer secret here]
oauth.accessToken=[your access token here]
oauth.accessTokenSecret=[your access token secret here]

Rate limits and a note of caution

Unlike streaming access to Twitter, performing user actions via the API is subject to rate limits. Once you hit your limit, Twitter will throw an exception and refuse to comply with your requests until a period of time has passed (usually 15 minutes). Twitter does this to limit bad bots and also to preserve its computational resources. For more information, see Twitter's page about rate limiting. I'll discuss how to manage rate limits later in the post, but I mention them up front in case you exceed them while messing around with things early on.

A word of caution is also in order: since you are going to be able to take actions automatically, like following users, posting a status, and retweeting, you could end up doing many of these actions in rapid succession. This will (a) use up your rate limit very quickly, (b) probably not be interesting behavior, and (c) could get your account suspended. Make sure to follow the rules, especially those on following users. If you are going to mess around quite a bit with actual posting, you may also want to consider creating an account that is not your primary Twitter account so that you don't annoy your actual followers. (Suggestion: see the paragraph on "Create account" in part one of project phase one of my Applied NLP course for tips on how to add multiple accounts with the same gmail address.)

Basic interactions: searching, timelines, posting

All of the examples below are implemented as objects with main methods that do something using a twitter4j.Twitter object. To make it so we don't have to call the TwitterFactory repeatedly, we first define a trait that gets a Twitter instance set up and ready to use.
trait TwitterInstance {
  val twitter = new TwitterFactory().getInstance
}

By extending this trait, our objects can access the twitter object conveniently. As a first simple example, we can search for tweets that match a query by using the search method. The following object takes a query string given on the command line, searches for tweets using that query, and prints them.

object QuerySearch extends TwitterInstance {
  def main(args: Array[String]) {
    val statuses = twitter.search(new Query(args(0))).getTweets
    statuses.foreach(status => println(status.getText + "\n"))
  }
}

Note that this uses a Query object, whereas a FilterQuery was needed when using a TwitterStream. Also, for this to work, we must have the following import available:

import collection.JavaConversions._

This ensures that we can use the java.util.List returned by the getTweets method (of twitter4j.QueryResult) as if it were a Scala collection with methods like foreach (and map, filter, etc.). This is done via implicit conversions that make working with Java libraries far nicer than it would be otherwise.

To run this, go to the twitter4j-tutorial directory and do the following (some example output shown):

$ ./build
> run-main bcomposes.twitter.QuerySearch scala
[info] Running bcomposes.twitter.QuerySearch scala
E' avvilente non sentirsi all'altezza di qualcosa o qualcuno, se non si possiede quella scala interiore sulla quale l'autostima pu? issarsi
Scala workshop will run with ECOOP, July 2nd in Montpellier, South of France. Call for papers is out. http://t.co/3WS6tHQyiF
#scala http://t.co/JwNrzXTwm8 Even two of them in #cologne #germany . #thumbsup
RT @MILLIB2DAL: @djcameo Birthday bash 30th march @ Scala nightclub 100 artists including myself make sur u reach its gonna be #Legendary
@kot_2010 I think it's the same case with Scala: with macros it will tend to 'outsource' things to macro libs, keeping a small lang core.
RT @waxzce: #scala hiring or job ?
go there : http://t.co/NeEjoqwqwT
@esten That's not only a front-end problem. Scala devs should use scalaz.Equal and === for type safe equality. /cc @sharonw
<...more...>
[success] Total time: 1 s, completed Feb 26, 2013 1:54:44 PM

You might see some extra communications from SBT, which will probably need to download dependencies and compile the code. For the rest of the examples below, you can run them in a similar manner, substituting the right object name and providing any necessary arguments.

There are various timelines available for each user, including the home timeline, mentions timeline, and user timeline. They are accessible as twitter4j.api.TimelineResources. For example, the following object shows the most recent statuses on the authenticating user's home timeline (which are the tweets by people the user follows).

object GetHomeTimeline extends TwitterInstance {
  def main(args: Array[String]) {
    val num = if (args.length == 1) args(0).toInt else 10
    val statuses = twitter.getHomeTimeline.take(num)
    statuses.foreach(status => println(status.getText + "\n"))
  }
}

The number of tweets to show is given as the command-line argument. You can also update the status of the authenticating user from the command line using the following object. Calling it will post to the authenticating user's account (so only do it if you are comfortable with the command-line argument you give it going onto your timeline).

object UpdateStatus extends TwitterInstance {
  def main(args: Array[String]) {
    twitter.updateStatus(new StatusUpdate(args(0)))
  }
}

There are plenty of other useful methods for interacting with Twitter, and if you have successfully run the above three, you should be able to look at the Twitter4j javadocs and start using them. Some examples doing more interesting things are given below.

Replying to tweets written to you

The following object goes through the most recent tweets that have mentioned the authenticating user and replies "OK." to them.
It includes the author of the original tweet and any other entities that were mentioned in it.

object ReplyOK extends TwitterInstance {
  def main(args: Array[String]) {
    val num = if (args.length == 1) args(0).toInt else 10
    val userName = twitter.getScreenName
    val statuses = twitter.getMentionsTimeline.take(num)
    statuses.foreach { status =>
      val statusAuthor = status.getUser.getScreenName
      val mentionedEntities = status.getUserMentionEntities.map(_.getScreenName).toList
      val participants = (statusAuthor :: mentionedEntities).toSet - userName
      val text = participants.map(p => "@" + p).mkString(" ") + " OK."
      val reply = new StatusUpdate(text).inReplyToStatusId(status.getId)
      println("Replying: " + text)
      twitter.updateStatus(reply)
    }
  }
}

This should be mostly self-explanatory, but there are a couple of things to note. First, you can find all the entities that have been mentioned (via @-mentions) in the tweet via the getUserMentionEntities method of the twitter4j.Status class. The code ensures that the author of the original tweet (who isn't necessarily mentioned in it) is included as a participant for the response, and it also takes out the authenticating user. So, if the message "@tshrdlu What do you think of @tshrdlc?" is sent from @jasonbaldridge, the response will be "@jasonbaldridge @tshrdlc OK." Note that the screen names do not include the @ symbol, so it must be added in the tweet text of the reply.

Second, notice that StatusUpdate objects can be created by chaining methods that add more information to them, e.g. inReplyToStatusId and location, which incrementally build up the StatusUpdate object that actually gets posted. (This is a common Java strategy that basically helps get around the fact that parameters to classes in Java can neither be specified by name nor have defaults, the way Scala parameters can.)

Checking and managing rate limit information

None of the above code makes many requests of Twitter, so there was little danger of exceeding rate limits.
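As an aside, the chained-method strategy mentioned above is easy to see in isolation. The StatusDraft class below is invented purely for illustration (it is not Twitter4j's StatusUpdate); it shows the core of the pattern: each chaining method records one piece of state and returns this.

```java
// Hypothetical sketch of a fluent builder, NOT a Twitter4j class.
class StatusDraft {
    private final String text;
    private long inReplyToStatusId = -1L;  // -1 means "not a reply"

    StatusDraft(String text) { this.text = text; }

    // Returning `this` is what makes the chaining work.
    StatusDraft inReplyToStatusId(long id) {
        this.inReplyToStatusId = id;
        return this;
    }

    String text() { return text; }
    long replyId() { return inReplyToStatusId; }
}
```

Usage then reads much like the ReplyOK code: `new StatusDraft("@jasonbaldridge OK.").inReplyToStatusId(12345L)`.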
These limits are a mixture of time and number of requests: you basically get a certain number of requests every hour (currently 350) per authenticating user. Because of these limits, you should consider accessing tweets, timelines, and such using the streaming methods when you can.

Every response you get back from Twitter comes as a subclass of twitter4j.TwitterResponse, which not only gives you what you want (like a QueryResult) but also gives you information about your connection to Twitter. For rate limit information, you can use the getRateLimitStatus method, which can inform you about the number of requests you can still make and the time until your limit resets.

The trait RateChecker below has a function checkAndWait that, when given a TwitterResponse object, checks whether the rate limit has been exceeded and waits if it has. When the rate is exceeded, it finds out how much time remains until the rate limit is reset and makes the thread sleep until that time (plus 10 seconds) has passed.

trait RateChecker {
  def checkAndWait(response: TwitterResponse, verbose: Boolean = false) {
    val rateLimitStatus = response.getRateLimitStatus
    if (verbose) println("RLS: " + rateLimitStatus)
    if (rateLimitStatus != null && rateLimitStatus.getRemaining == 0) {
      println("*** You hit your rate limit. ***")
      val waitTime = rateLimitStatus.getSecondsUntilReset + 10
      println("Waiting " + waitTime + " seconds (" + waitTime/60.0 + " minutes) for rate limit reset.")
      Thread.sleep(waitTime*1000)
    }
  }
}

Dealing with rate limits is actually more complex than this. For example, this strategy ignores the fact that different request types have different limits, but it keeps things simple. It is surely not an optimal solution, but it does the trick for present purposes.

Note also that you can directly ask for rate limit information from the twitter4j.Twitter instance itself, using its getRateLimitStatus method.
Unlike the result of the same method on a TwitterResponse, this gives a Map from various request types to the current rate limit status of each one. In a real application, you'd want to control each of these different limits at a more fine-grained level using this information.

Not all of the methods of Twitter4j classes actually hit the Twitter API. To see whether a given method does, look at its Javadoc: if its description says "This method calls http://api.twitter.com/1.1/some/method.json", then it hits the API; otherwise it doesn't, and you don't need to guard it.

Examples using the checkAndWait function are given below.

Creating a word cloud from followers' descriptions

Here's a more interesting task: given a Twitter user, compute the counts of the words in the descriptions given in the bios of their followers and build a word cloud from them. The following code does this, outputting the resulting counts in a file, the contents of which can be pasted into Wordle's advanced word cloud input.
object DescribeFollowers extends TwitterInstance with RateChecker {
  def main(args: Array[String]) {
    val screenName = args(0)
    val maxUsers = if (args.length == 2) args(1).toInt else 500
    val followerIds = twitter.getFollowersIDs(screenName, -1).getIDs

    val descriptions = followerIds.take(maxUsers).flatMap { id =>
      val user = twitter.showUser(id)
      checkAndWait(user)
      if (user.isProtected) None else Some(user.getDescription)
    }

    val tword = """(?i)[a-z#@]+""".r.pattern
    val words = descriptions.flatMap(_.toLowerCase.split("\\s+"))
    val filtered = words.filter(_.length > 3).filter(tword.matcher(_).matches)
    val counts = filtered.groupBy(x => x).mapValues(_.length)
    val rankedCounts = counts.toSeq.sortBy(- _._2)

    import java.io._
    val wordcountFile = "/tmp/follower_wordcount.txt"
    val writer = new BufferedWriter(new FileWriter(wordcountFile))
    for ((w, c) <- rankedCounts)
      writer.write(w + ":" + c + "\n")
    writer.flush
    writer.close
  }
}

The thing to consider is that if you point this at a person with several hundred followers, you will exceed the rate limit. The call to getFollowersIDs is a single hit, and then each call to showUser is a hit. Because the showUser calls come in rapid succession, we check the rate limit status after each one using checkAndWait (which is available because we mixed in the RateChecker trait), and it waits for the limit to reset as discussed previously, keeping us from exceeding the rate limit and getting an exception from Twitter.

The number of users returned by getFollowersIDs is at most 5000. If you run this on a user who has more followers, the followers beyond 5000 won't be considered. If you want to tackle such a user, you'll need to use the cursor, which is the integer provided as the second argument to getFollowersIDs, and make multiple calls while updating that cursor to get more.

Most of the rest of the code is just standard Scala stuff for getting the word counts and outputting them to a file.
Note that a small effort is made to remove non-alphabetic characters (while allowing # and @) and to filter out short words.

As an example of the output, when put into Wordle, here is the word cloud for my followers. This looks about right for me—completely expected in fact—but it is still cool that it comes out of my followers' self-descriptions. One could start thinking of some fun algorithms that exploit this kind of representation of a user to look into how well different users align (or don't) with their followers, to look for clusters of different types of followers, and so on.

Retweeting automatically

Tired of actually reading the tweets in your timeline and retweeting some of them? The following code gets some of the accounts the authenticating user follows, grabs twenty of those users, filters them to get interesting ones, and then takes up to 10 of the remaining ones and retweets their most recent statuses (provided they aren't replies to someone else).

object RetweetFriends extends TwitterInstance with RateChecker {
  def main(args: Array[String]) {
    val friendIds = twitter.getFriendsIDs(-1).getIDs
    val friends = friendIds.take(20).map { id =>
      val user = twitter.showUser(id)
      checkAndWait(user)
      user
    }

    val filtered = friends.filter(admissable)
    val ranked = filtered.map(f => (f.getFollowersCount, f)).sortBy(- _._1).map(_._2)

    ranked.take(10).foreach { friend =>
      val status = friend.getStatus
      if (status != null && status.getInReplyToStatusId == -1) {
        println("\nRetweeting " + friend.getName + ":\n" + status.getText)
        twitter.retweetStatus(status.getId)
        Thread.sleep(30000)
      }
    }
  }

  def admissable(user: User) = {
    val ratio = user.getFollowersCount.toDouble / user.getFriendsCount
    user.getFriendsCount < 1000 && ratio > 0.5
  }
}

The getFriendsIDs method is used to get the users that the authenticating user is following (but who do not necessarily follow the authenticating user, despite the use of the word "friend").
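The admissable heuristic is easy to sanity-check in isolation. The sketch below reproduces the same two conditions; MiniUser is a made-up stand-in for twitter4j.User, used here only so the arithmetic can be exercised without hitting the API.

```java
// Made-up stand-in for twitter4j.User, for illustration only.
class MiniUser {
    final int followersCount;
    final int friendsCount;
    MiniUser(int followers, int friends) {
        followersCount = followers;
        friendsCount = friends;
    }
}

class Admissable {
    // Same heuristic as the admissable method above: fewer than 1000
    // friends and a follower/friend ratio above 0.5.
    static boolean check(MiniUser u) {
        double ratio = (double) u.followersCount / u.friendsCount;
        return u.friendsCount < 1000 && ratio > 0.5;
    }
}
```

So an account following 2000 users is rejected outright, and an account following 900 users with only 100 followers is rejected for its low ratio.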
We again take care with the rate limiting while gathering the users. We then filter these users, looking for those who follow fewer than 1000 users and have a follower/friend ratio greater than 0.5, in a simple attempt to filter out less interesting (or spammy) accounts. The remaining users are ranked according to their number of followers (most first). Finally, we take up to 10 of these (take returns fewer if fewer are available), look at the most recent status of each, and if it is not null and isn't a reply to someone, we retweet it. Between retweets, we wait for 30 seconds so that anyone following our account doesn't get an avalanche of retweets.

Conclusion

This post and the related code should provide enough to give a decent feel for working with Twitter4j, including the necessary setup and some of the methods you can use to start creating applications with it in Scala. See project phase three of my Applied NLP course for exercises and code that take this further to do interesting things with automated bots, including mixing streaming access and user access to get more complex behaviors.

Reference: Using Twitter4j with Scala to perform user actions from our JCG partner Jason Baldridge at the Bcomposes blog.

ListenableFuture in Guava

ListenableFuture in Guava is an attempt to define a consistent API for Future objects to register completion callbacks. With the ability to add a callback when a Future completes, we can respond to incoming events asynchronously and effectively. If your application is highly concurrent with lots of Future objects, I strongly recommend using ListenableFuture whenever you can.

Technically, ListenableFuture extends the Future interface by adding one simple method:

void addListener(Runnable listener, Executor executor)

That's it. If you get hold of a ListenableFuture, you can register a Runnable to be executed immediately when the future in question completes. You must also supply an Executor (ExecutorService extends it) that will be used to execute your listener - so that long-running listeners do not occupy your worker threads.

Let's put that into action. We will start by refactoring our first example of a web crawler to use ListenableFuture. Fortunately, in the case of thread pools it's just a matter of wrapping them using MoreExecutors.listeningDecorator():

ListeningExecutorService pool = MoreExecutors.listeningDecorator(Executors.newFixedThreadPool(10));

for (final URL siteUrl : topSites) {
    final ListenableFuture<String> future = pool.submit(new Callable<String>() {
        @Override
        public String call() throws Exception {
            return IOUtils.toString(siteUrl, StandardCharsets.UTF_8);
        }
    });

    future.addListener(new Runnable() {
        @Override
        public void run() {
            try {
                final String contents = future.get();
                //...process web site contents
            } catch (InterruptedException e) {
                log.error("Interrupted", e);
            } catch (ExecutionException e) {
                log.error("Exception in task", e.getCause());
            }
        }
    }, MoreExecutors.sameThreadExecutor());
}

There are several interesting observations to make. First of all, notice how ListeningExecutorService wraps an existing Executor. This is similar to the ExecutorCompletionService approach. Later on we register a custom Runnable to be notified when each and every task finishes.
Secondly, notice how ugly error handling becomes: we have to handle InterruptedException (which should technically never happen, as the Future is already resolved and get() will never throw it) and ExecutionException. We haven't covered that yet, but Future<T> must somehow handle exceptions occurring during the asynchronous computation. Such exceptions are wrapped in an ExecutionException (hence the getCause() invocation during logging) thrown from get().

Finally, notice MoreExecutors.sameThreadExecutor() being used. It's a handy abstraction which you can use every time some API wants an Executor/ExecutorService (presumably a thread pool) while you are fine with using the current thread. This is especially useful during unit testing - even if your production code uses asynchronous tasks, during tests you can run everything from the same thread.

No matter how handy it is, the whole code seems a bit cluttered. Fortunately there is a simple utility method in the fantastic Futures class:

Futures.addCallback(future, new FutureCallback<String>() {
    @Override
    public void onSuccess(String contents) {
        //...process web site contents
    }

    @Override
    public void onFailure(Throwable throwable) {
        log.error("Exception in task", throwable);
    }
});

FutureCallback is a much simpler abstraction to work with: it resolves the future and does the exception handling for you. You can still supply a custom thread pool for the listeners if you want.

If you are stuck with some legacy API that still returns Future, you may try JdkFutureAdapters.listenInPoolThread(), an adapter converting a plain Future<V> to a ListenableFuture<V>. But keep in mind that once you start using addListener(), each such adapter will require one thread exclusively to work, so this solution doesn't scale at all and you should avoid it.

Future<String> future = //...
ListenableFuture<String> listenableFuture = JdkFutureAdapters.listenInPoolThread(future);

Once we understand the basics, we can dive into the biggest strength of listenable futures: transformations and chaining. This is advanced stuff; you have been warned.

Reference: ListenableFuture in Guava from our JCG partner Tomasz Nurkiewicz at the NoBlogDefFound blog.
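As a closing aside, the listener-registration contract is small enough to sketch without Guava on the classpath. ToyListenableFuture below is invented purely for illustration - Guava's real implementation handles memory visibility, cancellation, and exceptions far more carefully - but it shows the two cases addListener must cover: listeners registered before completion are remembered, and listeners registered after completion fire immediately on their supplied Executor.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executor;

// Runs the listener in the calling thread, in the spirit of
// MoreExecutors.sameThreadExecutor().
class SameThreadExecutor implements Executor {
    public void execute(Runnable command) { command.run(); }
}

// Toy stand-in for ListenableFuture, for illustration only.
class ToyListenableFuture<T> {
    private T result;
    private boolean done = false;
    private final List<Runnable> listeners = new ArrayList<>();
    private final List<Executor> executors = new ArrayList<>();

    synchronized void addListener(Runnable listener, Executor executor) {
        if (done) {
            executor.execute(listener);   // already complete: fire now
        } else {
            listeners.add(listener);      // remember for later
            executors.add(executor);
        }
    }

    synchronized void complete(T value) {
        result = value;
        done = true;
        for (int i = 0; i < listeners.size(); i++) {
            executors.get(i).execute(listeners.get(i));
        }
    }

    synchronized T get() { return result; }
}
```

The Executor parameter is the whole point of the design: the producer completing the future never pays for a slow listener, because the listener runs wherever its supplied Executor decides.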

Spring Data, MongoDB and JSF Integration tutorial

Introduction to the sample application (MongoShop Product Catalog)

After this tutorial, a sample application (MongoShop Product Catalog) with the following functional requirements will be built:

1. Searching for products with different criteria (e.g. sku, product type, title, etc.)
2. Creating a new product in a chosen category.
3. Editing the details of a selected product.
4. Deleting a selected product from the enquiry screen.

Presentation Layer: JSF is used as the presentation layer technology in this sample application. PrimeFaces is a lightweight component library for enhancing the JSF UI. Frontend interaction is controlled by JSF backing beans in this layer.

Service Layer: Spring-managed singleton service objects are used. Business services and application logic are written in this layer.

Data Layer: The Spring Data MongoDB component is used. It provides integration with the MongoDB document-oriented database, and supplies MongoTemplate so that MongoDB operations can be performed easily. Moreover, a Spring repository-style data access layer can easily be written with Spring Data MongoDB.

MongoDB schema design and data preparation

MongoDB Introduction

MongoDB is an open-source, scalable, high-performance NoSQL database. It is a document-oriented store that holds JSON-style documents with dynamic schemas. In this application, each product is stored as a JSON-style document in MongoDB.

Schema Design in MongoDB

Each product in the catalog contains general product information (e.g. sku, title, and product type), price details (e.g. retail and list price) and product sub-details (e.g. tracks of audio CDs / chapters of books). The schema design focuses more on the data usage, which is different from traditional RDBMS schema design.
The schema design in MongoDB is as follows.

Sample Data:

x = {
  sku: '1000001', type: 'Audio Album', title: 'A Love Supreme',
  description: 'by John Coltrane', publisher: 'Sony Music',
  pricing: { list: 1200, retail: 1100 },
  details: {
    title: 'A Love Supreme [Original Recording Reissued]',
    artist: 'John Coltrane',
    genre: 'Jazz',
    tracks: [ 'A Love Supreme Part I: Acknowledgement',
              'A Love Supreme Part II - Resolution',
              'A Love Supreme, Part III: Pursuance',
              'A Love Supreme, Part IV-Psalm' ]
  }
}

y = {
  sku: '1000002', type: 'Audio Album', title: 'Love Song',
  description: 'by Khali Fong', publisher: 'Sony Music',
  pricing: { list: 1000, retail: 1200 },
  details: {
    title: 'Long Song [Original Recording Reissued]',
    artist: 'Khali Fong',
    genre: 'R&B',
    tracks: [ 'Love Song', 'Spring Wind Blow', 'Red Bean', 'SingAlongSong' ]
  }
}

z = {
  sku: '1000003', type: 'Book', title: 'Node.js for PHP Developers',
  description: 'by Owen Peter', publisher: 'OReilly Media',
  pricing: { list: 2500, retail: 2100 },
  details: {
    title: 'Node.js for PHP Developers',
    author: 'Mark Owen',
    genre: 'Technology',
    chapters: [ 'Introduction to Node', 'Server-side JS', 'PHP API', 'Example' ]
  }
}

Sample queries to add the data:

db.product.save(x);
db.product.save(y);
db.product.save(z);

Sample queries to test the sample data:

db.product.find({'sku':'1000001'});
db.product.find({'type':'Audio Album'});
db.product.find({'type':'Audio Album', 'details.genre': 'Jazz'});

JSF (PrimeFaces) and Spring Data MongoDB Integration

pom.xml of the project:

<project xmlns='http://maven.apache.org/POM/4.0.0' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
         xsi:schemaLocation='http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd'>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.borislam</groupId>
  <artifactId>mongoShop</artifactId>
  <packaging>war</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>MongoShop Webapp</name>
  <url>http://maven.apache.org</url>
  <dependencies>
    <dependency>
      <groupId>org.jboss.el</groupId>
      <artifactId>com.springsource.org.jboss.el</artifactId>
      <version>2.0.0.GA</version>
    </dependency>
    <dependency>
      <groupId>org.primefaces.themes</groupId>
      <artifactId>all-themes</artifactId>
      <version>1.0.9</version>
    </dependency>
    <dependency>
      <groupId>org.primefaces</groupId>
      <artifactId>primefaces</artifactId>
      <version>3.4.2</version>
    </dependency>
    <dependency>
      <groupId>commons-beanutils</groupId>
      <artifactId>commons-beanutils</artifactId>
      <version>1.8.3</version>
    </dependency>
    <dependency>
      <groupId>commons-codec</groupId>
      <artifactId>commons-codec</artifactId>
      <version>1.3</version>
    </dependency>
    <dependency>
      <groupId>org.apache.directory.studio</groupId>
      <artifactId>org.apache.commons.lang</artifactId>
      <version>2.6</version>
    </dependency>
    <dependency>
      <groupId>commons-digester</groupId>
      <artifactId>commons-digester</artifactId>
      <version>1.8</version>
    </dependency>
    <dependency>
      <groupId>commons-collections</groupId>
      <artifactId>commons-collections</artifactId>
      <version>3.2</version>
    </dependency>
    <dependency>
      <groupId>org.apache.myfaces.core</groupId>
      <artifactId>myfaces-api</artifactId>
      <version>2.1.9</version>
    </dependency>
    <dependency>
      <groupId>org.apache.myfaces.core</groupId>
      <artifactId>myfaces-impl</artifactId>
      <version>2.1.9</version>
    </dependency>
    <dependency>
      <groupId>org.mongodb</groupId>
      <artifactId>mongo-java-driver</artifactId>
      <version>2.10.1</version>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.springframework.data</groupId>
      <artifactId>spring-data-mongodb</artifactId>
      <version>1.0.3.RELEASE</version>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-context</artifactId>
      <version>3.2.0.RELEASE</version>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-web</artifactId>
      <version>3.2.0.RELEASE</version>
    </dependency>
  </dependencies>
  <repositories>
    <repository>
      <id>java.net</id>
      <url>https://maven.java.net/content/repositories/public/</url>
    </repository>
    <repository>
      <id>prime-repo</id>
      <name>PrimeFaces Maven Repository</name>
      <url>http://repository.primefaces.org</url>
      <layout>default</layout>
    </repository>
    <repository>
      <id>com.springsource.repository.bundles.release</id>
      <name>SpringSource Enterprise Bundle Repository - SpringSource Releases</name>
      <url>http://repository.springsource.com/maven/bundles/release</url>
    </repository>
    <repository>
      <id>com.springsource.repository.bundles.external</id>
      <name>SpringSource Enterprise Bundle Repository - External Releases</name>
      <url>http://repository.springsource.com/maven/bundles/external</url>
    </repository>
    <repository>
      <releases>
        <enabled>false</enabled>
      </releases>
      <snapshots>
        <enabled>true</enabled>
      </snapshots>
      <id>apache.snapshots</id>
      <name>Apache Snapshot Repository</name>
      <url>https://repository.apache.org/content/repositories/snapshots</url>
    </repository>
    <repository>
      <id>jboss-deprecated-repository</id>
      <name>JBoss Deprecated Maven Repository</name>
      <url>https://repository.jboss.org/nexus/content/repositories/deprecated/</url>
      <layout>default</layout>
      <releases>
        <enabled>true</enabled>
        <updatePolicy>never</updatePolicy>
      </releases>
      <snapshots>
        <enabled>false</enabled>
        <updatePolicy>never</updatePolicy>
      </snapshots>
    </repository>
  </repositories>
  <build>
    <finalName>mongoShop</finalName>
  </build>
</project>

MyFaces

MyFaces is used as the JSF implementation in this application. The necessary configuration is included in the full web.xml shown below.

PrimeFaces Theme

As mentioned before, the PrimeFaces library is used to enhance the UI. There is almost no configuration required for this library. PrimeFaces provides many pre-designed themes for your web application. In our case, we use the "glass-x" theme.
We just add the following setting in web.xml:

<context-param>
  <param-name>primefaces.THEME</param-name>
  <param-value>glass-x</param-value>
</context-param>

JSF and Spring Integration

To integrate JSF with Spring, you have to specify the SpringBeanFacesELResolver in faces-config.xml.

faces-config.xml:

<?xml version='1.0' encoding='UTF-8'?>
<faces-config version='2.0' xmlns='http://java.sun.com/xml/ns/javaee'
    xmlns:xi='http://www.w3.org/2001/XInclude'
    xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
    xsi:schemaLocation='http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-facesconfig_2_0.xsd'>
  <application>
    <el-resolver>org.springframework.web.jsf.el.SpringBeanFacesELResolver</el-resolver>
  </application>
  <factory>
    <partial-view-context-factory>org.primefaces.context.PrimePartialViewContextFactory</partial-view-context-factory>
  </factory>
</faces-config>

Full web.xml:

<?xml version='1.0' encoding='UTF-8'?>
<web-app id='WebApp_ID' version='3.0' xmlns='http://java.sun.com/xml/ns/javaee'
    xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
    xsi:schemaLocation='http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd'>
  <context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>WEB-INF/spring-application-context.xml</param-value>
  </context-param>
  <context-param>
    <param-name>errorPageUrl</param-name>
    <param-value>/pages/systemError.do</param-value>
  </context-param>
  <context-param>
    <param-name>facelets.DEVELOPMENT</param-name>
    <param-value>false</param-value>
  </context-param>
  <context-param>
    <param-name>facelets.REFRESH_PERIOD</param-name>
    <param-value>2</param-value>
  </context-param>
  <context-param>
    <param-name>javax.faces.STATE_SAVING_METHOD</param-name>
    <param-value>client</param-value>
  </context-param>
  <context-param>
    <param-name>javax.servlet.jsp.jstl.fmt.localizationContext</param-name>
    <param-value>resources.application</param-value>
  </context-param>
  <context-param>
    <param-name>org.apache.myfaces.ALLOW_JAVASCRIPT</param-name>
    <param-value>true</param-value>
  </context-param>
  <context-param>
    <param-name>org.apache.myfaces.AUTO_SCROLL</param-name>
    <param-value>false</param-value>
  </context-param>
  <context-param>
    <param-name>org.apache.myfaces.DETECT_JAVASCRIPT</param-name>
    <param-value>false</param-value>
  </context-param>
  <context-param>
    <param-name>org.apache.myfaces.ERROR_HANDLING</param-name>
    <param-value>false</param-value>
  </context-param>
  <context-param>
    <param-name>org.apache.myfaces.EXPRESSION_FACTORY</param-name>
    <param-value>org.jboss.el.ExpressionFactoryImpl</param-value>
  </context-param>
  <context-param>
    <param-name>org.apache.myfaces.PRETTY_HTML</param-name>
    <param-value>false</param-value>
  </context-param>
  <context-param>
    <param-name>primefaces.THEME</param-name>
    <param-value>glass-x</param-value>
  </context-param>
  <servlet>
    <servlet-name>Faces Servlet</servlet-name>
    <servlet-class>org.apache.myfaces.webapp.MyFacesServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
  </servlet>
  <servlet-mapping>
    <servlet-name>Faces Servlet</servlet-name>
    <url-pattern>*.jsf</url-pattern>
  </servlet-mapping>
  <listener>
    <listener-class>org.apache.myfaces.webapp.StartupServletContextListener</listener-class>
  </listener>
  <listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
  </listener>
</web-app>

MongoDB connection details

In order to connect to MongoDB, you have to register a MongoDbFactory instance in XML.
The connection details are specified in spring-application-context.xml spring-application-context.xml <?xml version='1.0' encoding='UTF-8'?> <beans xmlns='http://www.springframework.org/schema/beans' xmlns:context='http://www.springframework.org/schema/context' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xmlns:util='http://www.springframework.org/schema/util' xmlns:mongo='http://www.springframework.org/schema/data/mongo' xsi:schemaLocation='http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.2.xsd http://www.springframework.org/schema/data/mongo http://www.springframework.org/schema/data/mongo/spring-mongo.xsd http://www.springframework.org/schema/data/repository http://www.springframework.org/schema/data/repository/spring-repository.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.2.xsd http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-3.2.xsd'><context:annotation-config/> <context:component-scan base-package='com.borislam'/><mongo:mongo host='localhost' port='27017'> <mongo:options connections-per-host='5' connect-timeout='30000' max-wait-time='10000' write-number='1' write-timeout='0' write-fsync='true'/> </mongo:mongo><mongo:db-factory dbname='test' mongo-ref='mongo'/><mongo:repositories base-package='com.borislam.repository' /><bean id='mongoTemplate' class='org.springframework.data.mongodb.core.MongoTemplate'> <constructor-arg ref='mongo'/><constructor-arg name='databaseName' value='test'/></bean> </beans> Enquiry data with Spring Data repository and MongoTemplate Spring Data Repository: The Spring Data repository abstraction reduces the boilerplate code needed to write the data access layer of the application. Automatic implementation of repository interfaces provides simple operations on MongoDB. 
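To make the mapping concrete, a product document in the test database has roughly the following shape (the field names mirror the Product, Pricing and Detail classes shown below; the sample values themselves are illustrative assumptions, not data from the article):

```json
{
  "sku" : "00e8da9b",
  "type" : "Audio Album",
  "title" : "A Love Supreme",
  "description" : "by John Coltrane",
  "publisher" : "Sony Music",
  "pricing" : { "list" : 1200, "retail" : 1100 },
  "details" : {
    "title" : "A Love Supreme [Original Recording Reissued]",
    "artist" : "John Coltrane",
    "genre" : "Jazz",
    "tracks" : [ "Part I: Acknowledgement", "Part II: Resolution" ]
  }
}
```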
The repository handles saving and deleting our products. MongoTemplate: MongoTemplate offers convenience operations to create, update, delete and query for MongoDB documents and provides a mapping between your domain objects and MongoDB documents. In our application, since the Spring Data repository cannot fulfill the requirements of the search function, we use MongoTemplate to achieve the searching capability. Customizing Spring Data Repository: Since the product search cannot be easily implemented with the Spring Data repository abstraction, we implement the multi-criteria product search with MongoTemplate. To enrich the Spring Data repository with MongoTemplate, we customize the repository as follows: ProductRepository.java package com.borislam.repository;import java.util.List; import org.springframework.data.repository.PagingAndSortingRepository; import com.borislam.domain.Product;public interface ProductRepository extends PagingAndSortingRepository<Product, String>, ProductRepostitoryCustom{List<Product> findByType(String type); List<Product> findByTypeAndTitle(String type, String title); Product findBySku(String sku); } ProductRepositoryCustom.java package com.borislam.repository;import java.util.List; import com.borislam.domain.Product; import com.borislam.view.ProductSearchCriteria;public interface ProductRepostitoryCustom { public List<Product> searchByCriteria(ProductSearchCriteria criteria);} ProductRepositoryImpl.java package com.borislam.repository.impl;import java.util.List; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.data.mongodb.core.MongoTemplate; import org.springframework.data.mongodb.core.query.Criteria; import org.springframework.data.mongodb.core.query.Query; import org.springframework.util.StringUtils; import com.borislam.domain.Product; import com.borislam.repository.ProductRepostitoryCustom; import com.borislam.view.ProductSearchCriteria;public class ProductRepositoryImpl 
implements ProductRepostitoryCustom{@Autowired private MongoTemplate mongoTemplate;@Override public List<Product> searchByCriteria(ProductSearchCriteria criteria) { Query query = new Query(); if (StringUtils.hasText(criteria.getSku())) { Criteria c = Criteria.where("sku").is(criteria.getSku()); query.addCriteria(c); } if (StringUtils.hasText(criteria.getTitle())) { Criteria c = Criteria.where("title").regex(".*" + criteria.getTitle() + ".*", "i"); query.addCriteria(c); } if (StringUtils.hasText(criteria.getDescription())) { Criteria c = Criteria.where("description").regex(".*" + criteria.getDescription() + ".*", "i"); query.addCriteria(c); } if (StringUtils.hasText(criteria.getProductType())) { Criteria c = Criteria.where("type").is(criteria.getProductType()); query.addCriteria(c); } if (StringUtils.hasText(criteria.getTrack())) { Criteria c = Criteria.where("details.tracks").regex(".*" + criteria.getTrack() + ".*", "i"); query.addCriteria(c); } if (StringUtils.hasText(criteria.getChapter())) { Criteria c = Criteria.where("details.chapters").regex(".*" + criteria.getChapter() + ".*", "i"); query.addCriteria(c); } return mongoTemplate.find(query, Product.class); }} Data Model:Product.java package com.borislam.domain;public class Product { private String id; private String sku; private String type; private String title; private String description; private String publisher; private Pricing pricing; private Detail details; public String getId() { return id; } public void setId(String id) { this.id = id; } public String getSku() { return sku; } public void setSku(String sku) { this.sku = sku; } public String getType() { return type; } public void setType(String type) { this.type = type; } public String getTitle() { return title; } public void setTitle(String title) { this.title = title; } public String getDescription() { return description; } public void setDescription(String description) { this.description = description; } public String getPublisher() { return 
publisher; } public void setPublisher(String publisher) { this.publisher = publisher; } public Pricing getPricing() { return pricing; } public void setPricing(Pricing pricing) { this.pricing = pricing; } public Detail getDetails() { return details; } public void setDetails(Detail details) { this.details = details; } } Pricing.java package com.borislam.domain;public class Pricing { private String id; private double list; private double retail; public String getId() { return id; } public void setId(String id) { this.id = id; } public double getList() { return list; } public void setList(double list) { this.list = list; } public double getRetail() { return retail; } public void setRetail(double retail) { this.retail = retail; } public Pricing(double list, double retail) { super(); this.list = list; this.retail = retail; }} Detail.java package com.borislam.domain;import java.util.List;public class Detail { private String id; private String title; private String author; private String artist; private String genre; private List<String> pic; private List<String> chapters; private List<String> tracks; public String getTitle() { return title; } public void setTitle(String title) { this.title = title; } public String getAuthor() { return author; } public void setAuthor(String author) { this.author = author; } public String getGenre() { return genre; } public void setGenre(String genre) { this.genre = genre; } public List<String> getPic() { return pic; } public void setPic(List<String> pic) { this.pic = pic; } public List<String> getChapters() { return chapters; } public void setChapters(List<String> chapters) { this.chapters = chapters; } public String getId() { return id; } public void setId(String id) { this.id = id; } public String getArtist() { return artist; } public void setArtist(String artist) { this.artist = artist; } public List<String> getTracks() { return tracks; } public void setTracks(List<String> tracks) { this.tracks = tracks; }} JSF Part:common.xhtml <html 
xmlns='http://www.w3.org/1999/xhtml' xmlns:h='http://java.sun.com/jsf/html' xmlns:f='http://java.sun.com/jsf/core' xmlns:ui='http://java.sun.com/jsf/facelets' xmlns:p='http://primefaces.org/ui'><f:view contentType='text/html'><h:head><f:facet name='first'> <meta http-equiv='Content-Type' content='text/html; charset=utf-8'/> <title><ui:insert name='pageTitle'>Page Title</ui:insert></title> <ui:insert name='head' /> </f:facet></h:head><h:body> <div style='margin:auto;width:1024px;'> <div id='header' class='ui-widget' > <div id='logo' style='border:1px solid #acbece; border-bottom: none; '> <p:graphicImage value='/resources/image/mongoshopheader.jpg'/></div> <div id='logo' style='border:1px solid #acbece;'> <p:menubar style='border:none'><p:menuitem value='Search' url='/search.jsf' icon='ui-icon-search' /><p:menuitem value='New Product' url='/detail.jsf' icon='ui-icon-document' /></p:menubar></div> </div> <div id='page' class='ui-widget' style='overflow:hidden;'> <div id='content' style='display:block'> <ui:insert name='content'>...</ui:insert> </div> </div> </div> </h:body></f:view> </html> Search.xhml <html xmlns='http://www.w3.org/1999/xhtml' xmlns:ui='http://java.sun.com/jsf/facelets' xmlns:h='http://java.sun.com/jsf/html' xmlns:f='http://java.sun.com/jsf/core' xmlns:p='http://primefaces.org/ui'><ui:composition template='/template/common.xhtml'><ui:define name='pageTitle'> <h:outputText value='Product Search' /> </ui:define><ui:define name='content'> <h:form id='searchForm'> <p:growl id='mainGrowl' sticky='true' /><p:panelGrid style='width:1024px'><f:facet name='header'> <p:row><p:column colspan='4'>Product Search </p:column></p:row></f:facet> <p:row><p:column><h:outputLabel for='sku' value='sku: ' /> </p:column><p:column><p:inputText id='sku' value='#{productSearchBean.criteria.sku}' /></p:column><p:column><h:outputLabel for='productType' value='Product Type: ' /> </p:column><p:column><p:selectOneMenu id='productType' label='Type' 
value='#{productSearchBean.criteria.productType}' ><f:selectItem itemLabel='Select One' itemValue='' /> <f:selectItem itemLabel='Audio Album' itemValue='Audio Album' /> <f:selectItem itemLabel='Book' itemValue='Book' /> </p:selectOneMenu></p:column></p:row><p:row><p:column><h:outputLabel for='title' value='Title: ' /> </p:column><p:column><p:inputText id='title' value='#{productSearchBean.criteria.title}' /></p:column><p:column><h:outputLabel for='description' value='Description: ' /> </p:column><p:column><p:inputText id='description' value='#{productSearchBean.criteria.description}' /></p:column></p:row><p:row><p:column><h:outputLabel for='track' value='Track: ' /> </p:column><p:column><p:inputText id='track' value='#{productSearchBean.criteria.track}' /></p:column><p:column><h:outputLabel for='chapter' value='Chapter: ' /> </p:column><p:column><p:inputText id='chapter' value='#{productSearchBean.criteria.chapter}' /></p:column></p:row></p:panelGrid><p:commandButton value='search' icon='ui-icon-search' actionListener='#{productSearchBean.doSearch}' update='dataTable'/><hr/><p:dataTable id='dataTable' var='prod' value='#{productSearchBean.productList}' paginator='true' rows='10'><p:column><f:facet name='header'> <h:outputText value='Sku' /> </f:facet> <h:outputText value='#{prod.sku}' /> </p:column><p:column><f:facet name='header'> <h:outputText value='Type' /> </f:facet> <h:outputText value='#{prod.type}' /> </p:column><p:column><f:facet name='header'> <h:outputText value='Title' /> </f:facet> <h:outputText value='#{prod.title}' /> </p:column><p:column><f:facet name='header'> <h:outputText value='Publisher' /> </f:facet> <h:outputText value='#{prod.publisher}' /> </p:column><p:column><f:facet name='header'> <h:outputText value='Artist' /> </f:facet> <h:outputText value='#{prod.details.artist}' /> </p:column><p:column><f:facet name='header'> <h:outputText value='Author' /> </f:facet> <h:outputText value='#{prod.details.author}' /> </p:column></p:dataTable></h:form> 
</ui:define> </ui:composition> </html> ProductSearchCriteria.java package com.borislam.view;public class ProductSearchCriteria { private String sku; private String description; private String productType; private String track; private String chapter; private String title; public String getSku() { return sku; } public void setSku(String sku) { this.sku = sku; } public String getDescription() { return description; } public void setDescription(String description) { this.description = description; } public String getProductType() { return productType; } public void setProductType(String productType) { this.productType = productType; } public String getTrack() { return track; } public void setTrack(String track) { this.track = track; } public String getTitle() { return title; } public void setTitle(String title) { this.title = title; } public String getChapter() { return chapter; } public void setChapter(String chapter) { this.chapter = chapter; }} ProductSearchBean.java package com.borislam.view;import java.util.List; import javax.faces.application.FacesMessage; import javax.faces.context.FacesContext; import javax.faces.event.ActionEvent; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.context.annotation.Scope; import org.springframework.dao.DataAccessException; import org.springframework.stereotype.Component; import com.borislam.domain.Product; import com.borislam.service.ProductService;@Component @Scope("session") public class ProductSearchBean {private Product selectedProduct;private ProductSearchCriteria criteria = new ProductSearchCriteria();private List<Product> productList;public Product getSelectedProduct() { return selectedProduct; }public void setSelectedProduct(Product selectedProduct) { this.selectedProduct = selectedProduct; }public List<Product> getProductList() { return productList; }public void setProductList(List<Product> productList) { this.productList = productList; }public ProductSearchCriteria 
getCriteria() { return criteria; }public void setCriteria(ProductSearchCriteria criteria) { this.criteria = criteria; } @Autowired private ProductService productService;public void doSearch(ActionEvent event){ productList = productService.searchByCriteria(criteria); } } Service Layer:ProductService.java package com.borislam.service;import java.util.List; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service; import com.borislam.domain.Product; import com.borislam.repository.ProductRepository; import com.borislam.view.ProductSearchCriteria;@Service public class ProductService {@Autowired private ProductRepository productRepository;public List<Product> searchByCriteria(ProductSearchCriteria criteria){ return productRepository.searchByCriteria(criteria); }public Product getProduct(String sku) { return productRepository.findBySku(sku); } } Create, Edit and Delete data with Spring data repository In the last part of this tutorial, we will add create, edit and delete functions to the MongoShop Product Catalog application. The search page is modified. 
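Before turning to the page changes, it helps to see what the new repository calls boil down to at the database level. The save and delete operations issued through the service layer correspond roughly to the following MongoDB shell commands (the collection name product is Spring Data's default mapping for the Product class; the document values and the id are illustrative assumptions):

```
// insert, or update if a document with the same _id already exists
db.product.save({ "sku" : "0ab42f88", "type" : "Book", "title" : "Some Book Title" })

// remove the document whose _id matches the deleted entity's id
db.product.remove({ "_id" : "0ab42f88" })
```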
A modal confirm dialogue box is added before the product is physically deleted updated search.xhtml <html xmlns='http://www.w3.org/1999/xhtml' xmlns:ui='http://java.sun.com/jsf/facelets' xmlns:h='http://java.sun.com/jsf/html' xmlns:f='http://java.sun.com/jsf/core' xmlns:p='http://primefaces.org/ui'><ui:composition template='/template/common.xhtml'><ui:define name='pageTitle'> <h:outputText value='Product Search' /> </ui:define><ui:define name='content'> <h:form id='searchForm'> <p:growl id='mainGrowl' sticky='true' /> <p:panelGrid style='width:1024px'> <f:facet name='header'> <p:row> <p:column colspan='4'> Product Search </p:column> </p:row> </f:facet> <p:row> <p:column> <h:outputLabel for='sku' value='sku: ' /> </p:column> <p:column> <p:inputText id='sku' value='#{productSearchBean.criteria.sku}' /> </p:column> <p:column> <h:outputLabel for='productType' value='Product Type: ' /> </p:column> <p:column> <p:selectOneMenu id='productType' label='Type' value='#{productSearchBean.criteria.productType}' > <f:selectItem itemLabel='Select One' itemValue='' /> <f:selectItem itemLabel='Audio Album' itemValue='Audio Album' /> <f:selectItem itemLabel='Book' itemValue='Book' /> </p:selectOneMenu> </p:column> </p:row> <p:row> <p:column> <h:outputLabel for='title' value='Title: ' /> </p:column> <p:column> <p:inputText id='title' value='#{productSearchBean.criteria.title}' /> </p:column> <p:column> <h:outputLabel for='description' value='Description: ' /> </p:column> <p:column> <p:inputText id='description' value='#{productSearchBean.criteria.description}' /> </p:column> </p:row><p:row> <p:column> <h:outputLabel for='track' value='Track: ' /> </p:column> <p:column> <p:inputText id='track' value='#{productSearchBean.criteria.track}' /> </p:column> <p:column> <h:outputLabel for='chapter' value='Chapter: ' /> </p:column> <p:column> <p:inputText id='chapter' value='#{productSearchBean.criteria.chapter}' /> </p:column> </p:row></p:panelGrid> <p:commandButton value='search' 
icon='ui-icon-search' actionListener='#{productSearchBean.doSearch}' update='dataTable'/> <hr/><p:dataTable id='dataTable' var='prod' value='#{productSearchBean.productList}' paginator='true' rows='10'><p:column> <f:facet name='header'> <h:outputText value='Sku' /> </f:facet> <h:outputText value='#{prod.sku}' /> </p:column><p:column> <f:facet name='header'> <h:outputText value='Type' /> </f:facet> <h:outputText value='#{prod.type}' /> </p:column><p:column> <f:facet name='header'> <h:outputText value='Title' /> </f:facet> <h:outputText value='#{prod.title}' /> </p:column><p:column> <f:facet name='header'> <h:outputText value='Publisher' /> </f:facet> <h:outputText value='#{prod.publisher}' /> </p:column><p:column> <f:facet name='header'> <h:outputText value='Artist' /> </f:facet> <h:outputText value='#{prod.details.artist}' /> </p:column><p:column> <f:facet name='header'> <h:outputText value='Author' /> </f:facet> <h:outputText value='#{prod.details.author}' /> </p:column><p:column> <f:facet name='header'> <h:outputText value='Edit' /> </f:facet> <p:commandButton value='Edit' action='#{productSearchBean.doEditDetail}' ajax='false'> <f:setPropertyActionListener target='#{productSearchBean.selectedProduct}' value='#{prod}' /> </p:commandButton> </p:column><p:column> <f:facet name='header'> <h:outputText value='Delete' /> </f:facet><p:commandButton id='showDialogButton' value='Delete' oncomplete='confirmation.show()' ajax='true' update=':searchForm:confirmDialog'> <f:setPropertyActionListener target='#{productSearchBean.selectedProduct}' value='#{prod}' /> </p:commandButton></p:column></p:dataTable><p:confirmDialog id='confirmDialog' message='Are you sure to delete this product (#{productSearchBean.selectedProduct.sku})?' 
header='Delete Product' severity='alert' widgetVar='confirmation'><p:commandButton id='confirm' value='Yes' update='mainGrowl' oncomplete='confirmation.hide()' actionListener='#{productSearchBean.doDelete}' /> <p:commandButton id='decline' value='No' onclick='confirmation.hide()' type='button' /></p:confirmDialog></h:form> </ui:define> </ui:composition> </html> updated ProductSearchBean.java package com.borislam.view;import java.util.List;import javax.faces.application.FacesMessage; import javax.faces.context.FacesContext; import javax.faces.event.ActionEvent; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.context.annotation.Scope; import org.springframework.dao.DataAccessException; import org.springframework.stereotype.Component; import com.borislam.domain.Product; import com.borislam.service.ProductService;@Component @Scope('session') public class ProductSearchBean {private Product selectedProduct;private ProductSearchCriteria criteria = new ProductSearchCriteria();private List<Product> productList;public Product getSelectedProduct() { return selectedProduct; }public void setSelectedProduct(Product selectedProduct) { this.selectedProduct = selectedProduct; }public List<Product> getProductList() { return productList; }public void setProductList(List<Product> productList) { this.productList = productList; }public ProductSearchCriteria getCriteria() { return criteria; }public void setCriteria(ProductSearchCriteria criteria) { this.criteria = criteria; } @Autowired private ProductService productService;public void doSearch(ActionEvent event){ productList= productService.searchByCriteria(criteria); }public String doEditDetail() { (FacesContext.getCurrentInstance().getExternalContext().getFlash()).put('selected', selectedProduct); return 'detail.xhtml'; }public void doDelete(ActionEvent event){ try { productService.deleteProduct(selectedProduct);FacesContext context = FacesContext.getCurrentInstance(); context.addMessage(null, 
new FacesMessage('Delete Successfully!')); } catch (DataAccessException e ) { FacesContext context = FacesContext.getCurrentInstance(); context.addMessage(null, new FacesMessage(FacesMessage.SEVERITY_ERROR,'Error when deleting product!',null)); }} } A product detail page is added to view the produc details. Creation and edition of product is done in the product detail page. detail.xhtml <html xmlns='http://www.w3.org/1999/xhtml' xmlns:ui='http://java.sun.com/jsf/facelets' xmlns:h='http://java.sun.com/jsf/html' xmlns:f='http://java.sun.com/jsf/core' xmlns:p='http://primefaces.org/ui'><ui:composition template='/template/common.xhtml'><ui:define name='pageTitle'> <h:outputText value='Product Search' /> </ui:define><ui:define name='content'> <f:event listener='#{productDetailBean.initProduct}' type='preRenderView' /><h:form id='mainForm'> <p:growl id='mainGrowl' sticky='true' /> <p:panelGrid style='width:1024px'> <f:facet name='header'> <p:row> <p:column colspan='2'> Product Details </p:column> </p:row> </f:facet> <p:row> <p:column> <h:outputLabel for='sku' value='sku: *' /> </p:column> <p:column> <p:inputText id='sku' required='true' value='#{productDetailBean.product.sku}' label='Sku' rendered='#{productDetailBean.newProduct}'/> <h:outputText value='#{productDetailBean.product.sku}' label='Sku' rendered='#{not productDetailBean.newProduct}'/> </p:column> </p:row> <p:row> <p:column> <h:outputLabel for='type' value='Type *' /> </p:column> <p:column> <p:selectOneMenu id='type' required='true' label='Type' valueChangeListener='#{productDetailBean.clearDetails}' value='#{productDetailBean.product.type}' > <f:selectItem itemLabel='Select One' itemValue='' /> <f:selectItem itemLabel='Audio Album' itemValue='Audio Album' /> <f:selectItem itemLabel='Book' itemValue='Book' /> <f:ajax render='buttonPanel trackPanel chapterPanel'/> </p:selectOneMenu> </p:column> </p:row> <p:row> <p:column> <h:outputLabel for='title' value='Title: *' /> </p:column> <p:column> <p:inputText 
id='title' required='true' value='#{productDetailBean.product.title}' label='Title' /> </p:column> </p:row> <p:row> <p:column> <h:outputLabel for='description' value='Description: *' /> </p:column> <p:column> <p:inputText id='description' required='true' value='#{productDetailBean.product.description}' label='Description' /> </p:column> </p:row> <p:row> <p:column> <h:outputLabel for='publisher' value='Publisher: *' /> </p:column> <p:column> <p:inputText id='publisher' required='true' value='#{productDetailBean.product.publisher}' label='Publisher' /> </p:column> </p:row><p:row> <p:column> <h:outputLabel for='artist' value='Artist: ' /> </p:column> <p:column> <p:inputText id='artist' value='#{productDetailBean.product.details.artist}' label='Artist' /> </p:column> </p:row><p:row> <p:column> <h:outputLabel for='listPrice' value='List Price: ' /> </p:column> <p:column> <p:inputText id='listPrice' required='true' value='#{productDetailBean.product.pricing.list}' label='List Price' /> </p:column> </p:row><p:row> <p:column> <h:outputLabel for='retailPrice' value='Retail Price: ' /> </p:column> <p:column> <p:inputText id='retailPrice' required='true' value='#{productDetailBean.product.pricing.retail}' label='REtail Price' /> </p:column> </p:row><p:row> <p:column> <h:outputLabel for='author' value='Author: ' /> </p:column> <p:column> <p:inputText id='author' value='#{productDetailBean.product.details.author}' label='Author' /> </p:column> </p:row><p:row> <p:column> <h:outputLabel for='genre' value='Genre: *' /> </p:column> <p:column> <p:inputText id='genre' required='true' value='#{productDetailBean.product.details.genre}' label='Genre' /> </p:column> </p:row><p:row> <p:column colspan='2' styleClass='ui-widget-header'> <p:outputPanel id='buttonPanel'> <p:commandButton value='Add Tracks' onclick='addTrackDlg.show();' type='button' rendered='#{productDetailBean.product.type == 'Audio Album'}'/> <p:commandButton value='Add Chapters' onclick='addChapterDlg.show();' 
type='button' rendered='#{productDetailBean.product.type == 'Book'}'/> </p:outputPanel> </p:column> </p:row><p:row> <p:column colspan='2' > <p:outputPanel id='trackPanel' > <p:dataList value='#{productDetailBean.product.details.tracks}' var='track' type='ordered' rendered='#{productDetailBean.product.details.tracks.size() > 0}'> #{track} </p:dataList> </p:outputPanel> <p:outputPanel id='chapterPanel' > <p:dataList value='#{productDetailBean.product.details.chapters}' var='chapter' type='ordered' rendered='#{productDetailBean.product.details.chapters.size() > 0}'> #{chapter} </p:dataList> </p:outputPanel></p:column> </p:row><f:facet name='footer'> <p:row> <p:column colspan='2'> <p:commandButton value='Save' icon='ui-icon-disk' actionListener='#{productDetailBean.doSave}' update='mainGrowl' /> <p:button value='Back to Search' icon='ui-icon-back' outcome='search.xhtml' /> </p:column> </p:row> </f:facet> </p:panelGrid></h:form><h:form> <p:growl id='trackGrowl' sticky='true' /> <p:dialog id='addTrackDlg' header='Adding Tracks for the product' widgetVar='addTrackDlg' modal='true' height='100' width='450' resizable='false'> <h:outputLabel for='track' value='Track: ' /> <p:inputText id='track' required='true' value='#{productDetailBean.newTrack}' label='Track' /> <p:commandButton value='Add' actionListener='#{productDetailBean.doAddTracks}' icon='ui-icon-check' update='trackGrowl, :mainForm:trackPanel' oncomplete='addTrackDlg.hide()'/> </p:dialog> </h:form><h:form> <p:growl id='chapterGrowl' sticky='true' /> <p:dialog id='addChapterDlg' header='Adding Chapters for the product' widgetVar='addChapterDlg' modal='true' height='100' width='450' resizable='false'> <h:outputLabel for='chapter' value='Chapter: ' /> <p:inputText id='chapter' required='true' value='#{productDetailBean.newChapter}' label='Chapter' /> <p:commandButton value='Add' actionListener='#{productDetailBean.doAddChapters}' icon='ui-icon-check' update='chapterGrowl, :mainForm:chapterPanel' 
oncomplete='addChapterDlg.hide()'/></p:dialog> </h:form> </ui:define> </ui:composition> </html> ProductDetailsBean.java package com.borislam.view;import java.util.ArrayList; import java.util.List;import javax.faces.application.FacesMessage; import javax.faces.context.FacesContext; import javax.faces.event.ActionEvent; import javax.faces.event.ValueChangeEvent;import org.springframework.beans.factory.annotation.Autowired; import org.springframework.context.annotation.Scope; import org.springframework.stereotype.Component; import org.springframework.util.CollectionUtils; import org.springframework.dao.DataAccessException;import com.borislam.domain.Detail; import com.borislam.domain.Pricing; import com.borislam.domain.Product; import com.borislam.service.ProductService;@Component @Scope('session') public class ProductDetailBean {@Autowired private ProductService productService; private boolean newProduct; private Product product; private String newTrack; private String newChapter;public boolean isNewProduct() { return newProduct; }public void setNewProduct(boolean newProduct) { this.newProduct = newProduct; }public Product getProduct() { return product; }public void setProduct(Product product) { this.product = product; }public String getNewTrack() { return newTrack; }public void setNewTrack(String newTrack) { this.newTrack = newTrack; }public String getNewChapter() { return newChapter; }public void setNewChapter(String newChapter) { this.newChapter = newChapter; }public void initProduct(){ Object selectedProduct = (FacesContext.getCurrentInstance().getExternalContext().getFlash()).get('selected');if (selectedProduct==null && !FacesContext.getCurrentInstance().isPostback()) { product = new Product(); product.setDetails(new Detail()); product.setPricing(new Pricing(0,0)); setNewProduct(true); } if (selectedProduct!=null) { product = (Product)selectedProduct; setNewProduct(false); }}public void doSave(ActionEvent event) {try { 
productService.saveProduct(product);FacesContext context = FacesContext.getCurrentInstance(); context.addMessage(null, new FacesMessage('Save Successfully!')); } catch (DataAccessException e) { e.printStackTrace();FacesContext context = FacesContext.getCurrentInstance(); context.addMessage(null, new FacesMessage(FacesMessage.SEVERITY_ERROR,'Error when saving product!',null));}}public void doAddTracks(ActionEvent event) { List<String> tracks = product.getDetails().getTracks(); if (CollectionUtils.isEmpty(tracks)) { product.getDetails().setTracks(new ArrayList<String>()); } product.getDetails().getTracks().add(this.newTrack);}public void doAddChapters(ActionEvent event) { List<String> tracks = product.getDetails().getChapters(); if (CollectionUtils.isEmpty(tracks)) { product.getDetails().setChapters(new ArrayList<String>() ); } product.getDetails().getChapters().add(this.newChapter);}public void clearDetails(ValueChangeEvent event) {if ('Audio Album'.equalsIgnoreCase(event.getNewValue().toString()) ) { product.getDetails().setChapters(null); } if ('Book'.equalsIgnoreCase( event.getNewValue().toString())) { product.getDetails().setTracks(null); } } } updated ProductService.java package com.borislam.service;import java.util.List;import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service;import com.borislam.domain.Product; import com.borislam.repository.ProductRepository; import com.borislam.view.ProductSearchCriteria;@Service public class ProductService {@Autowired private ProductRepository productRepository;public List<Product> searchByCriteria(ProductSearchCriteria criteria){ return productRepository.searchByCriteria(criteria); }public Product getProduct(String sku) { return productRepository.findBySku(sku); }public void saveProduct(Product p){ productRepository.save(p); }public void deleteProduct(Product p){ productRepository.delete(p); }} Conclusion: 1. 
Spring Data MongoDB provides MongoTemplate, which allows you to perform MongoDB operations easily. 2. MongoDB's JSON-style documents can be mapped to POJOs easily with the help of Spring Data MongoDB. 3. The repository abstraction of Spring Data reduces the boilerplate code written for accessing MongoDB. 4. You can add custom behaviour to a Spring Data repository. Get the source code  Reference: Introduction to sample application (MongoShop Product Catalog), MongoDB schema design and data preparation, JSF (PrimeFaces) and Spring data MongoDB Integration, Enquiry data with spring data repository and mongotemplate, Create, Edit and delete data, from our JCG partner Boris Lam at the Programming Peacefully blog. ...

Tweeting StackExchange with Spring Social – part 1

This article will cover a quick side-project – a bot to automatically tweet Top Questions from the various Q&A StackExchange sites, such as StackOverflow, ServerFault, SuperUser, etc. We will build a simple client for the StackExchange API and then we’ll set up the interaction with the Twitter API using Spring Social – this first part will focus on the StackExchange Client only. The initial purpose of this implementation is not to be a full-fledged Client for the entire StackExchange API – that would be outside the scope of this project. The only reason the Client exists is that I couldn’t find one that would work against the 2.x version of the official API. 1. The Maven dependencies To consume the StackExchange REST API, we will need very few dependencies – essentially just an HTTP client – the Apache HttpClient will do just fine for this purpose: <dependency> <groupId>org.apache.httpcomponents</groupId> <artifactId>httpclient</artifactId> <version>4.2.3</version> </dependency> The Spring RestTemplate could also have been used to interact with the HTTP API, but that would have introduced quite a lot of other Spring related dependencies into the project, dependencies which are not strictly necessary, so HttpClient will keep things light and simple. 2. The Questions Client The goal of this Client is to consume the /questions REST Service that StackExchange publishes, not to provide a general-purpose client for the entire StackExchange APIs – so for the purpose of this article we will only look at that. 
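To be concrete about what the client will do on the wire, it issues plain GET requests against URLs of the following shape. This stdlib-only sketch (the class and helper names are mine, not part of the client, and the parameter values are just an example) shows the kind of URL being targeted:

```java
public class QuestionsUrlSketch {

    // Builds a StackExchange /questions URL with the query parameters
    // used later in the article (order, sort, min, site).
    static String questionsUrl(String site, int min) {
        return "https://api.stackexchange.com/2.1/questions"
                + "?order=desc"
                + "&sort=votes"
                + "&min=" + min
                + "&site=" + site;
    }

    public static void main(String[] args) {
        // prints the URL the client would request for ServerFault questions
        // with at least 50 votes
        System.out.println(questionsUrl("serverfault", 50));
    }
}
```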
The actual HTTP communication using HttpClient is relatively straightforward: public String questions(int min, String questionsUri) { HttpGet request = null; try { request = new HttpGet(questionsUri); HttpResponse httpResponse = client.execute(request); InputStream entityContentStream = httpResponse.getEntity().getContent(); return IOUtils.toString(entityContentStream, Charset.forName("utf-8")); } catch (IOException ex) { throw new IllegalStateException(ex); } finally { if (request != null) { request.releaseConnection(); } } } This simple interaction is perfectly adequate for obtaining the raw questions JSON that the API publishes – the next step will be processing that JSON. There is one relevant detail here – the questionsUri method argument – there are multiple StackExchange APIs that can publish questions (as the official documentation suggests), and this method needs to be flexible enough to consume all of them. It can, for example, consume the simplest API that returns questions by setting questionsUri to https://api.stackexchange.com/2.1/questions?site=stackoverflow, or it may consume the tag based https://api.stackexchange.com/2.1/tags/{tags}/faq?site=stackoverflow API instead, depending on what the client needs. A request to the StackExchange API is fully configured with query parameters, even for the more complex advanced search queries – there is no body being sent. To construct the questionsUri, we’ll build a basic fluent RequestBuilder class that will make use of the URIBuilder from the HttpClient library. 
This will take care of correctly encoding the URI and generally making sure that the end result is valid: public class RequestBuilder { private Map<String, Object> parameters = new HashMap<>(); public RequestBuilder add(String paramName, Object paramValue) { this.parameters.put(paramName, paramValue); return this; } public String build() { URIBuilder uriBuilder = new URIBuilder(); for (Entry<String, Object> param : this.parameters.entrySet()) { uriBuilder.addParameter(param.getKey(), param.getValue().toString()); } return uriBuilder.toString(); } } So now, to construct a valid URI for the StackExchange API: String params = new RequestBuilder(). add("order", "desc").add("sort", "votes").add("min", min).add("site", site).build(); return "https://api.stackexchange.com/2.1/questions" + params; 3. Testing the Client The Client will output raw JSON, but to test that, we will need a JSON processing library, specifically Jackson 2: <dependency> <groupId>com.fasterxml.jackson.core</groupId> <artifactId>jackson-databind</artifactId> <version>2.1.3</version> <scope>test</scope> </dependency> The tests we’ll look at will interact with the actual StackExchange API: @Test public void whenRequestIsPerformed_thenSuccess() throws ClientProtocolException, IOException { HttpResponse response = questionsApi.questionsAsResponse(50, Site.serverfault); assertThat(response.getStatusLine().getStatusCode(), equalTo(200)); } @Test public void whenRequestIsPerformed_thenOutputIsJson() throws ClientProtocolException, IOException { HttpResponse response = questionsApi.questionsAsResponse(50, Site.serverfault); String contentType = response.getHeaders(HttpHeaders.CONTENT_TYPE)[0].getValue(); assertThat(contentType, containsString("application/json")); } @Test public void whenParsingOutputFromQuestionsApi_thenOutputContainsSomeQuestions() throws ClientProtocolException, IOException { String questionsAsJson = questionsApi.questions(50, Site.serverfault); JsonNode rootNode = new 
ObjectMapper().readTree(questionsAsJson); ArrayNode questionsArray = (ArrayNode) rootNode.get("items"); assertThat(questionsArray.size(), greaterThan(20)); } The first test verifies that the response provided by the API was indeed a 200 OK, so the GET request to retrieve the questions was actually successful. After that basic condition is ensured, we move on to the Representation – as specified by the Content-Type HTTP header – which needs to be JSON. Next, we actually parse the JSON and verify that there are actually Questions in that output – the parsing logic itself is low level and simple, which is enough for the purpose of the test. Note that these requests count towards your rate limits specified by the API – for that reason, the Live tests are excluded from the standard Maven build: <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <version>2.13</version> <configuration> <excludes> <exclude>**/*LiveTest.java</exclude> </excludes> </configuration> </plugin> 4. The Next Step The current Client is only focused on a single type of Resource from the many available types published by the StackExchange APIs. This is because its initial purpose is limited – it only needs to enable a user to consume Questions from the various sites in the StackExchange portfolio. Consequently, the Client can be improved beyond the scope of this initial use case to be able to consume the other types of the API. The implementation is also very much raw – after consuming the Questions REST Service, it simply returns the JSON output as a String, without building any kind of Questions model out of that output. So, a potential next step would be to unmarshall this JSON into a proper domain DTO and return that instead of raw JSON. 5. Conclusion The purpose of this article was to show how to start building an integration with the StackExchange API, or really any HTTP based API out there. 
It covered how to write integration tests against the live API and make sure the end-to-end interaction actually works. The second part of this article will show how to interact with the Twitter API by using the Spring Social library, and how to use the StackExchange Client we built here to tweet questions on a new Twitter account. I have already set up a few Twitter accounts that are now tweeting the two top questions per day for various disciplines:SpringAtSO – Two of the best Spring questions from StackOverflow each day JavaTopSO – Two of the best Java questions from StackOverflow each day AskUbuntuBest – Two of the best questions from AskUbuntu each day BestBash – Two of the best Bash questions from all StackExchange sites each day ServerFaultBest – Two of the best questions from ServerFault each day The full implementation of this StackExchange Client is on GitHub.   Reference: Tweeting StackExchange with Spring Social – part 1 from our JCG partner Eugen Paraschiv at the baeldung blog. ...
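As a side note on the “Next Step” section above: the domain DTO the author suggests could be sketched as a plain class like the one below. The field names (title, link, score) are my assumptions based on fields commonly found in the API’s items array, not a confirmed schema:

```java
// Sketch of a Question DTO for the unmarshalling step suggested in the article.
// Field names (title, link, score) are assumptions, not the full StackExchange schema.
class Question {
    private final String title;
    private final String link;
    private final int score;

    Question(String title, String link, int score) {
        this.title = title;
        this.link = link;
        this.score = score;
    }

    public String getTitle() { return title; }
    public String getLink() { return link; }
    public int getScore() { return score; }

    @Override
    public String toString() {
        // A compact representation, handy when later composing tweets
        return score + " | " + title + " | " + link;
    }
}
```

With Jackson, each element of the items array could then be bound to this class (configured to ignore unknown properties), so the client could return a List<Question> instead of a raw JSON String.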

Spring meets Apache Hadoop

SpringSource has just announced the first GA release of Spring for Apache Hadoop. The goal of this project is to simplify the development of Hadoop based applications. You may download the project here and check out the Maven artifacts here. Spring for Apache Hadoop was born to resolve the issue of poorly constructed Hadoop applications, which usually consist of command line utilities, scripts and pieces of code stitched together. It provides a consistent programming and configuration model across a wide range of Hadoop ecosystem projects, as expected from a Spring project. The well known Template API design pattern is also embraced here, so the framework includes classes like:HBaseTemplate HiveTemplate PigTemplateAnother embraced aspect is the approach of starting small and growing into complex solutions. So, Spring for Hadoop introduces various Runner classes which allow the execution of Hive and Pig scripts, vanilla Map/Reduce or Streaming jobs and Cascading flows, but also the invocation of generic pre- and post-action JVM-based scripting, all through the familiar JDK Callable contract. When things start to get more complex, upgrading to Spring Batch is straightforward and easy. Spring Batch’s rich functionality for handling the ETL processing of large files translates directly into Hadoop use cases for the ingestion and export of files from HDFS. Also, the use of Spring Hadoop in combination with Spring Integration allows for rich processing of event streams that can be transformed, enriched and filtered before being read from or written to HDFS or other storage such as NoSQL stores, for which Spring Data provides plenty of support. To kick-start your applications, you can start with the sample apps provided (already compiled and ready for download). If you test drive Spring for Hadoop, let us know and share the knowledge. Happy coding! ...

Organizing an Agile Program: Part 1, Introduction

If you want to organize an agile program, so you can manage the stream of features in your agile program, you have some options. It depends on the size of your program. The communication structures in your agile program matter. How Large Is Your Agile Program? I think of programs as small, medium, and large. Yes, it’s a generality, and it’s been helpful to me. A small program is up to three teams. A medium program is four to nine teams. A large program is ten teams or more. Your mileage will definitely vary. It varies because of the size of the teams. Here’s why I find these guidelines useful. Remember back when I wrote Why Team Size Matters? If you try to graph the communication paths for a 5-person team, you have a graph that looks like the one to the left. Now, you’ve only added one more person, and you’ve increased the communication paths by five. This is why the size of the team matters when you think about your program and whether you have a small, medium or large program. If you have four-person teams and you have three teams, you have a small program. If you have ten-person teams, you might still have a small program. But you’re pushing it. You might have a medium size program. You know how sometimes you have to change the data structures in code when the size of the data changes? The same thing happens with a program. You want to think about the organization and communication structures you’re using, to make sure they enhance the product development you’re doing, and are not holding you back. Remember that an agile team depends on the micro-commitments every single day between its members to make progress. The more teams you have attempting to make progress on features across the organization, the more you want to enhance that communication. So you want to select a program organization that enhances that communication. Programs Have Communication Paths, Too In an agile program, the technical project teams make commitments to and with each other. 
They have communication paths not just inside the teams but between the teams. The way they communicate can vary depending on whether the program is small, medium or large. How you organize the program will change how you communicate. You have choices. Many people use Scrum as their agile approach of choice. Scrum is a great project management framework. It’s designed for a 5-7 person collocated team. To scale Scrum, especially for small programs, many people use a technique called Scrum-of-Scrums. Scrum of Scrums is a Hierarchy I have nothing against Scrum-of-Scrums. In my opinion, it’s a lot of overhead for not a lot of return, but it works for a lot of people. And, more importantly, it’s a lot more effective than what they were doing. You know me. If it’s more effective and they are successful, rock on! And, SoS is a hierarchy. The Scrum teams get together at their standups and then send their Scrum Masters to another standup where the Masters get together and discuss the cross-program risks on a daily basis. They then have to get back together with their teams. Yes, they do this every day. So, the problems go up the chain and down the chain. Now, in a small program, such as three teams, especially if the teams are small and the people are trained well in agile, and they are accustomed to working in small features that are written end-to-end, the people solve problems themselves. Alice doesn’t have a problem walking over to someone and saying, “Hey Bob, I have a problem. Can you take a look at this and tell me what’s wrong?” But, what I see, even in small programs, is that the teams are not collocated. Alice is not located in the same timezone as Bob. Alice needs her Scrum Master to coordinate with Bob’s Scrum Master. And, because it’s the job of the Scrum Masters to coordinate, it’s not anyone else’s job to coordinate. Let me repeat that. 
Because it’s the Scrum Master’s job to facilitate and maintain coordination, it’s no one else’s job to do so. Why? Because everyone else is busy getting their features to done. It’s human nature. We push problems up a hierarchy for someone else to remove if that’s how it’s supposed to be. This is not Scrum’s fault. It is a human thing. It’s the way we are made. Add Communities of Practice The Scrum Masters cannot and should not track the details of everything for the testing and the architecture and the UX and the databases and and and… That will depend on your program. You may well need people on each team connected with communities of practice. And, you need someone from each team connected with the program team. So, you have more of a messy picture. But, who cares if you have a messy picture if you have a more effective program? This picture has a community of practice built in. You can do this with S-o-S. You don’t need permission. Now, with a three-team program of small teams of only 5 people, you might not need it. But, if you have 10-person teams, or if you are geographically distributed, you might. Ask yourself these questions: are we getting to done every iteration on every single feature on every single team? Do we have interdependency problems? If you are larger than three teams, you almost certainly need a different structure. If you cannot answer yes to the first question, or if you are having trouble with your features flowing through your program, you also need a different structure. Communities of practice help. Lean Works, Too For those of you wondering, yes, you can always move to a lean approach. This works for any size team. I’m going to talk more about this for medium and large teams, because lean really shines there. So hang in there for later. This discussion is about communication structures between teams, not the process or lifecycle the individual teams should use in a program. 
As a program manager, I don’t care what the teams use as long as they meet their commitments to the program. I hope you are not surprised by that. More about that, later. Use a Network Instead of a Hierarchy Instead of a hierarchy, you have a choice of using a network, even if you are using Scrum. Even if you just have a three-team program. When you have a network of teams, you don’t have to have everyone interconnected with everyone else. And, you don’t just need the Scrum Masters connected to each other. You need teams tightly connected within themselves. And, you need teams with loose connections to each other. This is called a small world network. (Do not sing the Disney song. No, I told you not to!) If you’ve heard of the six degrees of separation from Kevin Bacon thing, this is how it works. You don’t need all of the teams connected to each other, as long as all of them are connected to some of them. For a three-team program, all of the teams would be connected. That’s easy. Since, in agile, we want to encourage people to collaborate, the small world network pattern is a reasonable pattern to use. With a small world network, if Bob and Alice have a question, they ask each other. It’s their obligation to do so. If they don’t know that they should ask each other, it’s their obligation to ask someone who might know. That someone is not necessarily a Scrum Master. Or an agile project manager. Or a program manager. It’s someone in their network. The question doesn’t go up the hierarchy. It goes across the network. This is not anarchy. It’s collaboration. Here’s a quote from Clay Shirky from Here Comes Everybody: The Power of Organizing Without Organizations: Collaborative production, where people have to coordinate with one another to get anything done, is considerably harder than simple sharing, but the results can be more profound. 
New tools allow large groups to collaborate, by taking advantage of nonfinancial motivations and by allowing for wildly differing levels of contribution. I’ve drawn my picture using five teams, because while the small world network is useful for small programs, you can really see how it starts to shine for medium programs. For a small program, use the simplest thing that works. (Do that anyway.) The real question is this: does what you are doing scale as your program grows? For my clients, the answer has been no. In fact, for my clients, Scrum of Scrums has not even worked for three teams. Why? Because they have not gotten training or coaching. They have tried to learn Scrum out of books. They have sent one person to one class and attempted to learn it that way. This is not a successful way to learn anything about agile. When my clients drop the Scrum of Scrums approach and revert to networks, which is how their work got done in their organization before, they get their work done. Don’t ask me why they thought all the questions had to be funneled through the Scrum Master. I have no idea. I do know that the network approach works for them. And, it’s a lot less overhead for the poor Scrum Master/Agile Project Manager. Why Does a Network Approach Work? The small world network pattern works because it puts the inherent rumor mill to work for you. The small world network engages people in a way that hierarchy does not. And, it decreases the transaction cost of just about everything. That makes a huge difference. You don’t have to wait for any standups to address problems, issues, or risks. People on teams solve problems when they have the problem. No need for a “master” or a “chief” to intervene. You know how much I dislike the chief titles business. In the next parts, I’ll talk more about how networks help, especially for medium and large programs, and how they help features move through programs. Thanks for hanging in there with me. 
I tried to make this short, but I am not a good enough writer to write this short. Please, ask me questions.   Reference: Organizing an Agile Program: Part 1, Introduction from our JCG partner Johanna Rothman at the Managing Product Development blog. ...
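The team-size arithmetic behind the communication-path graphs in this post is the pairwise formula n(n-1)/2, which is why adding a sixth person to a five-person team adds five new paths. A quick sketch:

```java
// Pairwise communication paths in a team of n people: n * (n - 1) / 2.
class TeamMath {
    static int communicationPaths(int teamSize) {
        return teamSize * (teamSize - 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(communicationPaths(5));  // 10 paths in a 5-person team
        System.out.println(communicationPaths(6));  // 15 paths: one new person, five new paths
        System.out.println(communicationPaths(10)); // 45 paths in a 10-person team
    }
}
```

The same formula explains why a program of several ten-person teams stresses any coordination structure much sooner than one made of four-person teams.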

Notes on Continuous Delivery – Configuration Management

Overview I will be continuing the topic of Continuous Delivery which began in my previous post: Notes on Continuous Integration; this time we will start looking at the first and most important step, Configuration Management. In the words of the authors (resource below): Configuration Management refers to the process by which all artifacts … and the relationships between them, are stored, retrieved, uniquely identified, and modified. Configuration Management involves four principles: 1. Keep everything in version control Source control is just for source code; version control is for any type of artifact your project has: source code, tests, database scripts, wireframes, high definition mock-ups, build scripts, documentation, libraries, configuration files, release plans, requirements documents, architecture diagrams, virtual machine configuration, virtual machine images and so on. I challenge you to change this in your geeky vocabulary, if you haven’t already. A little ambitious, I agree. The metadata stored within your version control system enables you to access every version of every file you have ever stored, and facilitates collaboration amongst teams distributed in space and time. The level of sophistication in your continuous delivery process will depend on how mature your configuration management strategy is. If storing absolutely everything is not feasible, or requires too much work and money, start with a few artifacts (in addition to source code, of course) and you will see the improvements in your process almost instantly. Promote the idea of checking in code frequently, followed with useful commit messages. Make use of the typical ‘-m‘ switch in your version control command line tool; almost all of them support it. Version control gives you the freedom to delete files; worst case scenario, you can always retrieve them easily. It’s like refactoring code with endless ‘Undo’ history. 2. 
Manage dependencies Dependencies constitute any external libraries, components, and modules that your application uses. For instance:JAR files – Java or JVM languages PEAR, PECL modules or PHAR files – PHP DLLs – .NET Ruby Gems – Ruby Bindings – Python NPM – Node.jsAlso under this category would be any extensions or libraries your operating system is configured with. Build tools make it easier (not easy) to manage dependencies. I have had success with tools such as Maven, Ant, and Phing. There are many others that have gotten a lot of traction lately, such as Gradle and Ivy. I would recommend a tool like Maven because it makes it possible to recreate environments on different machines as well as manage your dependencies in a centralized fashion by setting up a package repository. 3. Manage software configuration Perhaps the most important principle of all, configuration management requires that you have a strategy for automating the injection of configuration properties into your software. This is critical if you are planning to support different environments: QA, Staging, Preview, and Production. Configuration information can be injected:At build time or packaging time: use Maven to read property files and inject configuration information into the packaged artifact. This is usually seen as property files with name-value pair records. These are common in Java and PHP; YAML files are common in the Ruby and Python worlds. XML is a good contender here too, supported on all platforms.At startup time: usually done via environment variables or command line arguments.At runtime: say you store configuration information in the database or in an external system. Then your application can fetch configuration information from the web or via scripts and apply it. For this, some bootstrapping information is always necessary, such as database connections and external URIs.Whichever mechanism you use, make sure everything is version controlled. 
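The injection points above can also be combined, so that a startup-time environment variable overrides a build-time property file. Here is a minimal sketch of that idea; the variable name APP_DB_URL and the key db.url are purely illustrative, not a convention from the article:

```java
import java.util.Properties;

// Sketch: resolve a configuration value from an environment variable first
// (startup-time injection), falling back to a name-value property file
// (build-time injection). Names (APP_DB_URL, db.url) are illustrative only.
class ConfigLoader {
    static String resolve(String envVar, String propertyKey, Properties fileProps) {
        String fromEnv = System.getenv(envVar);
        if (fromEnv != null && !fromEnv.isEmpty()) {
            return fromEnv; // startup-time injection wins
        }
        return fileProps.getProperty(propertyKey); // fall back to the property file
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        // In a real build, Maven would have filtered this value per environment.
        props.setProperty("db.url", "jdbc:mysql://qa-host/app");
        System.out.println(resolve("APP_DB_URL", "db.url", props));
    }
}
```

Keeping the fallback in a version-controlled property file preserves the principle above: the environment variable is only an override, and the recorded default can always be recreated.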
One immediate question is, how can we store sensitive information like passwords? Here is one way to do that: John Resig: Keeping Passwords in source control. Another is to store passwords together with the code so that they get compiled and obfuscated. This is not good practice, especially in interpreted languages. I recommend using secure certificates, SSL keys, or encrypted information at the very least. The simplest approach to configuration management with which I have had success is the build time injection of property files mentioned above. These vary depending on the following:The application The version of the application The environment (QA, Production, UAT, Preview, Staging, etc)With Maven, you can easily inject configuration information into the application before packaging it into a deployable artifact. This is a very nice and simple approach because all of the configuration is stored in version control and uses the file system, which is widely supported on all platforms. Disclaimer: If you are actually developing Java Applets (I don’t know why someone would still want to do that…), then access to the file system might not be an option. In that case, storing your configuration in an external system that can be fetched via RESTful calls is a good solution; all of the same principles mentioned thus far still apply. Something to keep in mind: consider configuration management early in the development lifecycle. Create patterns for your organization so that every application does it the same way: convention over configuration. Often, it is an afterthought, and teams tend to develop their own ad-hoc configuration management strategy. This will make it really hard for your Ops and DevOps teams to automate, you will waste time reinventing the wheel, and you are very likely to make the same mistakes someone else already made. 4. Manage environments The configuration of the environment is as important as the application’s. 
Your application might need specific system level configuration such as the number of file handles, memory limits, firewall or networking rules, connection pools, etc. This is very important to get right, and can vary from one application to another. The same principles from above apply here too. Do not reinvent the wheel, do not implement ad-hoc solutions, and keep everything in version control. Obviously you cannot check your OS into version control, but its configuration and scripts can be. I would say that, without a doubt, managing environments is probably one of the hardest things to automate. The end goal is to be able to recreate full environment baselines at the touch of a button, including different versions in time. In order for this to work, you absolutely must create a strong synergy with a very agile IT environment, which is not necessarily the case in most organizations and can be very costly. Different departments can have very different philosophies when it comes to managing their environments — these silos must be broken. As a result, many organizations resort to virtual environments powered by Citrix or VMWare or cloud environments such as AppEngine, Amazon EC2, Rackspace, Heroku, Azure, etc. You need to be able to fully control the environments you deploy to. As I said before, start by automating as much as you can; little steps towards this will reap lots of benefits down the line. I don’t have any experience with environment management, but I’ve heard good things about systems like Puppet. Conclusion Configuration management sits at the core of continuous delivery. Store all application and infrastructure information so that you can recreate environments: configuration, database, DNS zone files, firewall configuration, patches, libraries installed, extensions, etc. The automation process (as we will see in later posts) will depend on having every artifact your application needs accessible on demand. 
In the projects I have worked on, I always promote checking in to version control frequently, as well as its counterpart, updating from version control frequently. It helps a lot with resolving conflicts and tedious merges. One caveat of continuous delivery is that it goes against branching. Branching is antithetical to continuous integration. If you are using Git or Mercurial, where branching and merging are the norm, establish a commit and push policy that works for your team; you want to have a stable and updated main trunk line from which you can deploy your code. One piece of advice: meet with your Ops team to determine how to properly implement configuration management for your applications. This is an aspect of the system everyone should be aware of. If you will be using key-value pairs for your configuration options, use descriptive names for your property keys. The name should express very clearly and concisely what the configuration is for. Use lots of comments in your property files to add more description if necessary. Have a look at a PHP installation’s php.ini file for a good example. Do not hard code property keys everywhere you need to access them; wrap them with some sort of ConfigurationService that makes this access simple and testable. Finally, it is typical for web systems to expose your application’s configuration in some sort of management console for super user admins to change. While this sounds like a good idea, and up until this point I thought it was, it’s not. Unless you can write that change back into version control, runtime system configuration is not a good idea. In addition, a simple change to a property file can potentially break or degrade the entire system. Therefore, it should follow the same mechanisms in place for source code changes. ResourcesHumble, Jez and Farley, David. Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation. Addison Wesley. 
2011.  Reference: Notes on Continuous Delivery – Configuration Management from our JCG partner Luis Atencio at the Luisatencio.net blog. ...
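The ConfigurationService wrapper suggested in the post, instead of hard-coding property keys at every call site, could be sketched as follows; the key names are illustrative:

```java
import java.util.Properties;

// Sketch of the ConfigurationService idea: one testable access point for
// configuration instead of property keys scattered across the codebase.
// Key names used below are illustrative only.
class ConfigurationService {
    private final Properties props;

    ConfigurationService(Properties props) {
        this.props = props;
    }

    String get(String key) {
        String value = props.getProperty(key);
        if (value == null) {
            // Failing fast beats discovering a missing key deep in production code
            throw new IllegalStateException("Missing configuration key: " + key);
        }
        return value;
    }

    int getInt(String key) {
        return Integer.parseInt(get(key));
    }
}
```

In tests, the service can be constructed from an in-memory Properties object; in production, from the property file injected at build time, which keeps the access path identical in both cases.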

Code Quality stage using Jenkins

In Continuous Delivery each build is potentially shippable. Among a lot of other things, this implies assigning a non-snapshot version to your components as fast as possible so you can refer to them throughout the process. Usually an automated software delivery process consists of several stages like Commit stage, Code Quality, Acceptance Tests, Manual Test, Deployment, … But let’s focus on the second stage, related to code quality. Note that in my previous post (http://www.lordofthejars.com/2013/02/conditional-buildstep-jenkins-plugin.html) there are some concepts that are being used here. The second stage in continuous delivery is code quality. This step is very important because it is where we run static code analysis for detecting possible defects (mostly possible NPEs), code convention violations or unnecessary object creation. Some projects that are typically used are Checkstyle, PMD or FindBugs, among others. In this case we are going to see how to use Checkstyle, but of course it is very similar with any other tool. So the first thing to do is configure Checkstyle in our build tool (in this case Maven). Because we only want to run the static analysis in the second stage of our pipeline, we are going to register the Checkstyle Maven plugin in a metrics profile. Keep in mind that all plugins used for code analysis should be added to that profile: <profiles> <profile> <id>metrics</id> <build> <plugins> <!-- CHECKSTYLE --> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-checkstyle-plugin</artifactId> <version>2.9.1</version> </plugin> </plugins> </build> </profile> </profiles> Now that we have our pom configured with Checkstyle, we can configure Jenkins to run the Code Quality stage after the first stage (explained in my previous post). In this case we are going to use the Trigger Parameterized Build plugin to execute the code quality job from the commit stage. 
Because the code of the current build version has been pushed to a release branch (see my previous post) during the commit stage, we need to set the branch name as a parameter for the code quality Jenkins job, so the code can be downloaded and the static analysis run. In the build job of our first stage, we add a Post-build Action of type Trigger parameterized build on other projects. First we open the Configure menu of the first build job of the pipeline and configure it so the next build job of the pipeline (helloworld-code-quality) is executed only if the current job is stable. We also define the RELEASE_BRANCH_NAME parameter with the branch name. Then we create a new build job that will be in charge of running the static code analysis; we are going to name it helloworld-code-quality. And we configure the new build job. First of all, check the option ‘This build is parameterized‘ and add a String parameter named RELEASE_BRANCH_NAME. After that we can use the RELEASE_BRANCH_NAME parameter in the current job. So in the Source Code Management section we add the repository URL, and in Branches to build we set origin/${RELEASE_BRANCH_NAME}. Then in the Build section we add a Maven build step, which executes the Checkstyle goal: checkstyle:checkstyle -P metrics. And finally, to have better visibility of the result, we can install the Checkstyle Jenkins plugin and publish the report. After the plugin is installed, we can add a new Post-build Action named ‘Publish Checkstyle analysis result‘. In our case the report is located at **/target/checkstyle-result.xml. And that’s all for the current stage; the next stage is responsible for executing the acceptance tests, but that will be covered in another post. So in summary, we have learned how, after code is compiled and some tests are executed (in the first stage of the pipeline), the Code Quality stage is run in Jenkins using the Checkstyle Maven plugin.   Reference: Code Quality stage using Jenkins from our JCG partner Alex Soto at the One Jar To Rule Them All blog. ...

Prime-UI, JAX-RS with Jersey and Gson on Oracle Cloud

The Oracle Cloud is around everywhere these days. It had a rough start, with Larry denying the need for a cloud for a very (too) long time, some very early announcements and very bad availability after last year's Open World, and now nobody seems to be interested anymore. But for me it still has its hidden treasures and I believe it has a fair chance of winning its customers. Before I dive into the example, which will show you how to use JAX-RS with Jersey on the Oracle Cloud Service, I want to introduce you to the service a little bit. Feel free to skip this first section. What the hell is the Oracle Cloud and why do you care? The Oracle Cloud is a marketing term. It tries to capture a couple of different services sharing a common base called the platform services. Those two basically are the Java and the Database Service. Technically this isn't too new. We are talking about Oracle's 'Cloud Application Foundation', which has been out there for a while. It sits at the bottom of the whole Oracle Fusion Middleware Stack (at least in the available marketing slides) and is the basic software stack that runs on the Exalogic appliances. The most relevant parts for Java developers are the Java EE 5 WebLogic Server and a load balancing solution called Traffic Director. The neat part here is that you literally can have your personal share of a real Exalogic machine in the cloud for a fraction of what even the smallest rack costs. And it is running in data centers around the world, fully managed and including the licenses. So, by paying your monthly mite you are done with the administrative parts. And if you ever had the questionable pleasure of dealing with licensing and supported platforms, you know a little about the added value in it. Technically speaking, the Java Service is of little interest at all.
EE 5 is outdated and even the Java SE 6 based JRockit feels like a stranger from the past, with all the new features in Java SE 7 and the end-of-public-updates policy for SE 6. But I still consider it a good start and I am very much looking forward to having the latest WebLogic 12c and a decent Java 7 in the cloud. WebLogic Server and JAX-RS Do you remember the ancient days? Java EE 5? Having worked with the EE 6 specification for a couple of years now, it feels like driving the car you had as a student again. Believe it or not: JAX-RS wasn't part of EE 5 at all. And this is exactly the reason why JAX-RS doesn't run out of the box on the Oracle Java Service. But you might know that the WebLogic team is very aware of the fact that they are running late with EE adoption, and so they are rolling out features which will be included in the base server with the next specification version, bit by bit, to earlier versions. The same happened with JAX-RS back in early 2011. Since 10.3.4 you have been able to use Jersey as the JAX-RS implementation by simply adding a library dependency or packaging it with your application. This also works for the Java Service. Simply start a new Maven project in your favorite IDE (which might be the latest NetBeans 7.3, hot off the press) and add Jersey as a dependency with scope provided:

<dependency>
  <groupId>com.sun.jersey</groupId>
  <artifactId>jersey-server</artifactId>
  <version>1.9</version>
  <scope>provided</scope>
</dependency>

Another pointer is the Java version you should compile against. Make sure SE 7 doesn't slip in somewhere and set the maven-compiler-plugin to use source and target version 1.6. Sad as it is … The next thing to add is the weblogic.xml library ref for Jersey:

<library-ref>
  <library-name>jax-rs</library-name>
  <specification-version>1.1</specification-version>
  <implementation-version>1.9</implementation-version>
</library-ref>

This simply tells the container to add this library to the class-loader.
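The compiler settings just mentioned are not shown in the post; a minimal maven-compiler-plugin sketch pinning source and target to 1.6 might look like this (the plugin version here is illustrative):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <version>2.5.1</version>
  <configuration>
    <!-- keep the bytecode compatible with the EE 5 / SE 6 runtime -->
    <source>1.6</source>
    <target>1.6</target>
  </configuration>
</plugin>
```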
Typically you would have to deploy it to your domain first. But believe me: it is already there and you can simply use it. If you are using NetBeans and you start with the new 'RESTful Web Services from Patterns' wizard you might end up with a couple more (unneeded) dependencies, but this saves you from adding the Jersey configuration to your web.xml, which should look like the following:

<servlet>
  <servlet-name>ServletAdaptor</servlet-name>
  <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class>
  <init-param>
    <description>Multiple packages, separated by semicolon(;), can be specified in param-value</description>
    <param-name>com.sun.jersey.config.property.packages</param-name>
    <param-value>net.eisele.primeui.cloud</param-value>
  </init-param>
  <init-param>
    <param-name>com.sun.jersey.api.json.POJOMappingFeature</param-name>
    <param-value>true</param-value>
  </init-param>
  <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
  <servlet-name>ServletAdaptor</servlet-name>
  <url-pattern>/webresources/*</url-pattern>
</servlet-mapping>

This simply registers the REST servlet together with the package scanning path for your annotated implementation. Choose whatever mapping you like. If you are following this example closely you should be aware that I'm going to hard-code the URL to the service in JavaScript later. Watch out for the '/webresources' part. Adding some JSON You surely noticed the net.eisele.primeui.cloud package reference. Let's look at the class:

@Path("countries")
public class RestResource {
    //...
    @GET
    @Produces("application/json")
    public String getJson(@QueryParam("query") String query) {
        String[] raw = { "Albania", "Algeria",
            //...
        };
        List<ValueHolder> countries = new ArrayList<ValueHolder>();
        for (int i = 0; i < raw.length; i++) {
            countries.add(new ValueHolder(raw[i]));
        }
        Gson gson = new Gson();
        return gson.toJson(countries);
    }
}

public class ValueHolder {
    public ValueHolder() {}

    public ValueHolder(String label) {
        this.label = label;
        this.value = "v_" + label;
    }

    private String label;
    private String value;
}

This basically contains a String[] of countries. Each entry gets converted to a ValueHolder object and added to an ArrayList, which gets converted to JSON with the help of Google's gson library. This is the second dependency we need to include in the pom.xml:

<dependency>
  <groupId>com.google.code.gson</groupId>
  <artifactId>gson</artifactId>
  <version>2.2.2</version>
  <scope>compile</scope>
</dependency>

Make sure this is packaged with your application by using the compile scope. Mostly done now. You noticed the @QueryParam("query"). I built some more logic around selecting the right entries from the String[] to decide which ValueHolder to return. For the complete example refer to the RestResource on github. Now we are in need of a nice front-end. Prime-UI to the rescue Everybody is talking about JavaScript these days and I thought it might be a good way of showing off some of the things possible with the latest PrimeFaces offspring called Prime-UI. Those guys do a great job pushing their already well known and widely used JSF library PrimeFaces out to the jQuery world by providing a widget library. Get everything you need from the PrimeFaces website by downloading the prime-ui zip file. If you started with a web project in NetBeans and did not add JSF, you end up with a nice little jsp file in the webapp folder. Open it and make some changes and tweaks to it.
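The post defers the full query-matching logic to GitHub. As a rough stdlib-only sketch of what it could look like (the class and method names here are my own, not from the original source), a case-insensitive prefix filter over the country array would be:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the query filtering hinted at in the post:
// keep only the countries whose name starts with the query string.
public class CountryFilter {

    public static List<String> filter(String[] raw, String query) {
        List<String> matches = new ArrayList<String>();
        String prefix = query.toLowerCase();
        for (String country : raw) {
            // case-insensitive prefix match against the query parameter
            if (country.toLowerCase().startsWith(prefix)) {
                matches.add(country);
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        String[] raw = { "Albania", "Algeria", "Austria" };
        System.out.println(filter(raw, "Al")); // prints [Albania, Algeria]
    }
}
```

The real RestResource would then wrap each match in a ValueHolder and serialize the list with Gson, as in the class above.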
The most important ones are the HTML5 doctype declaration and the needed JavaScript imports:

<%@page contentType='text/html' pageEncoding='UTF-8'%>
<!DOCTYPE html>
<!-- header, title, all the other stuff you need -->
<!-- jQuery -->
<script src='js/vendor/jquery.js'></script>
<!-- jQuery UI -->
<script src='js/vendor/jquery-ui.js'></script>
<!-- Prime UI Core -->
<script src='js/core/core.js'></script>
<!-- Prime UI Input Text -->
<script src='js/inputtext/inputtext.js'></script>
<!-- Prime UI Autocomplete -->
<script src='js/autocomplete/autocomplete.js'></script>

The autocomplete example binds an input text field to a backend and gives you type-ahead features. Assuming you have the REST service above running, you now simply add the following JavaScript to your head section:

<script type='text/javascript'>
$(function() {
  $('#remote').puiautocomplete({
    effect: 'fade',
    effectSpeed: 'fast',
    completeSource: function(request, response) {
      $.ajax({
        type: 'GET',
        url: './webresources/countries',
        data: {query: request.query},
        dataType: 'json',
        context: this,
        success: function(data) {
          response.call(this, data);
        },
        error: function(jqXHR, textStatus, errorThrown) {
          console.log(textStatus, errorThrown);
        }
      });
    }
  });
});
</script>

And add the input tag to the body section of the page:

<input id='remote' name='remote' type='text'/>

That is all you have to do. One little remark: if you deploy the app as it is, you will be prompted with a login screen in front of it. In order to open it to the public you have to add an empty <login-config/> element to your web.xml. Now go on, add the cloud to your IDE and deploy the application to your trial instance. If you are using my github sources, it should look like this: depending on the query, it returns the matching results. Going the postman way it looks like this: Take away I hope you didn't expect this to be rocket science at all.
It is a basic post along the lines of what most WebLogic server developers might have known already. This is one of the biggest advantages, but also a big disadvantage, of the Oracle Java Cloud Service. If you know WebLogic you are most likely going to love it. If you are on the Open Source side of things you might run into issues that are well known to the Oracle Middleware guys but not to you. EE 5 isn't as complete as EE 6, and EE 7 will only be slightly better at closing the vendor specific gaps between all the different implementations. But again: this isn't something new for you, right? Now go: give it a test-drive and share your experiences! Looking forward to reading about them!   Reference: Prime-UI, JAX-RS with Jersey and Gson on Oracle Cloud from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog. ...

Virtual Host + Nginx + Tomcat

My Virtual Host + Apache httpd server + Tomcat + mod_jk connector post has lots of comments, and I hope everyone liked it. But the problem is that the Apache httpd server with the mod_jk connector is very difficult to install and configure. Recently I learned Nginx, and I walked through using the Nginx server as a load balancer. This post is the same as Virtual Host + Apache httpd server + Tomcat + mod_jk connector, but I am going to replace the Apache httpd web server with Nginx. Why Nginx? Nginx is also an open source web server. It's easy to configure and it can handle a huge volume of web traffic with a minimal memory (RAM) footprint. Why do we need a Virtual Host? In the post Virtual Host in Tomcat we discussed how to set up a virtual host in Tomcat. It's a cost effective technique because only one public IP is enough to host multiple domains. If we have a big organization and each department wants to host their website locally on a different machine, then how do we achieve the virtual host concept? In this post we will see how to do this through Nginx. Problem Scenario: In a big organization there are multiple departments, and each department wants to host their website on a different machine, so these websites are accessed locally with different local IP addresses. When we map them to a public address, we face a problem. We have two choices: either purchase as many public addresses, or put one server in front and delegate the requests. We are going to use the 2nd option: we put an Nginx web server in front of all the department servers, so only one public IP is enough. All domain DNS entries point to the Nginx server. Here I am using local DNS (the /etc/hosts file) to simulate the same environment, so I added these entries there. The Nginx server then delegates these requests to the corresponding Tomcat server. This process is completely transparent from the user's (browser's) perspective. Suppose there are 3 departments; each one has its own Tomcat and they deployed their own web application in their respective Tomcat.
(department1, department2, department3). Now the URLs and the corresponding web app names: ramki.com is department1webapp in Tomcat1, blog.ramki.com is department2webapp in Tomcat2, and www.krishnan.com is department3webapp in Tomcat3. In my Virtual Host + Apache httpd server + Tomcat + mod_jk connector post we saw the 4 steps to set up the virtual host: install the Apache httpd web server, install the mod_jk connector, configure the JK connector, and configure the Apache httpd server to apply the virtual host concepts. These steps are complex and very difficult to debug (troubleshoot). Today I am going to use the Nginx server. Install Nginx We can install nginx either through a repository (apt-get, yum) or from source. Here I am building nginx from source. After extracting the compressed file, the command below shows the possible command line options available for compiling:

./configure --help

To configure, use this command:

./configure --prefix=/home/ramki/nginx --with-http_ssl_module

As we can see, --prefix is used to specify where the nginx server should be installed; here I am using my home folder (instead of something like /usr/local/nginx). --with-http_ssl_module specifies that the SSL module (https) should be built in; it's not necessary, but if we want secure web pages then this module is needed. Now we compile the source with make and install nginx based on our configuration with sudo make install. When the installation is done, start nginx:

cd /home/ramki/nginx/sbin
sudo ./nginx

Now open a browser and go to http://localhost to get the nginx default page. To stop nginx, we need to pass the stop signal via the -s option: sudo ./nginx -s stop. Add Virtual Hosts into Nginx If we want to add virtual hosts, we need to add more server blocks to the nginx configuration file (conf/nginx.conf). Each server block represents one virtual host. Each server block looks like:

server {
    listen 80;
    server_name blog.ramki.com;
    location / {
        proxy_pass;
    }
}

Here each server block binds to (listens on) port 80.
A server block only responds when the server_name value (here blog.ramki.com) matches the HTTP Host header field. The location directive matches a pattern against the URL; if it matches, the location block is executed. The proxy_pass directive delegates the request to the back-end server; in this directive we need to mention the back-end server IP and port. Suppose a client makes the request http://blog.ramki.com. This server block matches the requested resource, so the block is executed; the location directive / then matches our URL, so proxy_pass is executed and our request is delegated to the back-end server. Likewise, we need to add another 2 server blocks for the other 2 virtual hosts, changing the IP address and port number accordingly. Now everything works. If you try to access http://ramki.com/department1/, the nginx server selects the matching server block and forwards the request to Tomcat1. In Tomcat1 the department1 context root is present, so it responds. In the same way we can access http://blog.ramki.com/department2/ and http://www.krishnan.com/department3/, and we will see that the URLs work fine. But we don't want to access the website with the extra context root; I want to access http://ramki.com/, so we need to use the rewrite directive to rewrite the URL on the fly in the nginx server. To do that, we modify the server block (adding the rewrite line):

server {
    listen 80;
    server_name ramki.com;

    rewrite_log on;
    error_log logs/error_ramki.log notice;
    rewrite ^/(.*)$ /department1/$1;

    location / {
        proxy_pass;
    }
}

Here we add the rewrite directive, which captures the URL and modifies it (adding department1 to the URL), and we add 2 optional statements for the rewrite log, which help with debugging. We add the same rewrite line in all server blocks (modifying the department). If a client makes the request http://ramki.com/, nginx takes this request and matches it with the rewrite directive, so it changes the URL to http://ramki.com/department1/ and then delegates it to Tomcat1. I hope everything is clear.
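Putting the three virtual hosts together, the complete set of server blocks might look like the sketch below. The back-end addresses are hypothetical examples; substitute the local IPs and ports of your own Tomcat instances:

```
# conf/nginx.conf - one server block per department (addresses are examples)
server {
    listen 80;
    server_name ramki.com;
    rewrite ^/(.*)$ /department1/$1;
    location / {
        proxy_pass http://192.168.1.11:8080;   # Tomcat1
    }
}
server {
    listen 80;
    server_name blog.ramki.com;
    rewrite ^/(.*)$ /department2/$1;
    location / {
        proxy_pass http://192.168.1.12:8080;   # Tomcat2
    }
}
server {
    listen 80;
    server_name www.krishnan.com;
    rewrite ^/(.*)$ /department3/$1;
    location / {
        proxy_pass http://192.168.1.13:8080;   # Tomcat3
    }
}
```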
If anything is missing, please let me know. Screen Cast:  Reference: Virtual Host + Nginx + Tomcat from our JCG partner Rama Krishnan at the Ramki Java Blog blog. ...
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.