Using Servlet 3.0 Async Features in Grails 2.0

I was talking to someone last week about the new support for Servlet 3.0 async features in Grails 2 and realized I didn't know that much about what was available. So I thought I'd try it out and share some examples. The documentation is a little light on the subject, so first some background information.

The primary hook for asynchronous work in the 3.0 spec is the new startAsync method in the javax.servlet.ServletRequest class. This returns an instance of the javax.servlet.AsyncContext interface, which has lifecycle methods such as dispatch and complete, gives you a hook back to the request and response, and lets you register a javax.servlet.AsyncListener. You call the start method, passing in a Runnable to do the asynchronous work. This approach frees up server resources instead of blocking, which increases scalability since you can handle more concurrent requests.

To use this, however, the servlet that handles the request must support async, and so must every filter in the filter chain. The main Grails servlet (GrailsDispatcherServlet) is registered in the 3.0 version of the web.xml template with the async-supported attribute set to true, and Servlet3AsyncWebXmlProcessor adds <async-supported>true</async-supported> to all filter declarations in web.xml after it's generated. So that's covered for you; no web.xml configuration is required on your part.

You also have to be configured to use servlet API 3.0. This is simple to do; just change the value of grails.servlet.version to "3.0" from the default value of "2.5". Note that there is a legacy setting in application.properties with the name app.servlet.version; you should delete this line from your application.properties file since its value is ignored and overridden at runtime by the value from BuildConfig.groovy.

You don't call startAsync on the request from a controller though; call startAsync directly on the controller.
This method is added as a controller method (wired in as part of the controllers' AST transforms from ControllersAsyncApi, by ControllerAsyncTransformer if you're curious). It's important to call the controller's startAsync method because it does all of the standard work but also adds Grails integration. This includes the logic to integrate all registered PersistenceContextInterceptor instances, e.g. to bind a Hibernate Session to the thread, flush when finished, etc., and it also integrates with Sitemesh. This is implemented by returning an instance of GrailsAsyncContext, which adds the extra behavior and delegates to the real instance provided by the container (e.g. org.apache.catalina.core.AsyncContextImpl in Tomcat) for the rest. There are a few other new async-related methods available in the request; they include boolean isAsyncStarted() and AsyncContext getAsyncContext().

I've attached a sample application (see below for the link) to demonstrate these features. There are two parts: a simple controller that looks up stock prices asynchronously, and a chat application. StockController is very simple. It has a single action and suspends to look up the current stock price for the requested stock ticker. It does this asynchronously, but it's typically very fast, so you probably won't see a real difference from the serial approach. But this pattern can be generalized to more time-consuming tasks. Call http://localhost:8080/asynctest/stock/GOOG, http://localhost:8080/asynctest/stock/AAPL, http://localhost:8080/asynctest/stock/VMW, etc. to test it.

The second example is more involved and is based on the "async-request-war" example from the Java EE 6 SDK. This implements a chat application (it was previously implemented with Comet).
The SDK example is one large servlet; I split it up into a controller to do the standard request work and a ChatManager class (registered as a Spring bean in resources.groovy) to handle client registration, message queueing and dispatching, and the associated error handling. The implementation uses a hidden iframe which initiates a long-running request. This never completes and is used to send messages back to each registered client. When you "login" or send a message, the controller handles the request and queues a response message. ChatManager then cycles through each registered AsyncContext and sends JSONP to the iframe, which updates a text area in the main page with incoming messages.

One thing that hung me up for quite a while was that things worked fine with the SDK example but not mine. Everything looked good but messages weren't being received by the iframe. It turns out this is due to the optimizations that are in place to make response rendering as fast as possible. Unfortunately this resulted in flush() calls on the response writer being ignored. Since we need responsive updates and aren't rendering a large page of HTML, I added code to find the real response that's wrapped by the Grails code and write directly to that.

Try it out by opening http://localhost:8080/asynctest/ in two browsers. Once you're "logged in" to both, messages sent will be displayed in both browsers. Some notes about the test application:
- All of the client logic is in web-app/js/chat.js
- grails-app/views/chat/index.gsp is the main page; it creates the text area to display messages and the hidden iframe to stay connected and listen for messages
- This requires a servlet container that implements the 3.0 spec. The version of Tomcat provided by the tomcat plugin and used by run-app does, and all 7.x versions of Tomcat do.
- I ran install-templates and edited web.xml to add metadata-complete="true" to keep Tomcat from scanning all jar files for annotated classes; this can cause an OOME due to a bug that's fixed in version 7.0.26 (currently unreleased)
- Since the chat part is based on older code it uses Prototype, but it could easily use jQuery

You can download the sample application code here.

Reference: Using Servlet 3.0 Async Features in Grails 2.0 from our JCG partner Burt Beckwith at the An Army of Solipsists blog.

The Twitter API Management Model

The objective of this blog post is to explore in detail the patterns and practices Twitter has used in its API management. Twitter comes with a comprehensive set of REST APIs to let client apps talk to Twitter. Let's take a few examples.

If you invoke the following with curl, it returns the 20 most recent statuses, including retweets if they exist, from non-protected users. The public timeline is cached for 60 seconds; requesting it more frequently than that will not return any more data, and will count against your rate limit usage.

curl https://api.twitter.com/1/statuses/public_timeline.json

The example above is an open API, which requires no authentication from the client who accesses it. But keep in mind that it has a throttling policy associated with it: the rate limit. For example, the throttling policy associated with the statuses/public_timeline.json API could say: only allow a maximum of 20 API calls from the same IP address, and so on. This policy is a global policy for this API.

1. Twitter has open APIs, which anonymous users can access.
2. Twitter has globally defined policies per API.

Let's take another sample API: statuses/retweeted_by_user returns the 20 most recent retweets posted by the specified user, given that the user's timeline is not protected. This is also an open API. But what if I want to post to my Twitter account? I could use the API statuses/update. This updates the authenticating user's status, and it is not an open API; only authenticated users can access it.

How do we authenticate ourselves to access the Twitter API? Twitter supported two methods: one way was BasicAuth over HTTPS and the other was OAuth 1.0a. BasicAuth support was removed recently, and now the only remaining way is OAuth 1.0a. As of this writing Twitter doesn't support OAuth 2.0.
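The per-IP rate limit mentioned earlier can be modeled with a simple sliding-window counter. This is an illustrative sketch of the idea only; the class name and the limit/window values are mine, not Twitter's actual implementation:

```python
import time
from collections import defaultdict

class RateLimiter:
    """Allow at most `limit` calls per `window` seconds for each client key (e.g. an IP)."""

    def __init__(self, limit=20, window=60):
        self.limit = limit
        self.window = window
        self.calls = defaultdict(list)  # client key -> timestamps of recent calls

    def allow(self, client, now=None):
        now = time.time() if now is None else now
        # drop timestamps that have fallen out of the window
        recent = [t for t in self.calls[client] if now - t < self.window]
        self.calls[client] = recent
        if len(recent) >= self.limit:
            return False  # over the limit: reject, and don't record the call
        recent.append(now)
        return True

limiter = RateLimiter(limit=2, window=60)
print(limiter.allow("1.2.3.4", now=0))   # True
print(limiter.allow("1.2.3.4", now=1))   # True
print(limiter.allow("1.2.3.4", now=2))   # False: third call inside the window
print(limiter.allow("1.2.3.4", now=61))  # True again: old calls expired
```

A global policy per API, as described above, would simply key the counter on (API, client) pairs instead of the client alone.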
The reason I need the APIs exposed by Twitter is that I have external applications that want to talk to Twitter, and these applications use the Twitter APIs for communication. If I am the application developer, the following are the steps I need to follow to build my application to access protected APIs from Twitter.

First, the application developer needs to log in to Twitter and create an Application. Here, the Application is an abstraction for a set of protected APIs Twitter exposes outside. Each Application you create needs to define the level of access it needs to those underlying APIs. There are three values to pick from:
- Read only
- Read and Write
- Read, Write and Access direct messages

Let's see what these values mean. If you pick 'Read only', a user who is going to use your Application needs to give it permission to read. In other words, the user will be giving it access to invoke the APIs defined here which start with GET against his Twitter account. The only exception is the Direct Messages APIs: with Read only, your Application won't have access to a given user's Direct Messages, even for GETs. This is valid for you, the application developer, as well: if you want to give your application access to your own Twitter account, there too you must grant the application the required rights.

If you pick Read and Write, a user who is going to use your application needs to give it permission to read and write. In other words, the user will be giving it access to invoke the APIs defined here which start with GET or POST against his Twitter account. The only exception is again the Direct Messages APIs: with Read and Write, your application won't have access to a given user's Direct Messages, even for GETs or POSTs.

3. Twitter has an Application concept that groups APIs together.
4. Each API declares the actions it supports: GET or POST.
5. Each Application has a required access level for it to function [Read only, Read and Write, Read Write and Direct Messages].

Now let's dig into the runtime aspects of this. I am going to skip the OAuth-related details here purposely, for clarity. For our Application to access the Twitter APIs it needs a key. Let's name it API_KEY [if you know OAuth, this is equivalent to the access_token]. Say I want to use this Application. First I need to go to Twitter and generate an API_KEY to access it. Although there are multiple APIs wrapped in the Application, I only need a single API_KEY.

6. An API_KEY is per user, per Application [a collection of APIs].

When I generate the API_KEY, I can specify what level of access I am going to give to that API_KEY: Read Only, Read & Write, or Read, Write & Direct Message. Based on the access level, I can generate my API_KEY.

7. An API_KEY carries permissions to the underlying APIs.

Now I give my API_KEY to the Application. Say it tries to POST to my Twitter timeline. That request should also include the API_KEY. Once Twitter gets the request, looking at the API_KEY it will identify that the Application is trying to POST to 'my' timeline. It will also check whether the API_KEY has Read & Write permissions; if so, it will let the Application post to my Twitter timeline. If the Application tries to read my Direct Messages using the same API_KEY I gave it, then Twitter will detect that the API_KEY doesn't have the Read, Write & Direct Message permission and the request will fail.

Even in the above case, if the Application tries to post to the application developer's own Twitter account, there too it needs an API_KEY from the application developer, which he can get from Twitter. Once a user grants access to an Application via an API_KEY, the application can access his account for the entire lifetime of the key. But if the user wants to revoke the key, Twitter provides a way to do that as well.
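The permission checks just described reduce to comparing the key's access level against the minimum level a call requires. A toy model of those rules, sketched in Python (the function and constant names are mine, not Twitter's):

```python
# Access levels, ordered from least to most privileged.
READ = 1
READ_WRITE = 2
READ_WRITE_DM = 3

def required_level(http_method, is_direct_message):
    """Minimum access level an API_KEY needs for a given call."""
    if is_direct_message:
        return READ_WRITE_DM  # DM APIs need the top level, even for GETs
    return READ if http_method == "GET" else READ_WRITE

def is_allowed(api_key_level, http_method, is_direct_message=False):
    return api_key_level >= required_level(http_method, is_direct_message)

# A Read & Write key can POST a status update...
print(is_allowed(READ_WRITE, "POST"))                         # True
# ...but cannot read Direct Messages, not even with a GET:
print(is_allowed(READ_WRITE, "GET", is_direct_message=True))  # False
```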
Basically, when you go here, it displays all the Applications you have given permission to access your Twitter account; if you want, you can revoke access from there.

8. Twitter lets users revoke API_KEYs.

Another interesting thing is how Twitter does API versioning. If you look carefully at the URLs, you will notice that the version number is included in the URL itself: https://api.twitter.com/1/statuses/public_timeline.json. But it does not let Application developers pick which versions of the APIs they want to use.

9. Twitter tracks API versions at runtime.
10. Twitter does not let Application developers pick API versions at design time.

Twitter also has a way of monitoring the status of the API (the original post shows a screenshot of it).

11. Twitter does API monitoring at runtime.

Reference: The Twitter API Management Model from our JCG partner Prabath Siriwardena at the Facile Login blog.

JAXB Custom Binding – Java.util.Date / Spring 3 Serialization

JAXB can handle java.util.Date serialization, but it expects the following format: "yyyy-MM-ddTHH:mm:ss". What if you need to format the date object in another format? I had the same issue when I was working with Spring MVC 3 and the Jackson JSON Processor, and recently I faced the same issue working with Spring MVC 3 and JAXB for XML serialization. Let's dig into the issue.

Problem: I have the following Java bean which I want to serialize to XML using Spring MVC 3:

package com.loiane.model;

import java.util.Date;

public class Company {

    private int id;
    private String company;
    private double price;
    private double change;
    private double pctChange;
    private Date lastChange;

    // getters and setters
}

And I have another object which is going to wrap the POJO above:

package com.loiane.model;

import java.util.List;

import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name="companies")
public class Companies {

    @XmlElement(required = true)
    private List<Company> list;

    public void setList(List<Company> list) {
        this.list = list;
    }
}

In my Spring controller, I'm going to return a list of Company through the @ResponseBody annotation, which is going to serialize the object automatically with JAXB:

@RequestMapping(value="/company/view.action")
public @ResponseBody Companies view() throws Exception {}

When I call the controller method, this is what it returns to the view:

<companies>
  <list>
    <change>0.02</change>
    <company>3m Co</company>
    <id>1</id>
    <lastChange>2011-09-01T00:00:00-03:00</lastChange>
    <pctChange>0.03</pctChange>
    <price>71.72</price>
  </list>
  <list>
    <change>0.42</change>
    <company>Alcoa Inc</company>
    <id>2</id>
    <lastChange>2011-09-01T00:00:00-03:00</lastChange>
    <pctChange>1.47</pctChange>
    <price>29.01</price>
  </list>
</companies>

Note the date format. It is not the format I expect it to return.
I need to serialize the date in the following format: "MM-dd-yyyy".

Solution: I need to create a class extending XmlAdapter and override the marshal and unmarshal methods, formatting the date as needed:

package com.loiane.util;

import java.text.SimpleDateFormat;
import java.util.Date;

import javax.xml.bind.annotation.adapters.XmlAdapter;

public class JaxbDateSerializer extends XmlAdapter<String, Date> {

    private SimpleDateFormat dateFormat = new SimpleDateFormat("MM-dd-yyyy");

    @Override
    public String marshal(Date date) throws Exception {
        return dateFormat.format(date);
    }

    @Override
    public Date unmarshal(String date) throws Exception {
        return dateFormat.parse(date);
    }
}

And in my Java bean class, I simply need to add the @XmlJavaTypeAdapter annotation on the getter of the date property:

package com.loiane.model;

import java.util.Date;

import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;

import com.loiane.util.JaxbDateSerializer;

public class Company {

    private int id;
    private String company;
    private double price;
    private double change;
    private double pctChange;
    private Date lastChange;

    @XmlJavaTypeAdapter(JaxbDateSerializer.class)
    public Date getLastChange() {
        return lastChange;
    }

    // other getters and setters
}

If we call the controller method again, it returns the following XML:

<companies>
  <list>
    <change>0.02</change>
    <company>3m Co</company>
    <id>1</id>
    <lastChange>09-01-2011</lastChange>
    <pctChange>0.03</pctChange>
    <price>71.72</price>
  </list>
  <list>
    <change>0.42</change>
    <company>Alcoa Inc</company>
    <id>2</id>
    <lastChange>09-01-2011</lastChange>
    <pctChange>1.47</pctChange>
    <price>29.01</price>
  </list>
</companies>

Problem solved! Happy Coding!

Reference: JAXB Custom Binding – java.util.Date / Spring 3 Serialization from our JCG partner Loiane Groner at Loiane Groner's blog.

Solr: Creating a spellchecker

In a previous post I talked about how the Solr spellchecker works and then showed you some test results of its performance. Now we are going to see another approach to spellchecking. This method, like many others, uses a two-step procedure: a rather fast "candidate word" selection, and then a scoring of those words. We are going to select different methods from the ones that Solr uses and test their performance. Our main objective will be effectiveness in the correction, and secondarily, speed in producing results. We can tolerate slightly slower performance given that we are gaining correctness in the results.

Our strategy will be to use a special Lucene index and query it using fuzzy queries to get a candidate list. Then we are going to rank the candidates with a Python script (which can easily be transformed into a Solr spellchecker subclass if we get better results).

Candidate selection

Fuzzy queries have historically been considered slow in relation to other queries but, as they were optimized in version 1.4, they are a good choice for the first part of our algorithm. So the idea is very simple: we are going to construct a Lucene index where every document is a dictionary word. When we have to correct a misspelled word we do a simple fuzzy query on that word and get a list of results. The results will be words similar to the one we provided (i.e. with a small edit distance). I found that with approximately 70 candidates we can get excellent results. With fuzzy queries we are covering all the typos because, as I said in the previous post, most of the typos are of edit distance 1 with respect to the correct word.
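Edit distance here is the Damerau-Levenshtein measure that the scoring script relies on later. As a reference for what "edit distance 1" covers, here is a minimal sketch of the restricted variant (insertions, deletions, substitutions, and adjacent transpositions); this helper is mine, not part of Solr or Lucene:

```python
def damerau_levenshtein(a, b):
    """Restricted Damerau-Levenshtein distance: insertions, deletions,
    substitutions, and transpositions of adjacent characters."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i  # deleting all of a's first i characters
    for j in range(len(b) + 1):
        d[0][j] = j  # inserting all of b's first j characters
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

print(damerau_levenshtein("houze", "house"))  # 1: one substitution
print(damerau_levenshtein("hosue", "house"))  # 1: one adjacent transposition
```

Both sample typos are distance 1, which is why a fuzzy query around the misspelled word catches most real-world mistakes.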
But although this is the most common error people make while typing, there are other kinds of errors. We can find three types of misspellings [Kukich]:
- Typographic errors
- Cognitive errors
- Phonetic errors

Typographic errors are the typos: people know the correct spelling but make a motor coordination slip when typing. Cognitive errors are those caused by a lack of knowledge on the part of the person. Finally, phonetic errors are a special case of cognitive errors: words that sound correct but are orthographically incorrect.

We already covered typographic errors with the fuzzy query, but we can also do something for the phonetic errors. Solr has a phonetic filter in its analysis package that, among others, offers the double metaphone algorithm. In the same way we perform a fuzzy query to find similar words, we can index the metaphone equivalent of the word and perform a fuzzy query on it. We must manually obtain the metaphone equivalent of the word (because the Lucene query parser doesn't analyze fuzzy queries) and construct a fuzzy query with that word. In a few words, for the candidate selection we construct an index with the following Solr schema:

<fieldType name="spellcheck_text" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
  <analyzer type="index">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.PhoneticFilterFactory" encoder="DoubleMetaphone" maxCodeLength="20" inject="false"/>
  </analyzer>
</fieldType>

<field name="original_word" type="string" indexed="true" stored="true" multiValued="false"/>
<field name="analyzed_word" type="spellcheck_text" indexed="true" stored="true" multiValued="false"/>
<field name="freq" type="tfloat" stored="true" multiValued="false"/>

As you can see, the analyzed_word field contains the "sounds like" of the word. The freq field will be used in the next phase of the algorithm; it is simply the frequency of the term in the language.
How can we estimate the frequency of a word in a language? By counting the frequency of the word in a big text corpus. In this case the source of the terms is Wikipedia, and we are using the TermsComponent of Solr to count how many times each term appears in it. But Wikipedia is written by ordinary people who make errors! How can we trust it as a "correct dictionary"? We make use of the collective knowledge of the people who write Wikipedia. This dictionary of terms extracted from Wikipedia has a lot of terms, over 1,800,000, and most of them aren't even words. It is likely that words with a high frequency are correctly spelled in Wikipedia. This approach of building a dictionary from a big corpus of words and considering the most frequent ones correct isn't new. In [Cucerzan] they use the same concept, but using query logs to build the dictionary. It appears that Google's "Did you mean" uses a similar concept.

We can add little optimizations here. I have found that we can remove some words and still get good results. For example, I removed words with frequency 1, and words that begin with numbers. We can continue removing words based on other criteria, but we'll leave it at that. So the procedure for building the index is simple: we extract all the terms from the Wikipedia index via the TermsComponent of Solr, along with their frequencies, and then create an index in Solr using SolrJ.

Candidate ranking

Now, the ranking of the candidates. For the second phase of the algorithm we are going to make use of information theory, in particular the noisy channel model. The noisy channel model, applied to this case, assumes that the human knows the correct spelling of a word but some noise in the channel introduces the error, and as the result we get another, misspelled, word. We intuitively know that it is very unlikely that we get 'sarasa' when trying to type 'house', so the noisy channel model introduces some formality to finding how probable a given error was.
For example, we have misspelled 'houze' and we want to know which is the most likely word that we wanted to type. To accomplish that we have a big dictionary of possible words, but not all of them are equally probable. We want to obtain the word with the highest probability of having been the one we intended to type. In mathematics that is called conditional probability: given that we typed 'houze', how high is the probability of each of the correct words to be the word that we intended? The notation of conditional probability is P('house'|'houze'), which stands for the probability of 'house' given 'houze'.

This problem can be seen from two perspectives: we may think that the most common words are more probable; for example, 'house' is more probable than 'hose' because the former is a more common word. On the other hand, we also intuitively think that 'house' is more probable than 'photosynthesis' because of the big difference between the two words. Both of these aspects are formally deduced by Bayes' theorem:

P('house'|'houze') = P('houze'|'house') · P('house') / P('houze')

We have to maximize this probability, and to do that we only have one parameter: the correct candidate word ('house' in the case shown). For that reason the probability of the misspelled word will be constant and we are not interested in it. The formula reduces to maximizing

P('houze'|'house') · P('house')

To add more structure to this, scientists have given names to these two factors. The P('houze'|'house') factor is the error model (or channel model) and relates to how probable it is that the channel introduces this particular misspelling when trying to write the second word. The second term, P('house'), is called the language model and gives us an idea of how common a word is in the language.

Up to this point I have only introduced the mathematical aspects of the model. Now we have to come up with a concrete model of these two probabilities. For the language model we can use the frequency of the term in the text corpus.
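Numerically, the reduced formula picks the candidate maximizing P(misspelling|word) · P(word). A tiny worked example for 'houze', with invented probabilities (all the numbers below are made up for illustration):

```python
# Toy noisy-channel ranking for the misspelling 'houze'.
# error_model[w] plays the role of P('houze'|w): how likely the channel
# turns w into 'houze'. language_model[w] plays the role of P(w).
error_model = {"house": 0.05, "hose": 0.03, "photosynthesis": 1e-9}
language_model = {"house": 3e-4, "hose": 2e-5, "photosynthesis": 1e-6}

def best_candidate(candidates):
    # P('houze') is the same for every candidate, so it can be ignored.
    return max(candidates, key=lambda w: error_model[w] * language_model[w])

print(best_candidate(["house", "hose", "photosynthesis"]))  # house
```

'house' wins on both factors: the channel is likely to corrupt it into 'houze', and it is a common word.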
I have found empirically that it works much better to use the logarithm of the frequency rather than the frequency alone. Maybe this is because we want to reduce the weight of the very frequent terms more than the less frequent ones, and the logarithm does just that.

There is not only one way to construct a channel model; many different ideas have been proposed. We are going to use a simple one based on the Damerau-Levenshtein distance. But I also found that the fuzzy query of the first phase does a good job in finding the candidates: it gives the correct word in the first place in more than half of the test cases with some datasets. So the channel model will be a combination of the Damerau-Levenshtein distance and the score that Lucene created for the terms of the fuzzy query. The ranking formula will be:

score = distance / (fuzzy · logFreq)

where lower scores rank better (the candidate list is sorted ascending). I programmed a small script (Python) that does all that was previously said:

from urllib import urlopen
import doubleMethaphone
import levenshtain
import json

server = "http://benchmarks:8983/solr/testSpellMeta/"

def spellWord(word, candidateNum = 70):
    # fuzzy + soundslike
    metaphone = doubleMethaphone.dm(word)
    query = "original_word:%s~ OR analyzed_word:%s~" % (word, metaphone[0])

    if metaphone[1] != None:
        query = query + " OR analyzed_word:%s~" % metaphone[1]

    doc = urlopen(server + "select?rows=%d&wt=json&fl=*,score&omitHeader=true&q=%s" % (candidateNum, query)).read()
    response = json.loads(doc)
    suggestions = response['response']['docs']

    if len(suggestions) > 0:
        # score
        scores = [(sug['original_word'], scoreWord(sug, word)) for sug in suggestions]
        scores.sort(key=lambda candidate: candidate[1])
        return scores
    else:
        return []

def scoreWord(suggestion, misspelled):
    distance = float(levenshtain.dameraulevenshtein(suggestion['original_word'], misspelled))
    if distance == 0:
        distance = 1000
    fuzzy = suggestion['score']
    logFreq = suggestion['freq']

    return distance / (fuzzy * logFreq)

From the previous listing I have to make some remarks.
In lines 2 and 3 we use third-party libraries for the Levenshtein distance and metaphone algorithms. In line 8 we are collecting a list of 70 candidates. This particular number was found empirically: with more candidates the algorithm is slower, and with fewer it is less effective. We are also excluding the misspelled word itself from the candidates list in line 30. As we used Wikipedia as our source, it is common for the misspelled word to be found in the dictionary, so if the Levenshtein distance is 0 (same word) we set its distance to 1000.

Tests

I ran some tests with this algorithm. The first one uses the dataset that Peter Norvig used in his article. I found the correct suggestion of the word in the first position approximately 80% of the time! That is a really good result. Norvig, with the same dataset (but with a different algorithm and training set), got 67%. Now let’s repeat some of the tests of the previous post to see the improvement. The following table shows the results.

Test set            % Solr    % new Solr   Time [seconds]   New time [seconds]   Improvement   Time loss
FAWTHROP1DAT.643    45,61%    81,91%       31,50            74,19                79,58%        135,55%
batch0.tab          28,70%    56,34%       21,95            47,05                96,30%        114,34%
SHEFFIELDDAT.643    60,42%    86,24%       19,29            35,12                42,75%        82,06%

We can see that we get very good improvements in the effectiveness of the correction, but it takes about twice the time.

Future work

How can we improve this spellchecker? Well, studying the candidates list, it can be found that the correct word is generally (95% of the time) contained in it, so all our efforts should be aimed at improving the scoring algorithm. We have many ways of improving the channel model; several papers show that calculating more sophisticated distances, weighting the different letter transformations according to language statistics, can give us a better measure.
For example, we know that writing ‘houpe’ is less probable than writing ‘houze’. For the language model, great improvements can be obtained by adding more context to the word. For example, if we misspelled ‘nouse’ it is very difficult to tell whether the correct word is ‘house’ or ‘mouse’. But if we add more words, “paint my nouse”, it is evident that the word we were looking for was ‘house’ (unless you have strange habits involving rodents). These are also called n-grams (of words in this case, instead of letters). Google has made available a big collection of n-grams for download, with their frequencies.

Last but not least, performance can be improved by porting the script to Java, since part of the algorithm was in Python.

Bye!

As an update for all of you interested: Robert Muir told me on the Solr User list that there is a new spellchecker, DirectSpellChecker, that was in trunk at the time and should now be part of Solr 3.1. It uses a similar technique to the one I presented in this entry, without the performance losses.

References

[Kukich] Karen Kukich – Techniques for automatically correcting words in text – ACM Computing Surveys – Volume 24 Issue 4, Dec. 1992
[Cucerzan] S. Cucerzan and E. Brill – Spelling correction as an iterative process that exploits the collective knowledge of web users – July 2004
Peter Norvig – How to Write a Spelling Corrector

Reference: Creating a spellchecker with Solr from our JCG partner Emmanuel Espina at the emmaespina blog....

Registering entity types with OpenJPA programmatically

I’ve just started work on an OpenJPA objectstore for Isis. In the normal scheme of things, one would register the entity types within the persistence.xml file. However, Isis is a framework that builds its own metamodel and can figure out for itself which classes constitute entities. I therefore didn’t want to force the developer to repeat themselves, so the puzzle became how to register the entity types programmatically within the Isis code.

It turns out to be pretty simple, if a little ugly. OpenJPA allows implementations of certain key components to be defined programmatically; these are specified in a properties map that is then passed through to javax.persistence.Persistence.createEntityManagerFactory(null, props). But it also supports a syntax that can be used to initialize those components through setter injection. In my case the component of interest is the openjpa.MetaDataFactory. At one point I thought I’d be writing my own implementation, but it turns out that the standard implementation does what I need, because it allows the types to be injected through its setTypes(List<String>) mutator. The list of strings is passed into that property as a ;-delimited list. So, here’s what I’ve ended up with:

final Map<String, String> props = Maps.newHashMap();

final String typeList = entityTypeList();
props.put("openjpa.MetaDataFactory",
    "org.apache.openjpa.persistence.jdbc.PersistenceMappingFactory(types=" + typeList + ")");

// ... then add in regular properties such as
// openjpa.ConnectionURL, openjpa.ConnectionDriverName etc...

entityManagerFactory = Persistence.createEntityManagerFactory(null, props);

where entityTypeList() in my case looks something like:

private String entityTypeList() {
    final StringBuilder buf = new StringBuilder();
    // loop thru Isis' metamodel looking for types that have been annotated using @Entity
    final Collection<ObjectSpecification> allSpecifications =
        getSpecificationLoader().allSpecifications();
    for (ObjectSpecification objSpec : allSpecifications) {
        if (objSpec.containsFacet(JpaEntityFacet.class)) {
            final String fqcn = objSpec.getFullIdentifier();
            buf.append(fqcn).append(";");
        }
    }
    final String typeList = buf.toString();
    return typeList;
}

Comments welcome, as ever.

Reference: Registering entity types with OpenJPA programmatically from our JCG partner Dan Haywood at the Dan Haywood blog....

NetBeans 7.2 Introduces TestNG

One of the advantages of code generation is the ability to see how a specific language feature or framework is used. As I discussed in the post NetBeans 7.2 beta: Faster and More Helpful, NetBeans 7.2 beta provides TestNG integration. I did not elaborate further in that post, other than a single reference to the feature, because I wanted to devote this post to the subject. I use this post to demonstrate how NetBeans 7.2 can help a developer new to TestNG start using this alternative (to JUnit) test framework.

NetBeans 7.2’s New File wizard makes it easy to create an empty TestNG test case. This is demonstrated in the following screen snapshots, kicked off by using New File | Unit Tests (note that “New File” is available under the “File” drop-down menu or by right-clicking in the Projects window). Running the TestNG test case creation as shown above leads to the following generated test code.

TestNGDemo.java (Generated by NetBeans 7.2)

package dustin.examples;

import org.testng.annotations.AfterMethod;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;
import org.testng.Assert;

/**
 *
 * @author Dustin
 */
public class TestNGDemo {

    public TestNGDemo() {
    }

    @BeforeClass
    public void setUpClass() {
    }

    @AfterClass
    public void tearDownClass() {
    }

    @BeforeMethod
    public void setUp() {
    }

    @AfterMethod
    public void tearDown() {
    }

    // TODO add test methods here.
    // The methods must be annotated with annotation @Test. For example:
    //
    // @Test
    // public void hello() {}
}

The test generated by NetBeans 7.2 includes comments indicating how test methods are added and annotated (similar to modern versions of JUnit). The generated code also shows some annotations for overall test case set up and tear down and for per-test set up and tear down (the annotations are similar to JUnit’s).
NetBeans identifies import statements that are not yet used at this point (import org.testng.annotations.Test; and import org.testng.Assert;) but are likely to be used, and so they have been included in the generated code. I can easily add a test method to this generated test case. The following code snippet is a test method using TestNG.

testIntegerArithmeticMultiplyIntegers()

@Test
public void testIntegerArithmeticMultiplyIntegers() {
    final IntegerArithmetic instance = new IntegerArithmetic();
    final int[] integers = {4, 5, 6};
    final int expectedProduct = 2 * 3 * 4 * 5 * 6;
    final int product = instance.multiplyIntegers(2, 3, integers);
    assertEquals(product, expectedProduct);
}

This, of course, looks very similar to the JUnit equivalent I used against the same IntegerArithmetic class in the posts Improving On assertEquals with JUnit and Hamcrest and JUnit’s Built-in Hamcrest Core Matcher Support. The following screen snapshot shows the output in NetBeans 7.2 beta from right-clicking on the test case class and selecting “Run File” (Shift+F6). The text output of the TestNG run provided in the NetBeans 7.2 beta is reproduced next.
[TestNG] Running:
  Command line suite

[VerboseTestNG] RUNNING: Suite: "Command line test" containing "1" Tests (config: null)
[VerboseTestNG] INVOKING CONFIGURATION: "Command line test" - @BeforeClass dustin.examples.TestNGDemo.setUpClass()
[VerboseTestNG] PASSED CONFIGURATION: "Command line test" - @BeforeClass dustin.examples.TestNGDemo.setUpClass() finished in 33 ms
[VerboseTestNG] INVOKING CONFIGURATION: "Command line test" - @BeforeMethod dustin.examples.TestNGDemo.setUp()
[VerboseTestNG] PASSED CONFIGURATION: "Command line test" - @BeforeMethod dustin.examples.TestNGDemo.setUp() finished in 2 ms
[VerboseTestNG] INVOKING: "Command line test" - dustin.examples.TestNGDemo.testIntegerArithmeticMultiplyIntegers()
[VerboseTestNG] PASSED: "Command line test" - dustin.examples.TestNGDemo.testIntegerArithmeticMultiplyIntegers() finished in 12 ms
[VerboseTestNG] INVOKING CONFIGURATION: "Command line test" - @AfterMethod dustin.examples.TestNGDemo.tearDown()
[VerboseTestNG] PASSED CONFIGURATION: "Command line test" - @AfterMethod dustin.examples.TestNGDemo.tearDown() finished in 1 ms
[VerboseTestNG] INVOKING CONFIGURATION: "Command line test" - @AfterClass dustin.examples.TestNGDemo.tearDownClass()
[VerboseTestNG] PASSED CONFIGURATION: "Command line test" - @AfterClass dustin.examples.TestNGDemo.tearDownClass() finished in 1 ms
[VerboseTestNG]
[VerboseTestNG] ===============================================
[VerboseTestNG] Command line test
[VerboseTestNG] Tests run: 1, Failures: 0, Skips: 0
[VerboseTestNG] ===============================================

===============================================
Command line suite
Total tests run: 1, Failures: 0, Skips: 0
===============================================

Deleting directory C:\Users\Dustin\AppData\Local\Temp\dustin.examples.TestNGDemo
test:
BUILD SUCCESSFUL (total time: 2 seconds)

The above example shows how easy it is to start using TestNG, especially if one is moving to TestNG from JUnit and is using NetBeans 7.2 beta.
Of course, there is much more to TestNG than this, but learning a new framework is typically most difficult at the very beginning and NetBeans 7.2 gets one off to a fast start. Reference: NetBeans 7.2 Introduces TestNG from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

Make your Spring Security @Secured annotations more DRY

Recently a user on the Grails User mailing list wanted to know how to reduce repetition when defining @Secured annotations. The rules for specifying attributes in Java annotations are pretty restrictive, so I couldn’t see a direct way to do what he was asking. Using Groovy doesn’t really help here, since for the most part annotations in a Groovy class are pretty much the same as in Java (except for the syntax for array values). Groovy now supports closures in annotations, but this would require a code change in the plugin.

But then I thought about some work Jeff Brown did recently in the cache plugin. Spring’s cache abstraction API includes three annotations: @Cacheable, @CacheEvict, and @CachePut. We were thinking ahead about supporting more configuration options than these annotations allow, but since you can’t subclass annotations, we decided to use an AST transformation to find our versions of these annotations (currently with the same attributes as the Spring annotations) and convert them to valid Spring annotations. So I looked at Jeff’s code, and it ended up being the basis for a fix for this problem.

It’s not possible to use code to externalize the authority lists because you can’t control the compilation order. So I ended up with a solution that isn’t perfect but works: I look for a properties file in the project root (roles.properties). The format is simple: the keys are names for each authority list, and the values are the comma-delimited lists of authority names.
Here’s an example:

admins=ROLE_ADMIN, ROLE_SUPERADMIN
switchUser=ROLE_SWITCH_USER
editors=ROLE_EDITOR, ROLE_ADMIN

These keys are the values you use for the new @Authorities annotation:

package grails.plugins.springsecurity.annotation;

import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.codehaus.groovy.transform.GroovyASTTransformationClass;

/**
 * @author Burt Beckwith
 */
@Target({ElementType.FIELD, ElementType.METHOD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Inherited
@Documented
@GroovyASTTransformationClass(
    "grails.plugins.springsecurity.annotation.AuthoritiesTransformation")
public @interface Authorities {
    /**
     * The property file key; the property value will be a
     * comma-delimited list of role names.
     * @return the key
     */
    String value();
}

For example, here’s a controller using the new annotation:

@Authorities('admins')
class SecureController {

    @Authorities('editors')
    def someAction() { ... }
}

This is the equivalent of this controller (and if you decompile the one with @Authorities you’ll see both annotations):

@Secured(['ROLE_ADMIN', 'ROLE_SUPERADMIN'])
class SecureController {

    @Secured(['ROLE_EDITOR', 'ROLE_ADMIN'])
    def someAction() { ... }
}

The AST transformation class looks for @Authorities annotations, loads the properties file, and adds a new @Secured annotation (the @Authorities annotation isn’t removed) using the role names specified in the properties file:

package grails.plugins.springsecurity.annotation;

import grails.plugins.springsecurity.Secured;

import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

import org.codehaus.groovy.ast.ASTNode;
import org.codehaus.groovy.ast.AnnotatedNode;
import org.codehaus.groovy.ast.AnnotationNode;
import org.codehaus.groovy.ast.ClassNode;
import org.codehaus.groovy.ast.expr.ConstantExpression;
import org.codehaus.groovy.ast.expr.Expression;
import org.codehaus.groovy.ast.expr.ListExpression;
import org.codehaus.groovy.control.CompilePhase;
import org.codehaus.groovy.control.SourceUnit;
import org.codehaus.groovy.transform.ASTTransformation;
import org.codehaus.groovy.transform.GroovyASTTransformation;
import org.springframework.util.StringUtils;

/**
 * @author Burt Beckwith
 */
@GroovyASTTransformation(phase=CompilePhase.CANONICALIZATION)
public class AuthoritiesTransformation implements ASTTransformation {

    protected static final ClassNode SECURED = new ClassNode(Secured.class);

    public void visit(ASTNode[] astNodes, SourceUnit sourceUnit) {
        try {
            ASTNode firstNode = astNodes[0];
            ASTNode secondNode = astNodes[1];
            if (!(firstNode instanceof AnnotationNode) ||
                    !(secondNode instanceof AnnotatedNode)) {
                throw new RuntimeException("Internal error: wrong types: " +
                    firstNode.getClass().getName() + " / " +
                    secondNode.getClass().getName());
            }

            AnnotationNode rolesAnnotationNode = (AnnotationNode) firstNode;
            AnnotatedNode annotatedNode = (AnnotatedNode) secondNode;

            AnnotationNode secured = createAnnotation(rolesAnnotationNode);
            if (secured != null) {
                annotatedNode.addAnnotation(secured);
            }
        }
        catch (Exception e) {
            // TODO
            e.printStackTrace();
        }
    }

    protected AnnotationNode createAnnotation(AnnotationNode rolesNode) throws IOException {
        Expression value = rolesNode.getMembers().get("value");
        if (!(value instanceof ConstantExpression)) {
            // TODO
            System.out.println(
                "annotation @Authorities value isn't a ConstantExpression: " + value);
            return null;
        }

        String fieldName = value.getText();
        String[] authorityNames = getAuthorityNames(fieldName);
        if (authorityNames == null) {
            return null;
        }

        return buildAnnotationNode(authorityNames);
    }

    protected AnnotationNode buildAnnotationNode(String[] names) {
        AnnotationNode securedAnnotationNode = new AnnotationNode(SECURED);
        List<Expression> nameExpressions = new ArrayList<Expression>();
        for (String authorityName : names) {
            nameExpressions.add(new ConstantExpression(authorityName));
        }
        securedAnnotationNode.addMember("value", new ListExpression(nameExpressions));
        return securedAnnotationNode;
    }

    protected String[] getAuthorityNames(String fieldName) throws IOException {
        Properties properties = new Properties();
        File propertyFile = new File("roles.properties");
        if (!propertyFile.exists()) {
            // TODO
            System.out.println("Property file roles.properties not found");
            return null;
        }

        properties.load(new FileReader(propertyFile));

        Object value = properties.getProperty(fieldName);
        if (value == null) {
            // TODO
            System.out.println("No value for property '" + fieldName + "'");
            return null;
        }

        List<String> names = new ArrayList<String>();
        String[] nameArray = StringUtils.commaDelimitedListToStringArray(value.toString());
        for (String auth : nameArray) {
            auth = auth.trim();
            if (auth.length() > 0) {
                names.add(auth);
            }
        }

        return names.toArray(new String[names.size()]);
    }
}

I’ll probably include this in the plugin at some point – I created a JIRA issue as a reminder – but for now you can just copy these two classes into your application’s src/java folder and create a roles.properties file in the project root.
Any time you want to add or remove an entry or add or remove a role name from an entry, update the properties file, run grails clean and grails compile to be sure that the latest values are used. Reference: Make your Spring Security @Secured annotations more DRY from our JCG partner Burt Beckwith at the An Army of Solipsists blog....

Frustrations and aspirations of a software craftsman

For a while I’ve been thinking about what makes me like or dislike a project. Having spent a very big part of my career working for consultancy companies, I was exposed to many different environments, industries, team sizes, processes and technologies. There were projects that I absolutely loved, some projects were OK and some were a real pain. There were even a couple of times in my career when I questioned whether the choice of being a software craftsman and keeping walking the long road was the best thing for my sanity.

What makes me dislike a project? Well, there are many factors. Here are just a few, though this is far from a complete list:

- Bureaucracy is something that can be really frustrating. That includes process for the sake of process, innumerable cycles of approvals, tortuous and long test and deployment cycles, pointless documentation, and all that anti-agile stuff.
- Old technologies, or the “wrong” technology for the job, are always demotivating. We love new toys. There is nothing more annoying than having the technology stack imposed on the team. “You must use these tools from Oracle or IBM. But, hey, don’t look like that. You have support if you need it.”
- Lack of autonomy and credibility. “You are just the developers here. You don’t make decisions. You do what you are told to do. There are much smarter people here to worry about the _real_ problems. And by the way, you don’t have admin rights to your PC and you can’t access a few websites either.”
- Uninteresting domain. It’s always difficult to find motivation to build great software if you don’t like what the software does or don’t really believe in the business idea.
- Demotivated people. How can we find motivation and have team spirit when your colleagues’ attitude is: “Oh, I just turn up to work, keep my head down and do what I’m told. If something goes wrong, it’s not my fault.”
- Finger pointing and a highly competitive environment, where no one plays as a team. This is an environment where everyone wants to be the boss; they are always looking for a scapegoat, and the less work they do and the more they delegate, the better. If something goes wrong, it will never be their fault. If something goes well, they take all the credit.
- Arrogant and unskilled people. Arrogance is often used as a self-defence mechanism to hide a person’s lack of skills. “I don’t need to read any books. I think all these new technologies and methodologies are crap. I’ve been doing this for years. I know what is best.”
- The software factory concept. “We need to go faster. Let’s throw more developers at it. Which ones? Doesn’t matter. Just hire some monkeys.”
- Mortgage-Driven Development.
- Project managers who think they are the most important member of the team.
- Very deep hierarchies.
- You can’t help those who don’t want to be helped.

I really could go on forever here….

So what is really the problem? When I first thought about all these items, I realised that almost all the things that make me dislike a project are related to people. Yes, people. Among the few exceptions are an uninteresting domain and old technologies. So even if I make sure that I don’t work on projects related to subjects I have no interest in, and that use the latest technologies, the people involved may still make it very frustrating. Have you ever been on a project that you thought had everything to be a great project, but for some reason was a pain?

After this analysis, things were not looking good, so I decided to look at all the projects I really enjoyed. That was when I realised that in some of them we didn’t use the latest technologies, and in a few of them I was not exactly passionate about the domain either. One or two were even quite bureaucratic. So why did I enjoy them? What was in there that made me put them among the projects I liked the most? Once again, the answer was “people”.
The good projects and what I would always like to find

My favourite projects had quite a few things in common, but the most important ones were passion, craftsmanship, friendship and trust.

Passion

It’s not because you like something that you are going to be good at it. However, to be really good at something, you must have a passion for it. The best people for a job are the ones who love the job. This is the essential quality that drives people to be successful in whatever they decide to do. A passionate person will do whatever is in his or her power to keep acquiring skills and do the best they can. Passionate people bring innovation, they question what they are doing, they want to contribute, they want to get involved, they want to learn. They want to succeed. Passionate people CARE.

Craftsmanship

The only way to go fast is to go well – Uncle Bob Martin

In all my favourite projects, the focus on quality and the willingness to see users satisfied and using the system was always a big thing. The whole team was focused on delivering the best project we could, taking into consideration all the constraints we were under. It was clear to all of us that to be successful we had to be pragmatic. We also always believed that how it is done is as important as having it done. Software craftsmen use the right tools for the job, are skilful, are pragmatic, care about the quality of their work, care about their reputation and want to delight every single one of their customers and users. Software craftsmen CARE.

Friendship

So far I’ve been talking about passionate people and care. However, we know that people have different opinions and there are many ways to do the same thing. Now imagine a room full of very passionate people who don’t really get along. In the best scenario, there will be some memorable arguments. In the worst, they will kill each other. Friendship is the answer to that.
The importance of social events for a team is enormous, even if it is just a few drinks once or twice a month. Like it or not, we spend more time with our colleagues than with our own families, so it is important that we have the friendliest environment possible. Having lunch together at least a few times per week is another thing that can help improve this friendship. Team members need some time together when they are not just talking about work. Team members need time to get to know each other. Among friends, people feel comfortable speaking their minds. Working with friends helps improve the quality of the discussions, and no one is worried about exposing ignorance or giving suggestions. Friends help each other, friends learn from each other, friends CARE for each other.

Trust

Once people can demonstrate their commitment and passion, demonstrate competence and a willingness to learn and contribute, and are able to deliver the best software possible, trust is easily established. With trust you gain autonomy and are free to decide what is best for the project and to do your job well. With trust you can be more effective when delegating or sharing tasks. With trust we can remove all the bureaucracy that impedes us from doing our job efficiently.

Aspirations

I just wish I could find the things I described above in all projects. Companies should aim to hire passionate and skilful people. People who can contribute to their projects and organisation. People who are willing to share and learn. Unfortunately we will not always find all that. In those cases, the only thing we can do is try our best to change the environment around us. We can try to motivate people and share our passion. We can be nice to everyone, respect our colleagues and promote an environment where everyone feels comfortable asking for help, helping each other and sharing knowledge.
With great people we can overcome any obstacle and have an environment where every morning, when we wake up, we would think: “Yay! Today I’ll have another great day at work.” Reference: Frustrations and aspirations of a software craftsman from our JCG partner Sandro Mancuso at the Crafted Software blog....

Dealing with Weblogic Stuck Threads

Definition, or What is a Stuck Thread?

WebLogic Server diagnoses a thread as stuck if it is continually working (not idle) for a set period of time. You can tune a server’s stuck thread detection behavior by changing the length of time before a thread is diagnosed as stuck (Stuck Thread Max Time) and by changing the frequency with which the server checks for stuck threads. Check here to see how to change the Stuck Thread Max Time.

The problem, or Why are Stuck Threads evil?

WebLogic Server automatically detects when a thread in an execute queue becomes “stuck.” Because a stuck thread cannot complete its current work or accept new work, the server logs a message each time it diagnoses a stuck thread. If all threads in an execute queue become stuck, the server changes its health state to either “warning” or “critical,” depending on the execute queue:

- If all threads in the default queue become stuck, the server changes its health state to “critical.” (You can set up the Node Manager application to automatically shut down and restart servers in the critical health state. For more information, see “Node Manager Capabilities” in Configuring and Managing WebLogic Server.)
- If all threads in weblogic.admin.HTTP, weblogic.admin.RMI, or a user-defined execute queue become stuck, the server changes its health state to “warning.”

So practically, a couple of stuck threads might not crash your server or prevent it from serving requests, but they are a bad sign. Usually the number of stuck threads will increase and your server will eventually crash.

What can you do to prevent your application from failing completely?

WebLogic Server checks for stuck threads periodically (this is the Stuck Thread Timer Interval, and you can adjust it here). If all application threads are stuck, a server instance marks itself as failed and, if configured to do so, exits.
You can configure Node Manager or a third-party high-availability solution to restart the server instance for automatic failure recovery. You can also configure these actions to occur when not all threads are stuck but the number of stuck threads has exceeded a configured threshold:

- Shut down the Work Manager if it has stuck threads. A Work Manager that is shut down will refuse new work and reject existing work in the queue by sending a rejection message. In a cluster, clustered clients will fail over to another cluster member.
- Shut down the application if there are stuck threads in the application. The application is shut down by bringing it into admin mode. All Work Managers belonging to the application are shut down, and behave as described above.
- Mark the server instance as failed and shut it down if there are stuck threads in the server. In a cluster, clustered clients that are connected or attempting to connect will fail over to another cluster member.

How to identify the problem?

The most recommended way is to check the thread dumps. See the Sending Email Alert For Stuck Threads With Thread Dumps post at Middleware Magic to have thread dumps mailed to you automatically when stuck threads occur. Tools that can help you analyze the thread dumps include:

- TDA – Thread Dump Analyzer
- Samurai

How to work around the problem?

After you have identified the code that causes the stuck thread, that is, the code whose execution takes more than the Stuck Thread Max Time, you can use a Work Manager to execute it. Work Managers have an Ignore Stuck Threads option that gives you the ability to execute long-running jobs. Below are some posts on how to create a Work Manager:

https://blogs.oracle.com/jamesbayer/entry/work_manager_leash_for_slow_js
http://jdeveloperfaq.blogspot.com/2011/05/faq-34-using-weblogic-work-managers-to.html

Test: How to create a Stuck Thread?

How do you create a stuck thread in order to test your WebLogic settings?
Put a breakpoint in a backing bean or model method that is called by your request. If you wait in the breakpoint for the Stuck Thread Max Time, you will notice a stuck thread trace in the server’s log:

<16 =?? 2011 12:28:22 ?? EET> <Error> <WebLogicServer> <BEA-000337> <[STUCK] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)' has been busy for "134" seconds working on the request "weblogic.servlet.internal.ServletRequestImpl@6e6f4718[
GET /---/---/----/---/days.xhtml HTTP/1.1
Connection: keep-alive
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.2 (KHTML, like Gecko) Chrome/15.0.874.120 Safari/535.2
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-GB,en-US;q=0.8,en;q=0.6
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Cookie: JSESSIONID=DYG5TDTZSnKLTFw5CMMdLCD9sPsZS4Jqlmxj9wdGNyt1BnPcfNrR!-1520792836
]", which is more than the configured time (StuckThreadMaxTime) of "60" seconds.
Stack trace:

--------------------------------------------(--------------------.java:83)
javax.faces.component.UIComponentBase.encodeBegin(UIComponentBase.java:823)
com.sun.faces.renderkit.html_basic.HtmlBasicRenderer.encodeRecursive(HtmlBasicRenderer.java:285)
com.sun.faces.renderkit.html_basic.GridRenderer.renderRow(GridRenderer.java:185)
com.sun.faces.renderkit.html_basic.GridRenderer.encodeChildren(GridRenderer.java:129)
javax.faces.component.UIComponentBase.encodeChildren(UIComponentBase.java:848)
org.primefaces.renderkit.CoreRenderer.renderChild(CoreRenderer.java:55)
org.primefaces.renderkit.CoreRenderer.renderChildren(CoreRenderer.java:43)
org.primefaces.component.fieldset.FieldsetRenderer.encodeContent(FieldsetRenderer.java:95)
org.primefaces.component.fieldset.FieldsetRenderer.encodeMarkup(FieldsetRenderer.java:76)
org.primefaces.component.fieldset.FieldsetRenderer.encodeEnd(FieldsetRenderer.java:53)
javax.faces.component.UIComponentBase.encodeEnd(UIComponentBase.java:878)
javax.faces.component.UIComponent.encodeAll(UIComponent.java:1620)
javax.faces.render.Renderer.encodeChildren(Renderer.java:168)
javax.faces.component.UIComponentBase.encodeChildren(UIComponentBase.java:848)
org.primefaces.renderkit.CoreRenderer.renderChild(CoreRenderer.java:55)
org.primefaces.renderkit.CoreRenderer.renderChildren(CoreRenderer.java:43)
org.primefaces.component.panel.PanelRenderer.encodeContent(PanelRenderer.java:229)
org.primefaces.component.panel.PanelRenderer.encodeMarkup(PanelRenderer.java:152)

More digging:

Excellent post by Frank Munz: WebLogic Stuck Threads: Creating, Understanding and Dealing with them. Updated for WebLogic 12c; it includes a sample app for creating a stuck thread too.
http://stackoverflow.com/questions/2709410/weblogic-stuck-thread-protection

src:
Maxence Button’s excellent post: http://m-button.blogspot.com/2008/07/using-wlst-to-perform-regular.html
http://download.oracle.com/docs/cd/E13222_01/wls/docs81/perfor/WLSTuning.html#1125714
http://download.oracle.com/docs/cd/E21764_01/web.1111/e13701/overload.htm
http://java.sys-con.com/node/358060?page=0,0

Reference: Dealing with Weblogic Stuck Threads from our JCG partner Spyros Doulgeridis at the ADF & Weblogic How To blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.