Java Lock Implementations

We all use third-party libraries as a normal part of development. Generally, we have no control over their internals. The libraries provided with the JDK are a typical example. Many of these libraries employ locks to manage contention.

JDK locks come with two implementations. One uses atomic CAS-style instructions to manage the claim process. CAS instructions tend to be the most expensive type of CPU instruction and on x86 have memory-ordering semantics. Often locks are un-contended, which gives rise to a possible optimisation whereby a lock can be biased to the un-contended thread, using techniques that avoid the use of atomic instructions. This biasing allows a lock, in theory, to be quickly reacquired by the same thread. If the lock turns out to be contended by multiple threads, the algorithm will revert from being biased and fall back to the standard approach using atomic instructions. Biased locking became the default lock implementation with Java 6.

When respecting the single writer principle, biased locking should be your friend. Lately, when using the sockets API, I decided to measure the lock costs and was surprised by the results. I found that my un-contended thread was incurring a bit more cost than I expected from the lock. I put together the following test to compare the cost of the current lock implementations available in Java 6.

The Test

For the test I shall increment a counter within a lock, and increase the number of contending threads on the lock. This test will be repeated for the 3 major lock implementations available to Java:

- Atomic locking on Java language monitors
- Biased locking on Java language monitors
- ReentrantLock, introduced with the java.util.concurrent package in Java 5.

I'll also run the tests on the 3 most recent generations of the Intel CPU. For each CPU I'll execute the tests up to the maximum number of concurrent threads the core count will support. The tests are carried out with 64-bit Linux (Fedora Core 15) and Oracle JDK 1.6.0_29.
The Code

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
import java.util.concurrent.CyclicBarrier;

import static java.lang.System.out;

public final class TestLocks implements Runnable
{
    public enum LockType { JVM, JUC }
    public static LockType lockType;

    public static final long ITERATIONS = 500L * 1000L * 1000L;
    public static long counter = 0L;

    public static final Object jvmLock = new Object();
    public static final Lock jucLock = new ReentrantLock();
    private static int numThreads;
    private static CyclicBarrier barrier;

    public static void main(final String[] args) throws Exception
    {
        lockType = LockType.valueOf(args[0]);
        numThreads = Integer.parseInt(args[1]);

        runTest(numThreads); // warm up
        counter = 0L;

        final long start = System.nanoTime();
        runTest(numThreads);
        final long duration = System.nanoTime() - start;

        out.printf("%d threads, duration %,d (ns)\n", numThreads, duration);
        out.printf("%,d ns/op\n", duration / ITERATIONS);
        out.printf("%,d ops/s\n", (ITERATIONS * 1000000000L) / duration);
        out.println("counter = " + counter);
    }

    private static void runTest(final int numThreads) throws Exception
    {
        barrier = new CyclicBarrier(numThreads);
        Thread[] threads = new Thread[numThreads];

        for (int i = 0; i < threads.length; i++)
        {
            threads[i] = new Thread(new TestLocks());
        }

        for (Thread t : threads)
        {
            t.start();
        }

        for (Thread t : threads)
        {
            t.join();
        }
    }

    public void run()
    {
        try
        {
            barrier.await();
        }
        catch (Exception e)
        {
            // don't care
        }

        switch (lockType)
        {
            case JVM: jvmLockInc(); break;
            case JUC: jucLockInc(); break;
        }
    }

    private void jvmLockInc()
    {
        long count = ITERATIONS / numThreads;
        while (0 != count--)
        {
            synchronized (jvmLock)
            {
                ++counter;
            }
        }
    }

    private void jucLockInc()
    {
        long count = ITERATIONS / numThreads;
        while (0 != count--)
        {
            jucLock.lock();
            try
            {
                ++counter;
            }
            finally
            {
                jucLock.unlock();
            }
        }
    }
}
```

Script the tests:

```
set -x
for i in {1..8}; do java -XX:-UseBiasedLocking TestLocks JVM $i; done
for i in {1..8}; do java -XX:+UseBiasedLocking TestLocks JVM $i; done
for i in {1..8}; do java TestLocks JUC $i; done
```

The Results

[Figure 1] [Figure 2] [Figure 3]

Biased locking should no longer be the default lock implementation on modern Intel processors. I recommend you measure your applications and experiment with the -XX:-UseBiasedLocking JVM option to determine if you can benefit from using an atomic lock-based algorithm for the un-contended case.

Observations

- Biased locking, in the un-contended case, is ~10% more expensive than atomic locking. It seems that for recent CPU generations the cost of atomic instructions is less than the necessary housekeeping for biased locks.
- Prior to Nehalem, lock instructions would assert a lock on the memory bus to perform these atomic operations, and each would cost more than 100 cycles. Since Nehalem, atomic instructions can be handled locally by a CPU core, and typically cost only 10-20 cycles if they do not need to wait on the store buffer to empty while enforcing memory-ordering semantics.
- As contention increases, language monitor locks quickly reach a throughput limit regardless of thread count.
- ReentrantLock gives the best un-contended performance and scales significantly better with increasing contention compared to language monitors using synchronized.
- ReentrantLock has an odd characteristic of reduced performance when 2 threads are contending. This deserves further investigation.
- Sandybridge suffers from the increased latency of atomic instructions, which I detailed in a previous article, when the contended thread count is low. As the contended thread count continues to increase, the cost of kernel arbitration tends to dominate and Sandybridge shows its strength with increased memory throughput.

Conclusion

When developing your own concurrent libraries, I would recommend ReentrantLock rather than using the synchronized keyword, due to the significantly better performance on x86, if a lock-free alternative algorithm is not a viable option.

Update 20-Nov-2011

Dave Dice has pointed out that biased locking is not implemented for locks created in the first few seconds of JVM startup. I'll re-run my tests this week and post the results. I've had some more quality feedback that suggests my results could potentially be invalid. Micro-benchmarks can be tricky, but the advice of measuring your own application in the large still stands. A re-run of the tests, taking account of Dave's feedback, can be seen in this follow-on blog.

Reference: Java Lock Implementations from our JCG partner Martin Thompson at the Mechanical Sympathy blog....
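Beyond raw throughput, ReentrantLock also offers capabilities that synchronized blocks simply do not have, such as timed lock acquisition. A minimal sketch of that difference (the class and method names here are illustrative, not from the benchmark above):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public final class TimedLockExample
{
    private static final ReentrantLock lock = new ReentrantLock();
    private static long counter = 0L;

    // Attempt the increment, but give up if the lock cannot be acquired
    // within the timeout -- something a synchronized block cannot express.
    public static boolean tryIncrement(final long timeoutMs) throws InterruptedException
    {
        if (!lock.tryLock(timeoutMs, TimeUnit.MILLISECONDS))
        {
            return false; // lock was contended for too long
        }
        try
        {
            ++counter;
            return true;
        }
        finally
        {
            lock.unlock();
        }
    }

    public static long counter()
    {
        return counter;
    }

    public static void main(final String[] args) throws Exception
    {
        System.out.println(tryIncrement(10)); // true: the lock is free
        System.out.println(counter());        // 1
    }
}
```

This is one reason to reach for ReentrantLock even where the throughput numbers alone would not force the choice.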

Database Usage Practices

After a long period of intense thinking, research and brainstorming sessions, I have finally made up my mind about a few best practices that can be considered while using a database.

All database servers today have capabilities to optimize the queries sent to the server; ironically, they lack the capability to alter a query in such a way that throughput is maximized. It is up to the developer to analyze and optimize the queries to attain maximum throughput. In this section we will look at some useful tips to optimize queries to produce the optimal output.

- Prefer prepared statements over statements created with createStatement(). A prepared statement is syntax-checked only once, while regular statements are compiled every time they are executed. Considering today's servers and application requirements, this might not be a critical factor, since the benefit is only a few milliseconds or even less. A more important benefit of using prepared statements is the use of bind variables: quotes and other characters used for SQL injection and corrupting the database are taken care of by the driver itself, improving the security of the data and the database server.
- Use tools like EXPLAIN and ANALYZE. These are the most important tools for DBAs and developers to optimize their queries (in PostgreSQL). These commands give a real-time analysis of how queries perform. The EXPLAIN command shows the query execution plan, so we can identify bottlenecks in the plan and take appropriate measures to eliminate them.
- Take advantage of indexes. The database uses its indexing capabilities wherever it can. Some databases even rearrange the WHERE clause so that output can be returned much faster. It is always best to place conditions on indexed columns first in the WHERE clause, followed by the other conditions.
- Maintain a cache for read-only queries and master tables. Master tables are tables that are very rarely modified. Every query on these tables results in I/O, which is the slowest operation in any machine. It is a best practice to cache the master records in the application so that they are stored in RAM and can be fetched much faster. If the master table itself is huge, this is a concern, since it might eat up your RAM and result in swapping, which is even worse. In such scenarios, it is better to extract the master records out of the main application and deploy a separate caching service for retrieving them.
- Use VACUUM frequently on transaction tables. Frequent updates/deletes of records in a table can result in lots of dead tuples. When a record is deleted, the space allocated for that record is not actually freed but marked as dead. This dead space can be reused to store another record that fits into it. Too many dead tuples can increase query execution time and bloat the table's indexes. The VACUUM command makes this dead space available for reuse; VACUUM FULL goes further and reclaims the space allocated for the dead tuples.
- Prefer BETWEEN rather than <= and >=. Almost all major database drivers will optimize queries with <= and >=, but by using a BETWEEN clause we explicitly give an already-optimized query to the driver. Even though this does not result in a huge difference in response time, it can be a good point to consider in complex queries.
- Use LIMIT 1 when fetching a unique row, or when just one record is required for processing.
- Use LIMIT and OFFSET clauses when querying huge tables. This reduces query time, since the database server does not have to scan the entire table.
- Always specify the required field names in SELECT/INSERT queries. This prevents the application from breaking when the schema of the table changes.
- In the WHERE clause, place the predicate that will eliminate the greatest number of rows first.

Reference: Database Usage Practices from our JCG partner George Janeve at the Janeve.Me blog....
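The prepared-statement advice can be made concrete with a small JDBC sketch. The table and column names here are hypothetical; the point is how the `?` placeholder lets the driver handle quoting, while string concatenation lets user input rewrite the query:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public final class UserDao
{
    // Vulnerable: input like "x' OR '1'='1" changes the query's meaning.
    public static String unsafeQuery(String name)
    {
        return "SELECT id, name FROM users WHERE name = '" + name + "'";
    }

    // Safe: the ? placeholder is bound by the driver, so quotes in the
    // input are treated as data, never as SQL syntax.
    public static ResultSet findByName(Connection conn, String name) throws SQLException
    {
        PreparedStatement ps = conn.prepareStatement(
            "SELECT id, name FROM users WHERE name = ?");
        ps.setString(1, name);
        return ps.executeQuery();
    }

    public static void main(String[] args)
    {
        // Demonstrates how user input rewrites the concatenated query.
        System.out.println(unsafeQuery("x' OR '1'='1"));
    }
}
```

Running the main method shows the concatenated query mutating into `... WHERE name = 'x' OR '1'='1'`, which matches every row.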

Logback: Logging revisited

Hi, I am back again with my rant about logging as an inherent part of any application design and development. I am a big fan of strong basics, and in my humble opinion logging is one of those often overlooked but basic, critical elements of any enterprise-grade application. I have written about this before, here. Reading that is not really mandatory in order to make sense of the current article, but it might help to give it a cursory look to set the context.

In the first article, I introduced logging as a high-benefit, low-cost alternative to the omnipresent System.out.println() that all Java folks love so much. I used log4j in that article. Log4j is a solid framework and delivers on its promise. In all the years that I have used it, it has never let me down. I can wholeheartedly recommend it. Having said that, there are a few alternatives which have been around in the market for a while, and I am happy to say that at least one of them seems to be challenging log4j on its own turf. I am talking about Logback.

It is certainly not a new kid on the block – and that is one of the reasons I am suggesting you consider it for enterprise-grade applications to start with. A quick look at Maven Central suggests that the first version was published way back in 2006. Between 2006 and 8 June 2012 – which is when the latest version was pushed to Maven Central – there have been 46 versions. Compare this with log4j: its first version was pushed to Maven Central in 2005 and the last on 26 May 2012, with a total of 14 different versions in between. I do not mean to use this data to compare the two frameworks. The only intent is to assure the reader that Logback has been around long enough, and is current enough, to be taken seriously.

Being around is one thing and making your mark is another.
As far as ambition and intent go, Logback makes it pretty clear that it intends to be the successor of log4j – and says so in clear words on its homepage. Of course, there is an exhaustive list of features / benefits that Logback claims over Log4j. You can read about them at this link. That's it, really. The point of this article is that I am suggesting that, while designing and developing an enterprise-grade Java based application, you look at logging a bit more carefully and also consider using Logback.

A few of the audience at this point, I am hoping, will want to roll up their sleeves, fire up their favourite editor and take Logback out for a spin. If you are one of them, then you and I have something in common. You might want to read on.

The very first thing that Logback promises is a faster implementation (at this link). Really? I would like to check that claim. I start by creating a vanilla Java application using Maven.

File: MavenCommands.bat

```
call mvn archetype:create ^
  -DarchetypeGroupId=org.apache.maven.archetypes ^
  -DgroupId=org.academy ^
  -DartifactId=logger
```

This unfortunately comes preloaded with JUnit 3. I set up JUnit 4 and also add ContiPerf, so that I can run the tests multiple times – something that will come in handy for checking performance.

File: /logger/pom.xml

```xml
<properties>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  <junit.version>4.10</junit.version>
  <contiperf.version>2.2.0</contiperf.version>
</properties>
[...]
<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <version>${junit.version}</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.databene</groupId>
  <artifactId>contiperf</artifactId>
  <version>${contiperf.version}</version>
  <scope>test</scope>
</dependency>
```

Also, I like to explicitly control the Java version that is being used to compile and execute my code.

File: /logger/pom.xml

```xml
<properties>
  <maven-compiler-plugin.version>2.0.2</maven-compiler-plugin.version>
  <java.version>1.7</java.version>
</properties>
[...]
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <version>${maven-compiler-plugin.version}</version>
  <configuration>
    <source>${java.version}</source>
    <target>${java.version}</target>
  </configuration>
</plugin>
```

Last of the configuration – for the time being. Slap on Surefire to run the unit tests.

File: /logger/pom.xml

```xml
<properties>
  <maven-surefire-plugin.version>2.12</maven-surefire-plugin.version>
</properties>
[...]
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>${maven-surefire-plugin.version}</version>
  <dependencies>
    <dependency>
      <groupId>org.apache.maven.surefire</groupId>
      <artifactId>surefire-junit47</artifactId>
      <version>${maven-surefire-plugin.version}</version>
    </dependency>
  </dependencies>
  <configuration>
    <argLine>-XX:-UseSplitVerifier</argLine>
  </configuration>
</plugin>
```

Please note, I have taken the pains of adding all these dependencies to this article with their versions, just to ensure that, should you try this yourself, you know exactly what the software configuration of my test was. Now, let us finally add the unit test.

File: /logger/src/test/java/org/academy/AppTest.java

```java
import org.databene.contiperf.PerfTest;
import org.databene.contiperf.Required;
import org.databene.contiperf.junit.ContiPerfRule;
import org.junit.Rule;
import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class AppTest {
  private final static Logger logger = LoggerFactory
      .getLogger(AppTest.class);

  @Rule
  public ContiPerfRule i = new ContiPerfRule();

  @Test
  @PerfTest(invocations = 10, threads = 1)
  @Required(max = 1200, average = 1000)
  public void test() {
    for (int i = 0; i < 10000; i++) {
      logger.debug("Hello {}", "world.");
    }
  }
}
```

So, we have used the logger in the unit test but have not added an implementation of the logger. What I intend to do is add log4j (with slf4j) and Logback (with its inherent support for slf4j) one by one, and run this simple test multiple times to compare performance. To add log4j I used this setting.

File: /logger/pom.xml

```xml
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-api</artifactId>
  <version>${slf4j.version}</version>
</dependency>
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>jcl-over-slf4j</artifactId>
  <version>${slf4j.version}</version>
  <scope>runtime</scope>
</dependency>
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-log4j12</artifactId>
  <version>${slf4j.version}</version>
  <scope>runtime</scope>
</dependency>
```

and for Logback I used this setting.

File: /logger/pom.xml

```xml
<dependency>
  <groupId>ch.qos.logback</groupId>
  <artifactId>logback-classic</artifactId>
  <version>${logback.version}</version>
</dependency>
```

with the following versions.

File: /logger/pom.xml

```xml
<slf4j.version>1.6.1</slf4j.version>
<logback.version>1.0.6</logback.version>
```

For either of these logger frameworks to actually log anything, you have to add a file telling the logger what to log and where.

File: src/main/resources/log4j.properties

```properties
# Set root logger level to DEBUG and its only appender to A1.
log4j.rootLogger=DEBUG, A1

# Configure A1 to spit out data in the console.
log4j.appender.A1=org.apache.log4j.ConsoleAppender
log4j.appender.A1.layout=org.apache.log4j.PatternLayout
log4j.appender.A1.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
```

Finally, the moment of truth. I ran the tests thrice with each framework, i.e. Logback and log4j. Essentially I log.debug() a string 1,000,000 times in each test and timed them. This is how the final figures came out:

```
Framework   1st run         2nd run         3rd run
Logback     0.375 seconds   0.375 seconds   0.406 seconds
Log4j       0.454 seconds   0.453 seconds   0.454 seconds
```

As far as this little experiment goes, Logback clearly performs faster than Log4j. Of course this is an overly simplistic experiment, and many valid scenarios have not been considered. For example, we have not really used vanilla log4j; we have used log4j in conjunction with the slf4j API, which is not quite the same thing. Also, being faster is not the only consideration. Log4j works asynchronously (read here and here) whereas, as far as I know, Logback does not. Logback has quite a few nifty features that Log4j does not.

So, in isolation, this little piece of code does not really prove anything. If anything, it brings me back to the first point that I made – Logback is a serious contender and worth a good look if you are designing / coding an enterprise-grade Java based application.

That is all for this article. Happy coding. Want to read on? May I suggest:

- The first article of this series.
- 10 tips for application logging.
- How to exclude commons logging easily.
- A nice tip on boosting performance of NOT logging.

Reference: Logging revisited from our JCG partner Partho at the Tech for Enterprise blog....
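The parameterized call used in the test above – logger.debug("Hello {}", "world.") – matters because the message is only assembled when DEBUG is actually enabled. A rough pure-Java sketch of the difference (the class and counter here are illustrative stand-ins, not part of slf4j):

```java
public final class LazyFormatting
{
    // Stand-in for a logger whose DEBUG level is disabled.
    static final boolean DEBUG_ENABLED = false;
    static int formatCalls = 0;

    static String expensiveFormat(String template, Object arg)
    {
        formatCalls++; // count how often the message is actually built
        return template.replace("{}", String.valueOf(arg));
    }

    // Eager: the message string is built even when it is thrown away.
    static void eagerDebug(String template, Object arg)
    {
        String message = expensiveFormat(template, arg);
        if (DEBUG_ENABLED)
        {
            System.out.println(message);
        }
    }

    // Parameterized, slf4j-style: formatting is deferred behind the level check.
    static void lazyDebug(String template, Object arg)
    {
        if (DEBUG_ENABLED)
        {
            System.out.println(expensiveFormat(template, arg));
        }
    }

    public static void main(String[] args)
    {
        eagerDebug("Hello {}", "world.");
        lazyDebug("Hello {}", "world.");
        System.out.println(formatCalls); // 1 -- only the eager call paid for formatting
    }
}
```

In a loop of a million disabled debug calls, as in the experiment above, that deferred formatting is a large part of what is being measured.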

JUnit, Logback, Maven with Spring 3

In this series we have already learnt to set up a basic Spring MVC application and how to handle forms in Spring MVC. Now it is time to take on some more involved topics. However, before we venture into deeper waters, let's get some basics set up.

Unit testing

I am no TDD evangelist. There, I said it. I have never been able to write any software where, for every piece of code, I wrote a test first and then the code. If you have done so and are gainfully employed by coding, please do let me know. I would seriously like to know you better. Seriously. My difference in opinion with TDD ends there. Apart from writing the test before the code – which somehow I simply can't get my brain to work with – I am a huge supporter of unit testing. I am a firm believer in using JUnit to test all functionality (public, non-getter/setter methods). I am a huge fan of using Cobertura to report on code coverage. I am a huge fan of Maven, which brings this all together in a nice HTML report with just one command. I will use JUnit 4 for this series. Let's add the dependencies.

File: \pom.xml

```xml
<properties>
  <junit.version>4.10</junit.version>
</properties>

<!-- Unit testing framework. -->
<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <version>${junit.version}</version>
  <scope>test</scope>
</dependency>
```

And let's add a dumb class to demonstrate testing.

File: /src/main/java/org/academy/HelloWorld.java

```java
package org.academy;

public class HelloWorld {
  private String message = "Hello world. Default setting.";

  public String greet() {
    return message;
  }

  public String getMessage() {
    return message;
  }

  public void setMessage(String message) {
    this.message = message;
  }
}
```

And finally the JUnit test to exercise it.
File: src/test/java/org/academy/HelloWorldTest.java

```java
package org.academy;

import static org.junit.Assert.*;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration
public class HelloWorldTest {

  @Autowired
  HelloWorld helloWorld;

  private final static Logger logger = LoggerFactory
      .getLogger(HelloWorldTest.class);

  @Test
  public void test() {
    logger.debug(helloWorld.greet());
    assertEquals(helloWorld.greet(), "Hello world, from Spring.");
  }
}
```

You will have noticed that helloWorld within the unit test is never initialized in the code. This is a bit of the IoC magic of Spring. To make it work, we have used @RunWith, @ContextConfiguration and @Autowired, and we have given Spring enough information to create an instance of HelloWorld and inject it into HelloWorldTest.helloWorld. Also, the assertEquals checks for a very different message than the one hard-coded in the HelloWorld class. This is done in the XML file mentioned below. Please do note the location of the file within the Maven structure.
File: /src/test/resources/org/academy/HelloWorldTest-context.xml

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:p="http://www.springframework.org/schema/p"
    xmlns:context="http://www.springframework.org/schema/context"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context-3.0.xsd">

  <bean id="helloWorld" class="org.academy.HelloWorld">
    <property name="message" value="Hello world, from Spring." />
  </bean>
</beans>
```

There are multiple ways I could have provided this configuration file to the unit test. @RunWith(SpringJUnit4ClassRunner.class) is a nice thing to add but is not mandatory. What I have provided here is just the vanilla approach that works in most cases, but I encourage the audience to experiment.

Unit test coverage / code coverage

I don't feel there is enough said about the importance of an automated / semi-automated / easy way of reporting on code coverage – both for individual developers and technical heads. Unless you are practising TDD religiously (which, as I have mentioned before, I personally have never been able to do), it is impossible even for an individual developer to know whether all logic branches of the code are covered by unit tests. I am not even going to talk about how the technical head of a team / organization is going to ensure that their products are sufficiently unit tested. I personally believe any software product which is not sufficiently unit tested, with test coverage reported, is an unacceptable risk. Period. Admittedly a bit of a hard stance, but that's how it is. A bit of my conviction for the hard stance comes from the fact that it is so darn easy to report on test coverage. I will use Cobertura in this example.
You need to add Cobertura to the Maven pom.

File: pom.xml

```xml
<!-- Reporting -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-site-plugin</artifactId>
  <version>3.0</version>
  <configuration>
    <reportPlugins>
      <!-- Reporting on success / failure of unit tests -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-report-plugin</artifactId>
        <version>2.6</version>
      </plugin>
      <!-- Reporting on code coverage by unit tests. -->
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>cobertura-maven-plugin</artifactId>
        <version>2.5.1</version>
        <configuration>
          <formats>
            <format>xml</format>
            <format>html</format>
          </formats>
        </configuration>
      </plugin>
    </reportPlugins>
  </configuration>
</plugin>
```

And once you have done this, added JUnit, and added an actual JUnit test, you just need to run

```
mvn -e clean install site
```

to create a nice-looking HTML-based code coverage report. This report lets you click through the source code under test, with nice green patches for unit-tested code and red patches for code that slipped through the cracks.

Logging

Log4j is good, Logback is better. Just don't use System.out.println() for logging. You can go a long way without proper logging. However, I have spent far too many weekends and nights chasing down production issues, with the business breathing down my neck, wishing there were some way to know what was happening in the app rather than having to guess all the way. Nowadays, with a mature API like slf4j and a stable implementation like Logback, a developer needs to add just one extra line per class to take advantage of enterprise-grade logging infrastructure. It just does not make sense not to use proper logging right from the beginning of any project. Add slf4j and Logback to the Maven dependencies.

File: \pom.xml
```xml
<!-- Logging -->
<dependency>
  <groupId>ch.qos.logback</groupId>
  <artifactId>logback-classic</artifactId>
  <version>${logback.version}</version>
</dependency>
```

Ensure that Spring's default logging, i.e. commons-logging, is excluded. If you are wondering, given that Logback is really as good as I claim, why Spring did not opt for it to start with – here is a link to Spring's official blog where they say 'If we could turn back the clock and start Spring now as a new project it would use a different logging dependency. Probably the first choice would be the Simple Logging Facade for Java (SLF4J),…'

File: \pom.xml

```xml
<!-- Support for testing Spring applications with tools such as JUnit
     and TestNG. This artifact is generally always defined with a
     'test' scope for the integration testing framework and unit
     testing stubs. -->
<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-test</artifactId>
  <version>${org.springframework.version}</version>
  <scope>test</scope>
  <exclusions>
    <exclusion>
      <groupId>commons-logging</groupId>
      <artifactId>commons-logging</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

Provide configuration for Logback.

File: /src/main/resources/logback.xml

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d %5p | %t | %-55logger{55} | %m %n</pattern>
    </encoder>
  </appender>

  <logger name="org.springframework">
    <level value="INFO" />
  </logger>

  <root>
    <level value="DEBUG" />
    <appender-ref ref="CONSOLE" />
  </root>
</configuration>
```

Finally, add the magic one-liner at the beginning of each class that needs logging (that ought to be all classes).

File: src/test/java/org/academy/HelloWorldTest.java

```java
[...]
private final static Logger logger = LoggerFactory
    .getLogger(HelloWorldTest.class);
[...]
logger.debug(helloWorld.greet());
[...]
```

There you are, all set up. Now is the time to wade deeper into Spring. Happy coding. Want to read more? Here are the links to earlier articles in this series.
- Hello World with Spring 3 MVC
- Handling Forms with Spring 3 MVC

And, of course, these are highly recommended:

- Spring 3 Testing with JUnit 4
- Running unit tests with the Spring Framework
- @RunWith JUnit4 with BOTH SpringJUnit4ClassRunner and Parameterized
- Issue with JUnit and Spring

Reference: JUnit, Logback, Maven with Spring 3 from our JCG partner Partho at the Tech for Enterprise blog....

Developing a plugin for IntelliJ IDEA – some useful tips and links

When I started thinking about writing my first plugin for IntelliJ IDEA, the biggest problem was the lack of good and comprehensive guides, gathered in one place, on how to do it and how to do it well. So this post is a collection of my personal experiences, links and other resources that I found useful and noteworthy on my road to releasing the Share with Pastie plugin to the public IntelliJ repository.

Links

Before starting to write your mind-blowing and super-extra-useful plugin, you should first do some reading. Below is a list of places worth visiting:

- Getting Started – the place where you should start. The most basic introduction to creating new plugins.
- Basics of plugin development – a second page similar to the previous one, but some interesting knowledge can be found there.
- Plugin Development FAQ – the best place with ready solutions and answers to your questions and problems.
- Open API forum – the place to go if you are really stuck with your problem.

Show me the code

Resources at the above links are helpful, but sooner or later you will have to dive into code to check how something works or how you could implement some feature. There are two ways of analysing the source code of IntelliJ IDEA itself:

- open your console and type: git clone git://git.jetbrains.org/idea/community.git idea
- or check the IntelliJ IDEA 10 source code at GrepCode.

I have used both ways: the Git repository to check out the whole project, open it in IDEA and find usages, etc., and GrepCode to quickly find how a single code fragment looks.

Learn from others

Sometimes you might want to add a feature which is very similar to one you saw in a plugin written by someone else. If you are lucky, the source code of that plugin is available. The only thing you need to do is visit http://plugins.intellij.net/?idea, find the plugin with the similar feature and check if its source code is publicly available. That's how I found the way to add a green balloon with a message that the selected code fragment was successfully shared with Pastie.
Some useful code samples

There are a few elements which probably exist in the source code of almost every plugin. To ease your pain of googling them or trying to figure them out, here are the two short samples I consider the most popular.

Getting the current project, current file, etc.

Your plugin action should extend the AnAction abstract class from the IntelliJ OpenAPI. The only parameter passed to the actionPerformed method is an AnActionEvent, and from this object you can access various places:

```java
Project currentProject = DataKeys.PROJECT.getData(actionEvent.getDataContext());
VirtualFile currentFile = DataKeys.VIRTUAL_FILE.getData(actionEvent.getDataContext());
Editor editor = DataKeys.EDITOR.getData(actionEvent.getDataContext());
// and so on...
```

The list of all places available in this way can be found among the constants of the DataKeys class (and its parents).

Balloon with an info or error message

This kind of popup is very useful for communicating feedback messages to the user. It can be an info message or an error/warning. Default colours are green for info, red for error and orange for warnings. In Share With Pastie I am using it to inform that the selected text was successfully sent to Pastie and a link is waiting in the clipboard.

But before we show our balloon, we need to specify the place where it will be located. In my plugin it is the StatusBar (the lowest element in the IDEA GUI):

```java
StatusBar statusBar = WindowManager.getInstance()
    .getStatusBar(DataKeys.PROJECT.getData(actionEvent.getDataContext()));
```

and then we can prepare the balloon and display it:

```java
JBPopupFactory.getInstance()
    .createHtmlTextBalloonBuilder(htmlText, messageType, null)
    .setFadeoutTime(7500)
    .createBalloon()
    .show(RelativePoint.getCenterOf(statusBar.getComponent()),
          Balloon.Position.atRight);
```

A few trivia

Plugin DevKit

The most important thing is to download or activate the plugin named Plugin DevKit, which allows you to run and debug your own plugin during development.
This seems extremely trivial, but you might have deactivated this plugin (like me) or removed it to speed up start time.

UI Designer

If you are planning to develop a plugin with additional windows, popups, etc., the plugin named UI Designer is very handy. It is something very similar to the GUI builder from NetBeans and allows you to create Swing panels by dragging, dropping and resizing components.

Group and Action IDs

When I was searching for a proper place to show my action in one of the menus available inside IntelliJ IDEA, I encountered a page with a list of group and action IDs which could help me with configuring my plugin. But this page appeared to be really outdated, so I tried to find another way to determine the proper values of those IDs. And of course, the solution was lying just in front of me. If we press Alt + Insert in the project view, we see a menu allowing us to create several new objects, one of which is Action. After clicking it, we see a very friendly action creator, which contains the list of available groups and actions next to which we can place our plugin menu item.

Proper VM memory settings

The next thing is to configure your run/debug configuration to work properly, because depending on your hardware setup (mainly available memory) and IntelliJ IDEA settings, you might encounter freezes when starting your plugin in development mode (most frequently these are crashes during index rebuilding). To prevent such problems you should configure memory settings in the "Run/Debug Configurations" window and add proper Virtual Machine parameters. For me, -Xms256m -XX:PermSize=128m -XX:MaxPermSize=512m worked well.

The end

So this is the list of my experiences, tips and thoughts about creating your own IntelliJ IDEA plugin.
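For completeness, what the action creator ultimately writes for you is an entry in plugin.xml similar to the following. The IDs and class name here are hypothetical placeholders, not taken from Share With Pastie:

```xml
<actions>
  <action id="MyPlugin.ShareAction"
          class="org.example.ShareAction"
          text="Share Selection"
          description="Shares the selected code fragment">
    <!-- EditorPopupMenu is one of the built-in group IDs;
         the anchor attribute controls where the item appears. -->
    <add-to-group group-id="EditorPopupMenu" anchor="last"/>
  </action>
</actions>
```

The group-id value is exactly the kind of ID the action creator dialog lets you pick from its list.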
I hope you find it useful, and that your plugin will let us (developers) be even more productive and deliver better-quality code faster. My friend said “If I knew how to create plugins for IDEA, I would build a new one every three weeks because I am constantly having new ideas how to improve my workflow”, so maybe after reading this post at least the start of the plugin development process will be easier. Reference: Developing a plugin for IntelliJ IDEA – some useful tips and links from our JCG partner Tomasz Dziurko at the Code Hard Go Pro blog....

Teaser: Bare-knuckle SOA

I’m working on this idea, and I don’t know if it appeals to you guys. I’d like your input on whether this is something to explore further. Here’s the deal: I’ve encountered teams who, when working with SOA technologies, have been dragged into the mud by the sheer complexity of their tools. I’ve only seen this in Java, but I’ve heard from some C# developers that they recognize the phenomenon there as well. I’d like to explore an alternative approach. This approach requires more hard work than adding a WSDL (Web Services Description Language; hocus pocus) file to your project and automatically generating stuff. But it comes with added understanding and increased testability. In the end, I’ve found that this has made me able to complete my tasks quicker, despite the extra manual labor. The purpose of this blog post (and, if you like it, its expansions) is to explore a more bare-bones approach to SOA in general and to web services specifically. I’m illustrating these principles with a concrete example: let users be notified when their currency drops below a threshold relative to the US dollar. In order to make the service technologically interesting, I will be using the IP address of the subscriber to determine their currency. Step 1: create your active services by mocking external interactions Mocking the activity of your own services can help you construct the interfaces that define your interaction with external services.
Teaser: public class CurrencyPublisherTest {private SubscriptionRepository subscriptionRepository = mock(SubscriptionRepository.class); private EmailService emailService = mock(EmailService.class); private CurrencyPublisher publisher = new CurrencyPublisher(); private CurrencyService currencyService = mock(CurrencyService.class); private GeolocationService geolocationService = mock(GeolocationService.class);@Test public void shouldPublishCurrency() throws Exception { Subscription subscription = TestDataFactory.randomSubscription(); String location = TestDataFactory.randomCountry(); String currency = TestDataFactory.randomCurrency(); double exchangeRate = subscription.getLowLimit() * 0.9;when(subscriptionRepository.findPendingSubscriptions()).thenReturn(Arrays.asList(subscription));when(geolocationService.getCountryByIp(subscription.getIpAddress())).thenReturn(location);when(currencyService.getCurrency(location)).thenReturn(currency); when(currencyService.getExchangeRateFromUSD(currency)).thenReturn(exchangeRate);publisher.runPeriodically();verify(emailService).publishCurrencyAlert(subscription, currency, exchangeRate); }@Before public void setupPublisher() { publisher.setSubscriptionRepository(subscriptionRepository); publisher.setGeolocationService(geolocationService); publisher.setCurrencyService(currencyService); publisher.setEmailService(emailService); } }Spoiler: I’ve recently started using random test data generation for my tests with great effect. The Publisher has a number of Services that it uses. Let us focus on one service for now: the GeolocationService. Step 2: create a test and a stub for each service – starting with GeolocationService The top-level test shows what we need from each external service. Informed by this, and by reading (yeah!) the WSDL for the service, we can test-drive a stub for it. In this example, we actually run the test over HTTP by starting Jetty embedded inside the test.
Teaser: public class GeolocationServiceStubHttpTest {@Test public void shouldAnswerCountry() throws Exception { GeolocationServiceStub stub = new GeolocationServiceStub(); stub.addLocation("80.203.105.247", "Norway");Server server = new Server(0); ServletContextHandler context = new ServletContextHandler(); context.addServlet(new ServletHolder(stub), "/GeoService"); server.setHandler(context); server.start();String url = "http://localhost:" + server.getConnectors()[0].getLocalPort();GeolocationService wsClient = new GeolocationServiceWsClient(url + "/GeoService"); String location = wsClient.getCountryByIp("80.203.105.247");assertThat(location).isEqualTo("Norway"); } }Validate and create the XML payload This is the first “bare-knuckled” bit. Here, I create the XML payload without using a framework (the Groovy-style “$” syntax is courtesy of the JOOX library, a thin wrapper on top of the built-in JAXP classes): I add the XSD (more hocus pocus) for the actual service to the project, plus code to validate the message. Then I build up the XML payload by following the validation errors.
Teaser: public class GeolocationServiceWsClient implements GeolocationService {private Validator validator; private UrlSoapEndpoint endpoint;public GeolocationServiceWsClient(String url) throws Exception { this.endpoint = new UrlSoapEndpoint(url); validator = createValidator(); }@Override public String getCountryByIp(String ipAddress) throws Exception { Element request = createGeoIpRequest(ipAddress); Document soapRequest = createSoapEnvelope(request); validateXml(soapRequest); Document soapResponse = endpoint.postRequest(getSOAPAction(), soapRequest); validateXml(soapResponse); return parseGeoIpResponse(soapResponse); }private void validateXml(Document soapMessage) throws Exception { validator.validate(toXmlSource(soapMessage)); }protected Validator createValidator() throws SAXException { SchemaFactory schemaFactory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI); Schema schema = schemaFactory.newSchema(new Source[] { new StreamSource(getClass().getResource("/geoipservice.xsd").toExternalForm()), new StreamSource(getClass().getResource("/soap.xsd").toExternalForm()), }); return schema.newValidator(); }private Document createSoapEnvelope(Element request) throws Exception { return $("S:Envelope", $("S:Body", request)).document(); }private Element createGeoIpRequest(String ipAddress) throws Exception { return $("wsx:GetGeoIP", $("wsx:IPAddress", ipAddress)).get(0); }private String parseGeoIpResponse(Document response) { // TODO return null; }private Source toXmlSource(Document document) throws Exception { return new StreamSource(new StringReader($(document).toString())); } }In this example, I get a little help (and a little pain) from the JOOX library for XML manipulation in Java. As XML libraries for Java are insane, I’m giving up on checked exceptions, too. Spoiler: I’m generally very unhappy with the handling of namespaces, validation, XPath and checked exceptions in all XML libraries that I’ve found so far.
So I’m thinking about creating my own. Of course, you can use the same approach with classes that are automatically generated from the XSD, but I’m not convinced that it would really help much. Stream the XML over HTTP Java’s built-in HttpURLConnection is clunky but serviceable for getting the XML to the server (as long as you’re not doing advanced HTTP authentication).Teaser: public class UrlSoapEndpoint {private final String url;public UrlSoapEndpoint(String url) { this.url = url; }public Document postRequest(String soapAction, Document soapRequest) throws Exception { URL httpUrl = new URL(url); HttpURLConnection connection = (HttpURLConnection) httpUrl.openConnection(); connection.setDoInput(true); connection.setDoOutput(true); connection.addRequestProperty("SOAPAction", soapAction); connection.addRequestProperty("Content-Type", "text/xml"); $(soapRequest).write(connection.getOutputStream());int responseCode = connection.getResponseCode(); if (responseCode != 200) { throw new RuntimeException("Something went terribly wrong: " + connection.getResponseMessage()); } return $(connection.getInputStream()).document(); } }Spoiler: This code should be expanded with logging and error handling, and the validation should be moved into a decorator. By taking control of the HTTP handling, we can solve most of what people buy an ESB to solve. Create the stub and parse the XML The stub uses XPath to find the location in the request.
It generates the response in much the same way as the ws client generated the request (not shown).public class GeolocationServiceStub extends HttpServlet {private Map<String,String> locations = new HashMap<String, String>();public void addLocation(String ipAddress, String country) { locations.put(ipAddress, country); }@Override protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { try { String ipAddress = $(req.getReader()).xpath("/Envelope/Body/GetGeoIP/IPAddress").text(); String location = locations.get(ipAddress); createResponse(location).write(resp.getOutputStream()); } catch (Exception e) { throw new RuntimeException("Exception at server " + e); } } }Spoiler: The stubs can be expanded to have a web page that lets me test my system without real integration to any external service. Validate and parse the response The ws client can now validate that the response from the stub complies with the XSD, and parse the response. Again, this is done using XPath. I’m not showing the code, as it’s just more of the same. The real thing! The code now verifies that the XML payload conforms to the XSD. This means that the ws client should now be usable with the real thing. Let’s write a separate test to check it:public class GeolocationServiceLiveTest {@Test public void shouldFindLocation() throws Exception { GeolocationService wsClient = new GeolocationServiceWsClient("http://www.webservicex.net/geoipservice.asmx"); assertThat(wsClient.getCountryByIp("80.203.105.247")).isEqualTo("Norway"); }}Yay! It works! Actually, it failed the first time I tried it, as I didn’t have the correct country name for the IP address that I tested with. This sort of point-to-point integration test is slower and less robust than my other unit tests. However, I don’t make too big a deal out of that fact. I filter the test from my Infinitest config and I don’t care much beyond that.
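The response parsing that is skipped above (“just more of the same”) could look roughly like this with plain JAXP XPath. This is a sketch under assumptions: the element names GetGeoIPResponse and CountryName are my guesses at the response schema, not taken from the real WSDL, and local-name() is used to sidestep namespace-prefix headaches:

```java
import java.io.StringReader;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;

// Sketch of a parseGeoIpResponse(): pull the country out of the SOAP response
// with a namespace-agnostic XPath, mirroring the stub's request parsing.
// Element names below are assumptions, not the real webservicex.net schema.
class GeoIpResponseParser {

    static String parseCountry(String soapResponse) throws Exception {
        XPath xpath = XPathFactory.newInstance().newXPath();
        // local-name() matches elements regardless of their namespace prefix
        String expression = "/*[local-name()='Envelope']/*[local-name()='Body']"
                + "/*[local-name()='GetGeoIPResponse']/*[local-name()='CountryName']";
        return xpath.evaluate(expression, new InputSource(new StringReader(soapResponse)));
    }
}
```

The same local-name() trick is what makes the stub's short XPath work against a fully namespaced request.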
Fleshing out all the services The SubscriptionRepository, CurrencyService and EmailService need to be fleshed out in the same way as the GeolocationService. However, since we know that we only need very specific interaction with each of these services, we don’t need to worry about everything that could possibly be sent or received as part of the SOAP services. As long as we can do the job that the business logic (CurrencyPublisher) needs, we’re good to go! Demonstration and value chain testing If we create a web UI for the stubs, we can now demonstrate the whole value chain of this service to our customers. In my SOA projects, some of the services we depend on will only come online late in the project. In this case, we can use our stubs to show that our service works. Spoiler: As I get tired of verifying that the manual value chain test works, I may end up creating a test that uses WebDriver to set up the stubs and verify that the test ran okay, just like I would in the manual test. Taking the gloves off when fighting in an SOA arena In this article, I’ve shown and hinted at more than half a dozen techniques for working with tests, HTTP, XML and validation that don’t involve frameworks, ESBs or code generation. The approach gives the programmer 100% control over their place in the SOA ecosystem. Each of these areas has a lot more depth to explore. Let me know if you’d like to see them explored. Oh, and I’d also like ideas for better web services to use, as the geolocated currency email is pretty hokey. Reference: Teaser: Bare-knuckle SOA from our JCG partner Johannes Brodwall at the Thinking Inside a Bigger Box blog....

Read-only ViewObject and Declarative SQL mode

Introduction The declarative SQL mode is considered to be one of the most valuable advantages of entity-based view objects. In this mode the VO’s SQL is generated at runtime depending on the attributes shown in the UI. For example, if some page contains a table with only two columns, EmployeeId and FirstName, then the query will be generated as “select Employee_ID, First_Name from Employees”. This feature can significantly improve the performance of an ADF application. But what about read-only or SQL-based view objects? JDeveloper doesn’t allow you to choose the SQL mode for SQL-based VOs. Only “Expert” mode can be used, with no chance to have the query generated on the fly. But everything is possible. In this post we have an example of a SQL-based view object VEmployees:Let’s generate the View Object Definition class:We’re going to override some methods: @Override public boolean isRuntimeSQLGeneration() { return true; }@Override public boolean isFullSql() { return false; }@Override //In our case we know the FROM clause exactly public String buildDefaultFrom(AttributeDef[] attrDefs, SQLBuilder builder, BaseViewCriteriaManagerImpl vcManager) { return "Employees"; }@Override //Clearing the "Selected in Query" property for each attribute except the PK protected void createDef() { for (AttributeDef at : getAttributeDefs()) if (!at.isPrimaryKey()) ((AttributeDefImpl) at).setSelected(false); }Actually, that’s it! Let’s test it. For the page showing the full set of attributes, we have the result:And the generated query (I use the ODL analyzer):For the page with only two attributes we have the following result:And the query:The sample application for this post requires JDeveloper 11.1.2.1.0 and the standard HR schema. Reference: Read-only ViewObject and Declarative SQL mode from our JCG partner Eugene Fedorenko at the ADF Practice blog....

JSF Event-based communication: New-school approach

In the last post, we learned about event-based communication based on the Observer / Event Listener and Mediator patterns. Due to their shortcomings, I would like to show more efficient ways of doing event-based communication. We will start with Google Guava EventBus and end up with CDI (Contexts and Dependency Injection for the Java EE platform). Guava EventBus The Google Guava library has a useful package, eventbus. The class EventBus allows publish-subscribe-style communication between components without requiring the components to explicitly register with one another. Because we develop web applications, we should encapsulate an instance of this class in a scoped bean. Let’s write the EventBusProvider bean. public class EventBusProvider implements Serializable {private EventBus eventBus = new EventBus("scopedEventBus");public static EventBus getEventBus() { // access EventBusProvider bean ELContext elContext = FacesContext.getCurrentInstance().getELContext(); EventBusProvider eventBusProvider = (EventBusProvider) elContext.getELResolver().getValue(elContext, null, "eventBusProvider");return eventBusProvider.eventBus; } }I would like to demonstrate all the main features of Guava EventBus in a single example. Let’s write the following event hierarchy: public class SettingsChangeEvent {}public class LocaleChangeEvent extends SettingsChangeEvent {public LocaleChangeEvent(Object newLocale) { ... } }public class TimeZoneChangeEvent extends SettingsChangeEvent {public TimeZoneChangeEvent(Object newTimeZone) { ... } }The next steps are straightforward. To receive events, an object (bean) should expose a public method, annotated with the @Subscribe annotation, which accepts a single argument of the desired event type. The object needs to pass itself to the register() method of the EventBus instance.
Let’s create two beans: public class MyBean1 implements Serializable {@PostConstruct public void initialize() throws Exception { EventBusProvider.getEventBus().register(this); }@Subscribe public void handleLocaleChange(LocaleChangeEvent event) { // do something }@Subscribe public void handleTimeZoneChange(TimeZoneChangeEvent event) { // do something } }public class MyBean2 implements Serializable {@PostConstruct public void initialize() throws Exception { EventBusProvider.getEventBus().register(this); }@Subscribe public void handleSettingsChange(SettingsChangeEvent event) { // do something } }To post an event, simply provide the event object to the post() method of the EventBus instance. The EventBus instance will determine the type of the event and route it to all registered listeners. public class UserSettingsForm implements Serializable {private boolean changed;public void localeChangeListener(ValueChangeEvent e) { changed = true; // notify subscribers EventBusProvider.getEventBus().post(new LocaleChangeEvent(e.getNewValue())); }public void timeZoneChangeListener(ValueChangeEvent e) { changed = true; // notify subscribers EventBusProvider.getEventBus().post(new TimeZoneChangeEvent(e.getNewValue())); }public String saveUserSettings() { ...if (changed) { // notify subscribers EventBusProvider.getEventBus().post(new SettingsChangeEvent());return "home"; } } }Guava EventBus allows you to create a listener that reacts to many different events – just annotate several methods with @Subscribe and that’s all. Listeners can also leverage an existing event hierarchy. So if listener A is waiting for events of type A, and event A has a subclass named B, this listener will receive both types of events, A and B. In our example, we posted three events: SettingsChangeEvent, LocaleChangeEvent and TimeZoneChangeEvent. The handleLocaleChange() method in MyBean1 will only receive LocaleChangeEvent. The method handleTimeZoneChange() will only receive TimeZoneChangeEvent.
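The assignability rule behind this (a handler fires whenever its parameter type is assignable from the posted event's class) can be illustrated with a tiny hand-rolled bus. This is a toy sketch, not Guava's actual implementation; the handle* naming convention here merely stands in for the @Subscribe annotation:

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Toy bus: dispatches an event to every registered subscriber method whose
// single parameter type is assignable from the event's class.
// Not Guava's implementation, just a sketch of the dispatch rule.
class ToyEventBus {
    private final List<Object> subscribers = new ArrayList<Object>();

    public void register(Object subscriber) {
        subscribers.add(subscriber);
    }

    public void post(Object event) throws Exception {
        for (Object subscriber : subscribers) {
            for (Method method : subscriber.getClass().getMethods()) {
                if (method.getName().startsWith("handle")
                        && method.getParameterTypes().length == 1
                        && method.getParameterTypes()[0].isAssignableFrom(event.getClass())) {
                    method.setAccessible(true); // subscriber classes may not be public
                    method.invoke(subscriber, event);
                }
            }
        }
    }
}
```

With this rule, a method taking SettingsChangeEvent also receives LocaleChangeEvent and TimeZoneChangeEvent, since both are assignable to their superclass.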
But look at the method handleSettingsChange() in MyBean2: it will receive all three events! As you can see, manual registration is still needed (EventBusProvider.getEventBus().register(this)), and the problem with scoped beans I mentioned in the previous post still exists. We have to be aware of the scoping of EventBusProvider and the scoping of the publisher/subscriber beans. But as you can also see, we have some improvements in comparison to the Mediator pattern: no special interfaces are needed, the subscribers’ method names are not fixed, multi-event listeners are possible too, there is no effort to manage registered instances, etc. Last but not least, there are the asynchronous AsyncEventBus and subscription to DeadEvent (for listening for any events which were dispatched without listeners, handy for debugging). Please follow this guide to convert an existing EventListener-based system to an EventBus-based one. CDI (Contexts and Dependency Injection) Every Java EE 6 compliant application server supports CDI (the JSR-299 specification). It defines a set of complementary services that help improve the structure of application code. The best-known implementations of CDI are OpenWebBeans and JBoss Weld. Events in CDI allow beans to interact with no dependency at all. Event producers raise events that are delivered to event observers by the container. This basic schema might sound like the familiar Observer / Observable pattern, but there are a couple of benefits. Event producers and event observers are decoupled from each other. Observers can specify a combination of “selectors” to narrow the set of event notifications they will receive. Observers can be notified immediately, or delivery can be delayed until the end of the current transaction.
There is no headache with scoping, thanks to conditional observer methods (remember the problem of scoped beans with Mediator / EventBus?). Conditional observer methods allow you to obtain a bean instance that already exists, only if the scope of the bean declaring the observer method is currently active, without creating a new bean instance. If the observer method is not conditional, the corresponding bean will always be created. You are flexible! The CDI event mechanism is, in my opinion, the best approach for event-based communication. The subject is complex, so let’s only show the basic features. An observer method is a method of a bean with a parameter annotated @Observes. public class MyBean implements Serializable {public void onLocaleChangeEvent(@Observes Locale locale) { ... } }The event parameter may also specify qualifiers if the observer method is only interested in qualified events – these are events which have those qualifiers. public void onLocaleChangeEvent(@Observes @Updated Locale locale) { ... }An event qualifier is just a normal qualifier, defined using @Qualifier. Here is an example: @Qualifier @Target({FIELD, PARAMETER}) @Retention(RUNTIME) public @interface Updated {}Event producers fire events using an instance of the parametrized Event interface. An instance of this interface is obtained by injection. A producer raises events by calling the fire() method of the Event interface, passing the event object. public class UserSettingsForm implements Serializable {@Inject @Any Event<Locale> localeEvent;public void localeChangeListener(ValueChangeEvent e) { // notify all observers localeEvent.fire((Locale)e.getNewValue()); } }The container calls all observer methods, passing the event object as the value of the event parameter. If any observer method throws an exception, the container stops calling observer methods, and the exception is re-thrown by the fire() method. The @Any annotation above acts as an alias for any and all qualifiers.
You see, no manual registration of observers is necessary. Easy? Specifying other qualifiers at the injection point is simple as well: // this will raise events to observers having parameter @Observes @Updated Locale @Inject @Updated Event<Locale> localeEvent;You can also have multiple event qualifiers. The event is delivered to every observer method that has an event parameter to which the event object is assignable, and does not have any event qualifier except the event qualifiers matching those specified at the Event injection point. The observer method may have additional parameters, which are injection points. Example: public void onLocaleChangeEvent(@Observes @Updated Locale locale, User user) { ... }What about specifying the qualifier dynamically? CDI allows you to obtain a proper qualifier instance by means of AnnotationLiteral. This way, we can pass the qualifier to the select() method of Event. Example: public class DocumentController implements Serializable {Document document;@Inject @Updated @Deleted Event<Document> documentEvent;public void updateDocument() { ... // notify observers with @Updated annotation documentEvent.select(new AnnotationLiteral<Updated>(){}).fire(document); }public void deleteDocument() { ... // notify observers with @Deleted annotation documentEvent.select(new AnnotationLiteral<Deleted>(){}).fire(document); } }Let’s talk about “conditional observer methods”. By default, if there is no instance of an observer in the current context, the container will instantiate the observer in order to deliver an event to it. This behaviour isn’t always desirable. We may want to deliver events only to instances of the observer that already exist in the current context. A conditional observer is specified by adding receive = IF_EXISTS to the @Observes annotation. public void onLocaleChangeEvent(@Observes(receive = IF_EXISTS) @Updated Locale locale) { ... }Read more about Scopes and Contexts here.
In this short post we cannot cover further features such as “event qualifiers with members” and “transactional observers”. I would like to encourage everybody to start learning CDI. Have fun! Reference: Event-based communication in JSF. New-school approach. from our JCG partner Oleg Varaksin at the Thoughts on software development blog....

Spring 3 Internationalization and Localization

I recently wanted to add the internationalization and localization feature provided by Spring 3 to one of my current projects. I went through the Spring documentation and then searched the internet for some resources. But I could not find a resource that satisfied my client's requirements. Most of the tutorials are hello-world applications that give only a basic understanding, and even the Spring documentation does not give a detailed explanation of integrating this feature into your own project. Expert developers can pick up the essentials from the Spring documentation, but others have to put in extra effort to get things up and running. With this tutorial, I am going to explain a very practical scenario that most clients expect. The requirement I am using Spring Security with my application. The user should be able to select the language from the log-in page, which is specified as the ‘login-page’ in the Spring Security XML file. I have provided “English”, “Chinese”, “German” and “Spanish” links in the top right corner of my log-in page to select the language. The user can select the language and log in to the system by providing a username and password. Then the whole application should be rendered in the selected language. Also, when selecting the language from the log-in page, the contents of the log-in page itself should change. Spring configurations As the first step, I had to configure the LocaleChangeInterceptor interceptor within the dispatcher-servlet.xml file. This XML file's name will change according to the name given to the DispatcherServlet in the web.xml file. I have given ‘dispatcher’ as the name for the DispatcherServlet, so I should create the ‘dispatcher-servlet.xml’ file under the /WEB-INF folder. My application is running on Tomcat 7. I could not make it work by declaring this interceptor the way the Spring documentation suggests: the request for changing the locale before logging in (i.e. from the login page) was not intercepted by the locale change interceptor.
Therefore, I had to declare it as follows. <mvc:interceptors> <mvc:interceptor> <mvc:mapping path="/doChangeLocale*"/> <bean class="org.springframework.web.servlet.i18n.LocaleChangeInterceptor" > <property name="paramName" value="locale" /> </bean> </mvc:interceptor> </mvc:interceptors> The ‘LocaleChangeInterceptor’ will intercept the request asking for a locale change, and the corresponding locale code will be stored in the session with the help of the ‘SessionLocaleResolver’. Next we will look at how to declare the ‘SessionLocaleResolver’ in the ‘dispatcher-servlet.xml’ file. <bean id="localeResolver" class="org.springframework.web.servlet.i18n.SessionLocaleResolver"> <property name="defaultLocale" value="en" /> </bean> The SessionLocaleResolver will store the locale in the current session and resolve it for every subsequent user request in that session. Next, we have to declare the message resource bean. <bean id="messageSource" class="org.springframework.context.support.ReloadableResourceBundleMessageSource"> <property name="useCodeAsDefaultMessage" value="true" /> <property name="basenames"> <list> <value>classpath:messages</value> </list> </property> <property name="cacheSeconds" value="0" /> <property name="defaultEncoding" value="UTF-8"></property> </bean> My application should support 4 languages, so I added 4 property files to the ‘resources’ folder (ultimately all those property files should end up in the ‘classes’ folder) as follows.messages_de.properties – German messages_en.properties – English messages_zh.properties – Chinese messages_es.properties – SpanishNote that all the file names should start with the text you specified in the ‘basenames’ property of the message resource bean. The Spring 3 security configurations were very important in this implementation. Keep in mind that when you click any locale change link on the log-in page, you are not authenticated yet. But that request should still be intercepted by the ‘LocaleChangeInterceptor’.
Otherwise, the language will not be changed as expected. Therefore, any anonymous user should be allowed to make a locale change request, and that request should go through the ‘LocaleChangeInterceptor’. Look carefully at my Spring Security configuration. <http auto-config="false"> <form-login login-page="/login.jsp" authentication-failure-url="/login.jsp?login_error=true" default-target-url="/mainMenu.htm"/> <logout logout-success-url="/login.jsp"/> <intercept-url pattern="/doChangeLocale**" access="ROLE_ANONYMOUS,ROLE_ADMIN,ROLE_USER"/> <intercept-url pattern="/**" access="ROLE_ADMIN,ROLE_USER" /> </http> The login.jsp file is where the user logs into the system by providing a username and password, and that page also has the corresponding links to change the locale. When a user makes any request to a protected resource without authenticating, the user will be redirected to the login.jsp page. The above configuration says that all requests coming to the application should be from an authenticated and authorized user, except for the ‘/doChangeLocale**’ request. The intercept URL ‘/doChangeLocale**’ is very important. Without it, the requests for changing locales are not intercepted by the locale change interceptor, and the locale will not change. The following are the locale change links placed in the login.jsp file. <a href="<%=request.getContextPath()%>/doChangeLocale?locale=en">English</a> <a href="<%=request.getContextPath()%>/doChangeLocale?locale=de">German</a> <a href="<%=request.getContextPath()%>/doChangeLocale?locale=es">Spanish</a> <a href="<%=request.getContextPath()%>/doChangeLocale?locale=zh">Chinese</a>Hope this will be helpful for you. Reference: Spring 3 Internationalization and Localization – Not ‘Hello World’, But ‘Practical’ from our JCG partner Semika loku kaluge at the Code Box blog....
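Under the hood, the messageSource bean resolves a message key by trying locale-specific bundles before the default one. A simplified sketch of that candidate ordering (the real ReloadableResourceBundleMessageSource also handles country and variant suffixes, caching, and classpath resolution):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

// Simplified view of how a basename plus the resolved locale maps to the
// property files configured above (messages_de.properties, messages_en.properties, ...).
class MessageBundleNames {

    static List<String> candidates(String basename, Locale locale) {
        List<String> names = new ArrayList<String>();
        names.add(basename + "_" + locale.getLanguage() + ".properties"); // most specific first
        names.add(basename + ".properties");                              // fallback
        return names;
    }
}
```

For ?locale=de this yields messages_de.properties first, so the German texts win whenever the key exists there.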

Hours, Velocity, Silo’d Teams, & Gantts

I’ve been having some email conversations with some project and program managers turned Scrum Masters. In general, here’s how things have proceeded: Their organizations decided agile was a great idea Their organizations decided Scrum was a great idea to implement agile (because they don’t know the difference between Scrum and agile) The teams started working in two-week iterations, sort-of getting close to done at the end of the iteration But, if you peel back the covers, what you see is not really Scrum. The person with the Scrum Master title is doing funky things, such as: preparing Gantt charts for the iteration, so you can see who will do what when predicting velocity based on hours (!!) predicting the number of story points per person per day telling the team what they can commit to for stories and all kinds of other strangeness I don’t associate with agile And, the teams are still silo’d teams. That is, there are developer teams, tester teams, architects leading component teams. There are 2- and 3-person teams here and maybe a 4-person team there. These Scrum Master/project manager/program manager people have their hearts in the right places. But they have not had training, and they don’t know what agile can do for them. So while their iterations are helping them and their projects, they look around and say, “Why is agile not helping me?” If you are one of those people, you have options. Here is my list of recommendations: Stop trying to predict anything for the teams. Make the teams work as teams, and that brings us to #2. Move to cross-functional teams. Make them a reasonable size, such as 5-7 people. Make sure you have at least one tester on each team. If you don’t have enough testers to go around, that’s an impediment, and your team is going to have to develop tests for your code. But teams of 2 people are too small. And much of the time, teams of 3 people are too small. Tester-teams are wrong, just plain wrong. Testers go with developers.
You need a cross-functional team to deliver features. Integrate all your architects into the feature teams. If you have more architects than you have teams, you have too many architects. (Yes, one of my correspondents has that problem.)Now, you have teams that might be able to work together. You, as the agile project manager/erstwhile Scrum Master, you, stay out of the middle!Now, when you have an iteration planning meeting, here is what you do. You ask the Product Owner to present the stories, and ask the team if the team can commit to the story for this iteration. That’s it. You don’t commit, the team commits. The team can estimate, but the team commits. If you start predicting velocity and you start predicting which stories a team can commit to, you are not doing agile. You are doing command-and-control in iterations. Oh, and if anyone starts to tell people, “Jim, that’s your story, Sue, that’s your story,” gag that person. Okay, maybe that’s a little extreme. But only a little. Remember, the idea is that the team commits to stories, not a person. What you can say is this, “Is it in our best interests as a team to commit to stories as a person? Remember, we want to make sure all of our stories are done at the end of the iteration. That means the testing has to be done. And, all of the user experience has to be done. (And any of the other special for-your-product stuff has to be done.) If someone who is an expert commits, what happens to the all the other pieces? Does that help us get all the stories done?” Then you hush. You can always facilitate the retrospective and help people learn from what happened. During an iteration, if anyone wants to know what a given team member is doing, you can say, “Look at the board.” If that person wants to know more by seeing a Gantt, you can say, “No, we don’t have Gantts in agile.” If that person signs your paycheck, you can remind that person that you have a demo every two weeks. 
If that person is insistent, you can ask what the real issue is. Because if that person looks at the people working, can’t that person see that everyone on the team is heads-down working? Look for the information that person wants and find another way to deliver it.

If you have trouble seeing what’s really happening, consider adding kanban to your iterations, so you can see whether you have bottlenecks. Many organizations are understaffed in some area or other, and until you add a kanban board, you can’t see it. Kanban allows you to visualize the flow.

Make sure you do a retrospective at the end of each iteration. Every single time. The retros can help you more than you know. Choose one thing to work on after each retro (okay, maybe up to three things), and see how fast you improve.

What’s key is for the teams to turn into self-directed teams, not manager-led teams. The teams have to take responsibility for their own work, and fast. They have to recognize their own impediments. For many of these teams, one of the major impediments is that the stories are too large. They don’t realize it, so they try to take on too many stories at the start of the iteration. They can’t finish them all, so they don’t get credit, and they are left with unfinished work at the end of the iteration. Well, that frustrates everyone. Who is a “bad” estimator? Maybe no one. If the stories were smaller, or if a sufficiently large team swarmed around the stories, maybe the teams could complete the stories in the iteration. But asking a 2-person team to complete something that takes a 6-person team 1 week is crazy. Of course, I think a story that takes a 6-person team 1 week is too big.

So, if you are on one of these not-quite-agile teams, take heart. First, you are not alone. There are people all over the world, just like you. If your management won’t allow you to take training, start reading. I’m sure there will be comments about what else to read.
Here are my suggestions for reading:

- Join the scrumdevelopment group, which is a Yahoo group. There is plenty of free advice there. Much of it is good. Listen to everything Ron Jeffries says.
- Manage It! Your Guide to Modern, Pragmatic Project Management. I offer you tons of ideas about facilitative project management. (Yes, you can buy my book on Amazon. But you can only buy the electronic version on the Prag site, because that way you can get the updates for free.)
- Exploring Scrum: The Fundamentals. Rawsthorne and Shimp take you through the nuts and bolts of Scrum.

Reference: Hours, Velocity, Silo’d Teams, & Gantts from our JCG partner Johanna Rothman at the Managing Product Development blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy