
Making the right decisions when optimizing code

You have an optimization task at hand. Which is great – you finally get to do something interesting instead of implementing yet another invoice processing screen. The optimization task is (hopefully) linked to some non-functional requirement, such as the number of transactions your application has to process per second or the maximum time allowed per transaction.

Now, you have identified the part of your application that is the source of the performance problem. This part of your application uses a particular algorithm. You are considering switching to another implementation and want to measure the performance impact. As discussed in one of our previous articles, you wisely choose the number of user transactions per second as the metric to measure. You run your stress test with the old implementation and write down the number of operations per second achieved, then run the very same stress test with the new implementation and write down the new number. Next, you compare the numbers and make some decisions. Some of these decisions may include the requirement for further performance measurements and/or optimizations of the new implementation. Let me present a very simplistic example:

With your old algorithm, 1,000 transactions took 100 seconds.
With your improved algorithm, the same 1,000 transactions took 90 seconds.

Great! The new version is faster! The changes you have made to the algorithm gave you 10% better performance. You might now need to make further decisions based on that information concerning the next steps in optimizing your algorithm. But let us bring one more piece of information into account: the time spent on garbage collection. From the GC log you can extract the total GC pause time. That is the amount of time your application was stopped for stop-the-world GC work. Then we have the following picture:

With your old algorithm, 1,000 transactions took 100 seconds, out of which GC pauses used 20 seconds.
With your improved algorithm, 1,000 transactions took 90 seconds, out of which GC pauses used 27 seconds.

What can we deduce from this information? First of all, your algorithm's running time has decreased from 100 – 20 = 80 seconds down to 90 – 27 = 63 seconds, a 21% speedup. Secondly, the GC takes about 30% of the total run time. Based on that, your further optimization plans should focus not only on running speed, but also on decreasing memory usage and GC time. How should you decide which direction to take? To answer that, we should look into your performance requirements. Maybe you have already fulfilled your requirement on how many transactions per second you need to process. But are all the transactions processed in a short enough time? Maybe your original wise idea of measuring only the number of transactions per second was not the best one after all? In technical terms those requirements translate into two different aspects, namely throughput and latency. Your current work has improved the throughput, but may have penalized the latency by introducing more and/or longer GC pauses. Further optimizations of your algorithm should be steered by the NFRs. Let's imagine you still have not reached your throughput requirements. What could be your next steps? Considering that your implementation spends 30% of its time on GC pauses, this fact alone should raise a red flag.
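To make the arithmetic above concrete, here is a minimal sketch that simply redoes the calculation with the made-up example figures from this article (the numbers are illustrative, not real measurements):

public class GcOverheadExample {
    public static void main(String[] args) {
        double oldTotal = 100, oldGc = 20;   // old algorithm: total run time and GC pause time, in seconds
        double newTotal = 90,  newGc = 27;   // improved algorithm

        double oldWork = oldTotal - oldGc;   // 80 seconds actually spent in the algorithm
        double newWork = newTotal - newGc;   // 63 seconds actually spent in the algorithm

        // prints "Algorithm speedup: 21%" and "GC share of the new run: 30%"
        System.out.printf("Algorithm speedup: %.0f%%%n", (oldWork - newWork) / oldWork * 100);
        System.out.printf("GC share of the new run: %.0f%%%n", newGc / newTotal * 100);
    }
}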
In a typical case (yes, I know it's like measuring the average temperature in a hospital and making decisions based on that) the GC pauses should not take more than 5% of the total time. And if you exceed 10%, it is extremely likely that you should do something about it. The first step in reducing GC pauses is investigating your JVM configuration. Maybe you just need to increase the maximum heap size (-Xmx)? Maybe you should tune your generation sizes (-XX:NewRatio, -XX:SurvivorRatio, …)? Maybe you should experiment with different garbage collectors (-XX:+UseConcMarkSweepGC, -XX:+UseParallelGC, …)? Depending on your application, the right answer could be a combination of the above, just some of them, or none of them might help at all. When configuring the JVM does not provide sufficient results, the next step is to look into your data structures. Maybe you can reduce the GC pauses by making changes to your source code. Maybe you can get rid of all the wrapper classes around primitives and significantly reduce the overhead? Maybe you could take a look at the Collection classes used and reduce the overhead they pose? Or, for some exotic cases where your algorithm constantly keeps creating and destroying the same objects, perhaps object pooling might be a good idea? And in the case above, only after all this might you actually want to start reducing the overhead posed by your algorithm itself. Which in some exotic cases might lead you to discover that division is too slow, and that you need to find a clever way to replace it with a less expensive operation supported by your data structures. Or that the memory barrier accompanying each write to java.util.concurrent.atomic.AtomicBoolean is too expensive. But let's leave those cases for another story in which I will describe some of the weirdest CPU wasters I have dealt with in my life. In conclusion – if you take up a quest to optimize your code, make sure you have thought through the requirements for both throughput and latency, and that you won't stick to just one optimization technique. Hopefully, after reading the article, you now have more tools in your arsenal.   Reference: Making the right decisions when optimizing code from our JCG partner Nikita Salnikov Tarnovski at the Plumbr Blog blog. ...

Tips for Writing Maven Plugins

I've spent a lot of time recently writing or working on plugins for Maven. They're simple, rewarding and fun to write. I thought I'd share a couple of tips for making life easier when writing them.

Tip 1: Separate the Task from the Mojo
Initially you'll put all the code for the mojo into the mojo's class (i.e. the class that extends AbstractMojo). Instead, think about separating the task out and having minimal shim code for the mojo (a small sketch of this separation follows at the end of this post). This will:
Make it easier to unit test.
Mean you can easily integrate with other tools, such as Ant.
Make it easier to convert the simple data types in the mojo to more complex types for your task (e.g. turn a String into a File).
Separate exception translation from the task.

Tip 2: Consider Externalizing Configuration
Normally you configure a plugin using the configuration element in the POM. This is fine for simple cases. When you have a large set of configuration items, or where you might have several configuration profiles, this will result in long, hard to understand POMs. You can follow the assembly plugin's example and have a standardised directory for putting config in, e.g. src/main/myplugin/config-profile-1.xml.

Tip 3: Match the Mojo to the Phase
Consider which phase you might want the mojo to run in. If it is doing things that ought to be split across phases, then split the mojo up and bind each part to the appropriate phase. It'll make it easier to test and to maintain.

Tip 4: Don't Repeat Time Consuming Work
Your mojo will get run multiple times on small variations of the same source code and config. Does it do a lot of intensive work every execution? I was working with a mojo that unzipped a file every time it ran; by changing the unzip code to only freshen files by checking file modification times, the task went from taking over a minute to execute to less than 10 seconds.

Tip 5: Plan Your Testing
Initially you're probably writing your mojo and manually testing it on the first project you're going to use it on. This will be a long testing cycle, and result in an unreliable mojo. Separating the task from the mojo makes testing the task easy, but you'll want to have some smoke tests for the mojo. Bugs in mojos can be hard for users to notice as there's a tendency to assume most mojos are well tested and reliable.

Tip 6: Consider how you Provide Documentation and Help for your Mojo
IDEs and Maven can be a bit unhelpful here. What does that config item mean? Can I see an example? The solution is to provide a "help" mojo and optionally a Maven site. For example, if you execute "mvn assembly:help" or "mvn surefire:help -Ddetail=true -Dgoal=test" you'll see help.   Reference: Tips for Writing Maven Plugins from our JCG partner Alex Collins at the Java, *NIX, testing, performance, e-gaming technology et al blog. ...
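To make Tip 1 a bit more tangible, here is a hedged sketch of a mojo that is nothing but a thin shim over a plain task class. The class and parameter names are purely illustrative, and the old javadoc-style mojo annotations are assumed:

import java.io.File;

import org.apache.maven.plugin.AbstractMojo;
import org.apache.maven.plugin.MojoExecutionException;

/**
 * @goal unpack
 */
public class UnpackMojo extends AbstractMojo {

    /** @parameter expression="${unpack.archive}" */
    private String archive;   // Maven populates the simple type...

    public void execute() throws MojoExecutionException {
        try {
            // ...and the mojo converts it and delegates, so the task stays Maven-free
            new UnpackTask(new File(archive)).run();
        } catch (Exception e) {
            // exception translation lives in the shim, not in the task
            throw new MojoExecutionException("Unpacking " + archive + " failed", e);
        }
    }
}

// The task knows nothing about Maven, so it can be unit tested or driven from Ant.
class UnpackTask {
    private final File archive;

    UnpackTask(File archive) {
        this.archive = archive;
    }

    void run() {
        // e.g. unzip, freshening only files whose modification time changed (see Tip 4)
    }
}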

Groovy JDK (GDK): Date and Calendar

I have looked at some highly useful methods available in Groovy GDK‘s extensions to the Java JDK in blog posts such as Groovy JDK (GDK): File.deleteDir(), Groovy JDK (GDK): Text File to String, Groovy JDK (GDK): More File Fun, Groovy JDK (GDK): String Support, and Groovy JDK (GDK): Number Support. In this post, I look at some of the endearing features of Groovy’s GDK extensions to the Java JDK java.util.Date and java.util.Calendar classes. Java’s current standard support for dates and times is generally disliked in the Java development community. Many of us look forward to JSR-310 and/or already use Joda Time to get around the shortcomings of Java’s treatment of dates and times.     Groovy makes working with dates and times a little easier when third-party frameworks are not available or cannot be used.The Groovy GDK extension of Date provides several new and highly useful methods as shown in the screen snapshot of its documentation.Some of these useful mehtods that I will highlight in this post are clearTime(), format(String), getDateString(), getTimeString(), parse(String, String), parseToStringDate(String), toCalendar(), toTimestamp(), and updated(Map). Many of the other methods listed in the API support Groovy operator overloading and are not highlighted in this post. Date.clearTime() and Calendar.clearTime() There are times when one wishes to represent a date only and the time portion of a Date or Calendar is not important (which is exactly why JSR 310 is bringing date-only constructs such as LocalDate to JDK 8). In such cases, Groovy’s extension to Date and Calendar make it easy to ‘clear’ the time component. The next code listing demonstrates use of Date.clearTime() followed by a screen snapshot showing that code executed. Note that the clearTime() method mutates the object it acts upon. /** * Demonstrates Groovy's GDK Date.clearTime() method. Note that the clearTime() * method acts upon the Date object upon which it is called, mutating its value * in addition to returning a reference to that changed object. */ def demoClearTime() { printTitle('Groovy GDK Date.clearTime()') def now = new Date() println 'Now: ${now}' def timelessNow = now.clearTime() println 'Now sans Time: ${timelessNow}' println 'Mutated Time: ${now}' }Calendar‘s clearTime() works similarly as shown in the next code snippet and its accompanying screen snapshot of its execution. /** * Demonstrates Groovy's GDK Calendar.clearTime() method. Note that the * clearTime() method acts upon the Calendar object upon which it is called, * mutating its value in addition to returning a reference to that changed object. */ def demoCalendarClearTime() { printTitle('Groovy GDK Calendar.clearTime()') def now = Calendar.getInstance() println 'Now: ${now}' now.clearTime() println 'Now is Timeless: ${now}' }Date.format and Calendar.format It is common in Java development to need to display a Date or Calendar in a specific user-friendly format and this is typically accomplished using instances of SimpleDateFormat. Groovy simplifies this process of applying a format to a Date or String with the respective methods Date.format(String) and Calendar.format(String). Code listings demonstrating each are shown next with each code listing followed by a screen snapshot displaying the executed code. /** * Demonstrate how much more easily a formatted String representation of a Date * can be acquired in Groovy using GDK Date.format(String). No need for an * explicit instance of SimpleDateFormat or any other DateFormat implementation * here! 
*/ def demoFormat() { printTitle('Groovy GDK Date.format(String)') def now = new Date() println 'Now: ${now}' def dateString = now.format('yyyy-MMM-dd HH:mm:ss a') println 'Formatted Now: ${dateString}' }/** * Demonstrate how much more easily a formatted String representation of a * Calendar can be acquired in Groovy using GDK Calendar.format(String). No need * for an explicit instance of SimpleDateFormat or any other DateFormat * implementation here! */ def demoCalendarFormat() { printTitle('Groovy GDK Calendar.format(String)') def now = Calendar.getInstance() println 'Now: ${now}' def calendarString = now.format('yyyy-MMM-dd HH:mm:ss a') println 'Formatted Now: ${calendarString}' }Date.getDateString(), Date.getTimeString(), and Date.getDateTimeString() The format methods shown previously allow customized representation of a Date or Calendar and the clearTime methods shown previously allow the time element to be removed from an instance of a Date or Calendar. Groovy provides some convenience methods on Date for displaying a user-friendly date only, time only, or date and time without specifying a format or clearing the time component. These methods print dates and times in the predefined format specified by DateFormat.SHORT (for date portions) and DateFormat.MEDIUM (for time portions). Code listings of each of these methods are shown next and are each followed by screen snapshots of that code being executed. /** * Demonstrates Groovy's GDK Date.getDateString() method. Note that this * method doesn't change the underlying date, but simply presents only the date * portion (no time portion is presented) using the JDK's DateFormat.SHORT * constant (which defines the locale-specific 'short style pattern' for * formatting a Date). */ def demoGetDateString() { printTitle('Groovy GDK Date.getDateString()') def now = new Date() println 'Now: ${now}' println 'Date Only: ${now.getDateString()}' println 'Now Unchanged: ${now}' }/** * Demonstrates Groovy's GDK Date.getTimeString() method. Note that this * method doesn't change the underlying date, but simply presents only the time * portion (no date portion is presented) using the JDK's DateFormat.MEDIUM * constant (which defines the locale-specific 'medium style pattern' for * formatting a Date). */ def demoGetTimeString() { printTitle('Groovy GDK Date.getTimeString()') def now = new Date() println 'Now: ${now}' println 'Time Only: ${now.getTimeString()}' println 'Now Unchanged: ${now}' }/** * Demonstrates Groovy's GDK Date.getDateTimeString() method. Note that this * method doesn't change the underlying date, but simply presents the date and * time portions as a String. The date is presented with locale-specific format * as defined by DateFormat.SHORT and the time is presented with locale-specific * format as defined by DateFormat.MEDIUM. */ def demoGetDateTimeString() { printTitle('Groovy GDK Date.getDateTimeString()') def now = new Date() println 'Now: ${now}' println 'Date/Time String: ${now.getDateTimeString()}' println 'Now Unchanged: ${now}' }Date.parse(String, String) The GDK Date class provides a method Date.parse(String, String) that is a ‘convenience method’ that ‘acts as a wrapper for SimpleDateFormat.’ A code snippet and corresponding screen snapshot of the code’s output follow and demonstrate this method’s usefulness. /** * Demonstrate Groovy GDK's Date.parse(String, String) method which parses a * String (second parameter) based on its provided format (first parameter). 
*/ def demoParse() { printTitle('Groovy GDK Date.parse(String, String)') def nowString = '2012-Nov-26 11:45:23 PM' println 'Now String: ${nowString}' def now = Date.parse('yyyy-MMM-dd hh:mm:ss a', nowString) println 'Now from String: ${now}' }Date.parseToStringDate(String) The GDK Date.parseToStringDate(String) method can be used to obtain an instance of Date from a String matching the exact format put out by the Date.toString() method. In other words, this method can be useful for converting back to a Date from a String that was generated from a Date‘s toString() method. Use of this method is demonstrated with the following code snippet and screen snapshot of the corresponding output. /** * Demonstrate Groovy GDK's Date.parseToStringDate(String) method which parses * a String generated by a Date.toString() call, but assuming U.S. locale to * do this. */ def demoParseToStringDate() { printTitle('Groovy GDK Date.parseToStringDate(String)') def now = new Date() println 'Now: ${now}' def nowString = now.toString() def nowAgain = Date.parseToStringDate(nowString) println 'From toString: ${nowAgain}' }There is one potentially significant downside to the GDK Date.parseToStringDate(String) method. As its documentation states, it relies on ‘US-locale-constants only.’ Date.toCalendar() and Date.toTimestamp() It is often useful to convert a java.util.Date to a java.util.Calendar or java.sql.Timestamp. Groovy makes these common conversions particularly easy with the GDK Date-provided methods Date.toCalendar and Date.toTimestamp(). These are demonstrated in the following code snippets with their output displayed in corresponding screen snapshots. /** * Demonstrates how easy it is to get a Calendar instance from a Date instance * using Groovy's GDK Date.toCalendar() method. */ def demoToCalendar() { printTitle('Groovy GDK Date.toCalendar()') def now = new Date() println 'Now: ${now}' def calendarNow = now.toCalendar() println 'Now: ${calendarNow} [${calendarNow.class}]' }/** * Demonstrates how easy it is to get a Timestamp instance from a Date instance * using Groovy's GDK Date.toTimestamp() method. */ def demoToTimestamp() { printTitle('Groovy GDK Date.toTimestamp()') def now = new Date() println 'Now: ${now}' def timestampNow = now.toTimestamp() println 'Now: ${timestampNow} [${timestampNow.class}]' }Date.updated(Map) [and Calendar.updated(Map)] The final convenience method provided by the GDK Date that I’m going to discuss in this post is Date.updated(Map), which its documentation describes as ‘Support creating a new Date having similar properties to an existing Date (which remains unaltered) but with some fields updated according to a Map of changes.’ In other words, this method allows one to start with a certain Date instance and acquire another Date instance with the same properties other than changes specified in the provided Map. The next code listing acquires a new Date instance from an existing Date instance with a few fields updated using the Date.updated(Map) method. The code listing is followed by a screen snapshot of its execution. /** * Demonstrate Groovy GDK's Date.updated(Map) with adaptation of the example * provided for that method in that method's Javadoc-based GDK documentation. * Note that the original Date upon which updated is invoked is NOT mutated and * the updates are on the returned instance of Date only. 
*/ def demoUpdated() { printTitle('Groovy GDK Date.updated(Map)') def now = new Date() def nextYear = now[YEAR] + 1 def nextDate = now[DATE] + 1 def prevMonth = now[MONTH] - 1 def oneYearFromNow = now.updated(year: nextYear, date: nextDate, month: prevMonth) println 'Now: ${now}' println '1 Year from Now: ${oneYearFromNow}' }The demonstration shows that the original Date instance does remain unaltered and that a copy with specified fields changed is provided. There is also an equivalent for the GDK Calendar called Calendar.updated(Map). Conclusion One of the things I like about Groovy is the GDK extensions to SDK classes. In this post, I looked at how the GDK Date extension of the JDK’s Date provides many useful convenience methods that lead to more concise and more readable code.   Reference: Groovy JDK (GDK): Date and Calendar from our JCG partner Dustin Marx at the Inspired by Actual Events blog. ...

Should you trust the default settings in JVM?

JVMs are considered smart nowadays. Not much configuration is expected – just set the maximum heap to use in the startup scripts and you should be good to go. All other default settings are just fine. Or so some of us mistakenly thought. Actually, there is a lot going on during runtime which cannot be automatically adjusted for performance, so I am going to walk you through what and when to tweak, using a case study I recently faced. But before jumping to the case itself, some background about the JVM internals being covered. Everything that follows is relevant for Oracle Hotspot 7. Other vendors or older releases of the Hotspot JVM most likely ship with different defaults.

JVM default options
First stop: the JVM tries to determine whether it is running on a server or a client environment. It does this by looking at the architecture and OS combination. In simple summary:

Architecture | CPU / RAM | OS | Default
i586 | Any | MS Windows | Client
AMD64 | Any | Any | Server
64-bit SPARC | Any | Solaris | Server
32-bit SPARC | 2+ cores & > 2GB RAM | Solaris | Server
32-bit SPARC | 1 core or < 2GB RAM | Solaris | Client
i586 | 2+ cores & > 2GB RAM | Linux or Solaris | Server
i586 | 1 core or < 2GB RAM | Linux or Solaris | Client

As an example – if you are running on an Amazon EC2 m1.medium instance on 32-bit Linux, you would be considered to be running on a client machine by default. This is important because the JVM optimizes completely differently on client and on server – on client machines it tries to reduce startup time and skips some optimizations during startup. On server environments some startup time is sacrificed to achieve higher throughput later.

Second set of defaults: heap sizing. If your environment is considered to be a server according to the previous guidelines, your initial heap allocated will be 1/64 of the memory available on the machine. On a 4GB machine, that means your initial heap size will be 64MB. If running under extremely low memory conditions (< 1GB) it can be smaller, but in that case I would seriously doubt you are doing anything reasonable – I have not seen a server in this millennium with less than a gig of memory. And if you have, I'll remind you that a GB of DDR costs less than $20 nowadays … But this is just the initial heap size. The maximum heap size will be the smaller of ¼ of your total available memory or 1GB. So on our 1.7GB Amazon EC2 m1.small instance the maximum heap size available to the JVM would be approximately 435MB.

Next in line: the default garbage collector used. If you are considered to be running on a client JVM, the default applied by the JVM will be the Serial GC (-XX:+UseSerialGC). On server-class machines (again, see the first section) the default will be the Parallel GC (-XX:+UseParallelGC).

There is a lot more going on with the defaults, such as PermGen sizing, different generation tweaks, GC pause limits, etc. But in order to keep the size of the post under control, let's just stick with the aforementioned configurations. For the curious ones – you can read further about the defaults in the following materials:
http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html
http://docs.oracle.com/javase/7/docs/technotes/guides/vm/gc-ergonomics.html
http://docs.oracle.com/javase/7/docs/technotes/guides/vm/server-class.html

Case Study
Now let's see how our case study behaves, and whether we should trust the JVM with these decisions or jump in ourselves. Our application at hand was an issue tracker. Namely JIRA, which is a web application with a relational database in the back-end.
It was deployed on Tomcat and was behaving badly in one of our client environments – not because of any leaks, but due to configuration issues in the deployment. This misbehaving configuration resulted in significant losses in both throughput and latency because of extremely long-running GC pauses. We managed to help out the customer, but for privacy's sake we are not going to cover the exact details here. The case was good though, so we went ahead and downloaded JIRA ourselves to demonstrate some of the concepts we discovered from this real-world case study. What is extremely nice of Atlassian is that they ship some nicely packaged load tests with it, so we had a benchmark to use for our configuration. We carefully unboxed our newly acquired JIRA, installed it on a 64-bit Linux Amazon EC2 m1.medium instance and ran the bundled tests without changing anything in the defaults, which were set by the Atlassian team to -Xms256m -Xmx768m -XX:MaxPermSize=256m. During each run we collected GC logs using -XX:+PrintGCTimeStamps -Xloggc:/tmp/gc.log -XX:+PrintGCDetails and analyzed these statistics with the help of GCViewer. The results were actually not too bad. We ran the tests for an hour, and out of this we lost just 151 seconds to garbage collection pauses, or 4.2% of the total runtime. And the single worst-case GC pause was 2 seconds. So GC pauses were affecting both the throughput and the latency of this particular application, but not too much – yet enough to serve as the baseline for this case study; at our real-world customer the GC pauses were spanning up to 25 seconds. Digging into the GC logs surfaced an immediate problem. Most of the Full GC runs were caused by the PermGen size expanding over time. The logs demonstrated that in total around 155MB of PermGen were used during the tests. So we increased the initial size of the PermGen to a bit more than was actually used by adding -XX:PermSize=170m to the startup scripts. This decreased the total accumulated pauses from 151 seconds to 134 seconds and decreased the maximal latency from 2,000ms to 1,300ms. Then we discovered something completely unexpected. The GC used by our JVM was in fact the Serial GC. Which, if you have carefully followed our post, should not be the case – 64-bit Linux machines should always be considered server-class machines and the GC used should be the Parallel GC. But apparently this is not the case. Our best guess at this point was that, even though the JVM launches in server mode, it still selects the GC based on the memory and cores available. And as this m1.medium instance has 3.75GB memory but only one virtual core, the GC chosen is still serial. But if any of you have more insight on the topic, we are eager to find out more. Nevertheless, we changed the algorithm to -XX:+UseParallelGC and re-ran the tests. Results – accumulated pauses decreased further to 92 seconds, and worst-case latency was also reduced to 1,200ms. For the final test we attempted to try out the Concurrent Mark and Sweep collector. But this algorithm failed completely on us – pauses increased to 300 seconds and latency to more than 5,000ms. Here we gave up and decided to call it a night. So just by playing with two JVM startup parameters and spending a few hours on configuration and interpretation of the results, we had effectively improved both the throughput and the latency of the application.
The absolute numbers might not sound too impressive – GC pauses going from 151 seconds down to 92 seconds and worst-case latency from 2,000ms to 1,200ms – but let's bear in mind this was just a small test with only two configuration settings changed. And looking at it from a percentage point of view – hey, we have both improved the GC pause-related throughput and reduced the latency by roughly 40%! In any case, we now have one more case showing that performance tuning is all about setting goals, measuring, tuning and measuring again. And maybe you are just as lucky as us and can make your users 40% happier by changing just two configuration options …   Reference: Should you trust the default settings in JVM? from our JCG partner Nikita Salnikov Tarnovski at the Plumbr Blog blog. ...
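Before trusting the defaults described above, it is easy to ask a running JVM what it actually picked. The following minimal sketch uses only standard JDK API (Runtime and the garbage collector MXBeans) and makes no other assumptions:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class JvmDefaultsCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("Available processors : " + rt.availableProcessors());
        System.out.println("Max heap (approx.)   : " + rt.maxMemory() / (1024 * 1024) + " MB");

        // The collector bean names reveal whether you ended up with the serial, parallel or CMS collector
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println("Garbage collector    : " + gc.getName());
        }
    }
}

A check along these lines would have surfaced the unexpected serial collector on the m1.medium instance before any load test was started.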

Yammer Metrics, A new way to monitor your application

When you are running long-term applications like web applications, it is good to know some statistics about them, like the number of requests served, request durations, or the number of active requests. But also some more generic information like the state of your internal collections, how many times some portion of code is being executed, or health checks like database availability, or any kind of connection to an external system. All this kind of instrumentation can be achieved by using native JMX or by using a modular project like Metrics. Metrics provides a powerful way to measure the behaviour of your critical components and report them to a variety of systems like JConsole, the system console, Ganglia, Graphite, CSV, or make them available through a web server. To install Metrics, we only have to add the metrics dependency. In this example we are going to use Maven.

<dependencies>
  <dependency>
    <groupId>com.yammer.metrics</groupId>
    <artifactId>metrics-core</artifactId>
    <version>2.2.0</version>
  </dependency>
</dependencies>

Now it is time to add some metrics to our code. In Metrics we can use 6 types of metrics:
Gauges: an instantaneous measurement of a discrete value.
Counters: a value that can be incremented and decremented. Can be used in queues to monitor the remaining number of pending jobs.
Meters: measure the rate of events over time. You can specify the rate unit, the scope of events or the event type.
Histograms: measure the statistical distribution of values in a stream of data.
Timers: measure the amount of time it takes to execute a piece of code and the distribution of its duration.
Health checks: as the name suggests, they centralize our service's health checks of external systems.

So let's write a really simple application (in fact it is a console application) which sends queries to the Google Search system. We will measure the number of requests, the number of characters sent to Google, the last word searched, and a timer for measuring the rate of sending a request and receiving a response. The main class where measures will be applied is called MetricsApplication and is responsible for connecting to Google and sending the entered word.
public class MetricsApplication {

  // Counter
  private final Counter numberOfSendCharacters = Metrics.newCounter(MetricsApplication.class, "Total-Number-Of-Characters");

  // Meter
  private final Meter sendMessages = Metrics.newMeter(MetricsApplication.class, "Sent-Messages", "Send", TimeUnit.SECONDS);

  // Timer
  private final Timer responseTime = Metrics.newTimer(MetricsApplication.class, "Response-Time");

  private LinkedList<String> historyOfQueries = new LinkedList<String>();

  {
    // Gauge
    Metrics.newGauge(MetricsApplication.class, "lastQuery", new Gauge<String>() {
      @Override
      public String value() {
        return historyOfQueries.getLast();
      }
    });
  }

  public void sendQuery(String message) throws FailingHttpStatusCodeException, MalformedURLException, IOException {
    updateMetrics(message);
    TimerContext timerContext = responseTime.time();
    sendQueryToGoogle(message);
    timerContext.stop();
  }

  private void sendQueryToGoogle(String message) throws FailingHttpStatusCodeException, MalformedURLException, IOException {
    WebClient webClient = new WebClient();
    HtmlPage currentPage = webClient.getPage("http://www.google.com");
    // Get the query input text
    HtmlInput queryInput = currentPage.getElementByName("q");
    queryInput.setValueAttribute(message);
    // Submit the form by pressing the submit button
    HtmlSubmitInput submitBtn = currentPage.getElementByName("btnG");
    currentPage = submitBtn.click();
  }

  private void updateMetrics(String message) {
    numberOfSendCharacters.inc(message.length());
    sendMessages.mark();
    historyOfQueries.addLast(message);
  }
}

The first thing we can see is the counter instance. This counter counts the number of characters that are sent to Google over the whole life of the application (until you stop it). The next property is a meter that measures the rate of sending queries over time. Then we have a timer that measures the rate of sendQueryToGoogle method calls and the distribution of their durations. And finally there is a LinkedList for storing all queries sent; this instance is used by the gauge to return the last query executed. Notice that for each metric we are setting a class which will be used as the folder in JConsole; moreover, a label is provided to be used as the name inside that folder. Let's see a screenshot of JConsole with the previous configuration and an execution of three searches. By default all metrics are visible via JMX, but of course we can report measurements to the console, an HTTP server, Ganglia or Graphite. Also note that in this example we are mixing business code and metrics code. If you are planning to use Metrics in your production code, I suggest you put the metrics logic into AOP whenever possible. We have learned an easy way to monitor our applications without using JMX directly. Also keep in mind that Metrics comes with some built-in metrics for instrumenting HttpClient, JDBI, Jetty, Jersey, Log4j, Logback or web applications.   Reference: Yammer Metrics, A new way to monitor your application from our JCG partner Alex Soto at the One Jar To Rule Them All blog. ...
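The article above mentions that measurements can also be reported to the console. As a hedged sketch of what that might look like with metrics-core 2.x – ConsoleReporter.enable(period, unit) is the API as I remember it, so double-check it against the 2.2.0 Javadoc before relying on it:

import java.util.concurrent.TimeUnit;

import com.yammer.metrics.Metrics;
import com.yammer.metrics.core.Counter;
import com.yammer.metrics.reporting.ConsoleReporter;

public class ConsoleReportingExample {
    public static void main(String[] args) throws InterruptedException {
        Counter queries = Metrics.newCounter(ConsoleReportingExample.class, "queries");

        // Dump all metrics registered in the default registry to stdout every 5 seconds
        ConsoleReporter.enable(5, TimeUnit.SECONDS);

        for (int i = 0; i < 10; i++) {
            queries.inc();
            Thread.sleep(1000);
        }
    }
}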

Discovering the power of Apache Camel

In recent years, ESB software has been getting more and more popular. While most people know what an ESB is, fewer clearly understand the exact role of the different components of such an architecture. For instance, Apache ServiceMix is composed of three major components: Apache Karaf (the OSGi container), Apache ActiveMQ (the message broker) and Apache Camel. So what exactly is Camel? What is a "routing and mediation engine"? What is it useful for? I've been working with Camel for about one year now, and I think – although not being a Camel guru at all – that I now have enough hindsight to help you discover the interest and power of Camel, using some very concrete examples. For the sake of clarity, I will, for the rest of this article, be using the Spring DSL – assuming the reader is familiar with Spring syntax.

The Use Case
Let us imagine we want to implement the following scenario using Camel. Requests for product information arrive as flat files (in CSV format) in a specific folder. Each line of such a file contains a single request from a particular customer about a particular car model. We want to send these customers an email about the car they are interested in. To do so, we first need to invoke a web service to get additional customer data (e.g. their email). Then we have to fetch the car characteristics (let us say a text) from a database. As we want a decent look (i.e. HTML) for our mails, a small text transformation will also be required. Of course, we do not want a mere sequential handling of the requests, but would like to introduce some parallelism. Similarly, we do not want to send the exact same mail many times to different customers (but rather one mail to multiple recipients). It would also be nice to exploit the clustering facilities of our back-end to load-balance our calls to web services. And finally, in the event the processing of a request fails, we want to keep track, in some way or another, of the originating request, so that we can for instance send it by postal mail.
A (possible) Camel implementation : <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beanshttp://www.springframework.org/schema/beans/spring-beans.xsdhttp://camel.apache.org/schema/springhttp://camel.apache.org/schema/spring/camel-spring.xsd " ><camelContext xmlns="http://camel.apache.org/schema/spring" errorHandlerRef="myDLQ"><!-- 2 redeliveries max before failed message is placed into a DLQ --> <errorHandler id="myDLQ" type="DeadLetterChannel" deadLetterUri="activemq:queue:errors" useOriginalMessage="true"> <redeliveryPolicy maximumRedeliveries="2"/> </errorHandler><!-- The polling of a specific folder every 30 sec --> <route id="route1"> <from uri="file:///Users/bli/folderToPoll?delay=30000&delete=true"/> <unmarshal> <csv/> </unmarshal> <split> <simple>${body}</simple> <setHeader headerName="customerId"> <simple>${body[1]}</simple> </setHeader> <setHeader headerName="carModelId"> <simple>${body[2]}</simple> </setHeader> <setBody> <simple>${body[0]}</simple> </setBody> <to uri="activemq:queue:individualRequests?disableReplyTo=true"/> </split> </route><!-- The consumption of individual (jms) mailing requests --> <route id="route2"> <from uri="activemq:queue:individualRequests?maxConcurrentConsumers=5"/> <pipeline> <to uri="direct:getCustomerEmail"/> <to uri="direct:sendMail"/> </pipeline> </route><!-- Obtain customer email by parsing the XML response of a REST web service --> <route id="route3"> <from uri="direct:getCustomerEmail"/> <setBody> <constant/> </setBody> <loadBalance> <roundRobin/> <to uri="http://backend1.mycompany.com/ws/customers?id={customerId}&authMethod=Basic&authUsername=geek&authPassword=secret"/> <to uri="http://backend2.mycompany.com/ws/customers?id={customerId}&authMethod=Basic&authUsername=geek&authPassword=secret"/> </loadBalance> <setBody> <xpath resultType="java.lang.String">/customer/general/email</xpath> </setBody> </route><!-- Group individual sendings by car model --> <route id="route4"> <from uri="direct:sendMail"/> <aggregate strategyRef="myAggregator" completionSize="10"> <correlationExpression> <simple>header.carModelId</simple> </correlationExpression> <completionTimeout> <constant>60000</constant> </completionTimeout> <setHeader headerName="recipients"> <simple>${body}</simple> </setHeader> <pipeline> <to uri="direct:prepareMail"/> <to uri="direct:sendMailToMany"/> </pipeline> </aggregate> </route><!-- Prepare the mail content --> <route id="route5"> <from uri="direct:prepareMail"/> <setBody> <simple>header.carModelId</simple> </setBody> <pipeline> <to uri="sql:SELECT xml_text FROM template WHERE template_id =# ?dataSourceRef=myDS"/> <to uri="xslt:META-INF/xsl/email-formatter.xsl"/> </pipeline> </route><!-- Send a mail to multiple recipients --> <route id="route6"> <from uri="direct:sendMailToMany"/> <to uri="smtp://mail.mycompany.com:25?username=geek&password=secret&from=no-reply@mycompany.com&to={recipients}&subject=Your request&contentType=text/html"/> <log message="Mail ${body} successfully sent to ${headers.recipients}"/> </route></camelContext><!-- Pure Spring beans referenced in the various Camel routes --><!-- The ActiveMQ broker --> <bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent"> <property name="brokerURL" value="tcp://localhost:61616"/> </bean><!-- A datasource to our database --> <bean id="myDS" class="org.apache.commons.dbcp.BasicDataSource"> 
<property name="driverClassName" value="org.h2.Driver"/> <property name="url" value="jdbc:h2:file:/Users/bli/db/MyDatabase;AUTO_SERVER=TRUE;TRACE_LEVEL_FILE=0"/> <property name="username" value="sa"/> <property name="password" value="sa"/> </bean><!-- An aggregator implementation --> <bean id="myAggregator" class="com.mycompany.camel.ConcatBody"/></beans> And the code of the (only!) Java class : public class ConcatBody implements AggregationStrategy {public static final String SEPARATOR = ", ";public Exchange aggregate(Exchange aggregate, Exchange newExchange) { if (aggregate == null) { // The aggregation for the very exchange item is the exchange itself return newExchange; } else { // Otherwise, we augment the body of current aggregate with new incoming exchange String originalBody = aggregate.getIn().getBody(String.class); String bodyToAdd = newExchange.getIn().getBody(String.class); aggregate.getIn().setBody(originalBody + SEPARATOR + bodyToAdd); return aggregate; } }} Some explanationsThe “route1” deals with the processing of incoming flat files. Thee file content is first unmarshalled (using CSV format) and then split into lines/records. Each line will be turned into an individual notification that is sent to a JMS queue. The “route2” is consuming these notifications. Basically, fulfilling a request means doing two things in sequence (“pipeline”) : get the customer email (route3) and send him a mail (route4). Note the ‘maxConcurrentConsumers’ parameter that is used to easily answer our parallelism requirement. The “route3” models how to get the customer email : simply by parsing (using XPath) the XML response of a (secured) REST web service that is available on two back-end nodes. The “route4” contains the logic to send massive mails. Each time 10 similar send requests (that is, in our case, 10 requests on same car model) are collected (and we are not ready to wait more than 1 minute) we want the whole process to be continued with a new message (or « exchange » in Camel terminology) being the concatenation of the 10 assembled messages. Continuing the process means: first prepare the mail body (route5), and then send it to the group (route6). In “route5“, a SQL query is issued in order to get the appropriate text depending on the car model. On that result, we apply a small XSL-T transformation (that will replace the current exchange body with the output of the xsl transformation). When entering “route6“, an exchange contains everything we need. We have the list of recipients (as header), and we also have (in the body) the html text to be sent. Therefore we can now proceed to the real sending using SMTP protocol. In case of errors (for instance, temporary network problems) – anywhere in the whole process, Camel will make maximum two additional attempts before giving up. In this latter case, the originating message will be automatically placed by Camel into a JMS Dead-Letter-Queue.Conclusion Camel is really a great framework – not perfect but yet great. You will be surprised to see how few lines of code are needed to model a complex scenario or route. You could also be glad to see how limpid is your code, how quickly your colleagues can understand the logic of your routes. But it’s certainly not the main advantage. 
Using Camel primarily invites you to think in terms of Enterprise Integration Patterns (aka “EIP”); it helps you to decompose the original complexity into less complex (possibly concurrent) sub-routes using well-known and proven techniques, thereby leading to more modular, more flexible implementations. In particular, using decoupling techniques facilitates the potential replacement or refactoring of individual parts or components of your solution.   Reference: Discovering the power of Apache Camel from our W4G partner Bernard Ligny. ...
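For readers who prefer Camel's Java DSL over the Spring DSL used in the article above, here is a hedged sketch of what the first route (the CSV polling route) might look like – same endpoints and headers, but the details are compressed and not taken from the original project:

import org.apache.camel.builder.RouteBuilder;

public class RequestFileRoute extends RouteBuilder {

    @Override
    public void configure() {
        from("file:///Users/bli/folderToPoll?delay=30000&delete=true")
            .unmarshal().csv()                       // flat CSV file -> list of records
            .split(body())                           // one exchange per line
                .setHeader("customerId", simple("${body[1]}"))
                .setHeader("carModelId", simple("${body[2]}"))
                .setBody(simple("${body[0]}"))
                .to("activemq:queue:individualRequests?disableReplyTo=true");
    }
}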

MapReduce Algorithms – Order Inversion

This post is another segment in the series presenting MapReduce algorithms as found in the Data-Intensive Text Processing with MapReduce book. Previous installments are Local Aggregation, Local Aggregation PartII and Creating a Co-Occurrence Matrix. This time we will discuss the order inversion pattern. The order inversion pattern exploits the sorting phase of MapReduce to push data needed for calculations to the reducer ahead of the data that will be manipulated.. Before you dismiss this as an edge condition for MapReduce, I urge you to read on as we will discuss how to use sorting to our advantage and cover using a custom partitioner, both of which are useful tools to have available.       Although many MapReduce programs are written at a higher level abstraction i.e Hive or Pig, it’s still helpful to have an understanding of what’s going on at a lower level.The order inversion pattern is found in chapter 3 of Data-Intensive Text Processing with MapReduce book. To illustrate the order inversion pattern we will be using the Pairs approach from the co-occurrence matrix pattern. When creating the co-occurrence matrix, we track the total counts of when words appear together. At a high level we take the Pairs approach and add a small twist, in addition to having the mapper emit a word pair such as (“foo”,”bar”) we will emit an additional word pair of (“foo”,”*”) and will do so for every word pair so we can easily achieve a total count for how often the left most word appears, and use that count to calculate our relative frequencies. This approach raised two specific problems. First we need to find a way to ensure word pairs (“foo”,”*”) arrive at the reducer first. Secondly we need to make sure all word pairs with the same left word arrive at the same reducer. Before we solve those problems, let’s take a look at our mapper code. Mapper Code First we need to modify our mapper from the Pairs approach. At the bottom of each loop after we have emitted all the word pairs for a particular word, we will emit the special token WordPair(“word”,”*”) along with the count of times the word on the left was found. public class PairsRelativeOccurrenceMapper extends Mapper<LongWritable, Text, WordPair, IntWritable> { private WordPair wordPair = new WordPair(); private IntWritable ONE = new IntWritable(1); private IntWritable totalCount = new IntWritable();@Override protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException { int neighbors = context.getConfiguration().getInt('neighbors', 2); String[] tokens = value.toString().split('\\s+'); if (tokens.length > 1) { for (int i = 0; i < tokens.length; i++) { tokens[i] = tokens[i].replaceAll('\\W+','');if(tokens[i].equals('')){ continue; }wordPair.setWord(tokens[i]);int start = (i - neighbors < 0) ? 0 : i - neighbors; int end = (i + neighbors >= tokens.length) ? tokens.length - 1 : i + neighbors; for (int j = start; j <= end; j++) { if (j == i) continue; wordPair.setNeighbor(tokens[j].replaceAll('\\W','')); context.write(wordPair, ONE); } wordPair.setNeighbor('*'); totalCount.set(end - start); context.write(wordPair, totalCount); } } } } Now that we’ve generated a way to track the total numbers of times a particular word has been encountered, we need to make sure those special characters reach the reducer first so a total can be tallied to calculate the relative frequencies. We will have the sorting phase of the MapReduce process handle this for us by modifying the compareTo method on the WordPair object. 
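The WordPair key class itself is not listed in the post. A minimal sketch of what such a WritableComparable might contain is shown below – the field types and method bodies are my assumptions, and the order-inversion tweak to compareTo is covered in the next section:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;

public class WordPair implements WritableComparable<WordPair> {

    private Text word = new Text();
    private Text neighbor = new Text();

    public void setWord(String word)         { this.word.set(word); }
    public void setNeighbor(String neighbor) { this.neighbor.set(neighbor); }
    public Text getWord()                    { return word; }
    public Text getNeighbor()                { return neighbor; }

    public void write(DataOutput out) throws IOException {
        word.write(out);
        neighbor.write(out);
    }

    public void readFields(DataInput in) throws IOException {
        word.readFields(in);
        neighbor.readFields(in);
    }

    // Plain lexicographic ordering; the modified version discussed next pushes ("foo","*") to the top
    @Override
    public int compareTo(WordPair other) {
        int cmp = word.compareTo(other.getWord());
        return cmp != 0 ? cmp : neighbor.compareTo(other.getNeighbor());
    }

    // equals(), hashCode() and toString() omitted for brevity
}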
Modified Sorting
We modify the compareTo method on the WordPair class so that when a "*" character is encountered on the right, that particular object is pushed to the top.

@Override
public int compareTo(WordPair other) {
    int returnVal = this.word.compareTo(other.getWord());
    if (returnVal != 0) {
        return returnVal;
    }
    if (this.neighbor.toString().equals("*")) {
        return -1;
    } else if (other.getNeighbor().toString().equals("*")) {
        return 1;
    }
    return this.neighbor.compareTo(other.getNeighbor());
}

By modifying the compareTo method we are now guaranteed that any WordPair with the special character will be sorted to the top and arrive at the reducer first. This leads to our second specialization: how can we guarantee that all WordPair objects with a given left word will be sent to the same reducer? The answer is to create a custom partitioner.

Custom Partitioner
Intermediate keys are shuffled to reducers by calculating the hashcode of the key modulo the number of reducers. But our WordPair objects contain two words, so taking the hashcode of the entire object clearly won't work. We need to write a custom Partitioner that only takes the left word into consideration when determining which reducer to send the output to.

public class WordPairPartitioner extends Partitioner<WordPair, IntWritable> {

    @Override
    public int getPartition(WordPair wordPair, IntWritable intWritable, int numPartitions) {
        // mask off the sign bit so a negative hashCode cannot produce an illegal (negative) partition
        return (wordPair.getWord().hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}

Now we are guaranteed that all of the WordPair objects with the same left word are sent to the same reducer. All that is left is to construct a reducer that takes advantage of the format of the data being sent.

Reducer
Building the reducer for the order inversion pattern is straightforward. It involves keeping a counter variable and a "current" word variable. The reducer checks the input key WordPair for the special character "*" on the right. If the word on the left is not equal to the "current" word, we reset the counter and sum all of the values to obtain the total number of times the given current word was observed. We then process the next WordPair objects, sum the counts and divide by our counter variable to obtain a relative frequency. This process continues until another special character is encountered and the process starts over.

public class PairsRelativeOccurrenceReducer extends Reducer<WordPair, IntWritable, WordPair, DoubleWritable> {
    private DoubleWritable totalCount = new DoubleWritable();
    private DoubleWritable relativeCount = new DoubleWritable();
    private Text currentWord = new Text("NOT_SET");
    private Text flag = new Text("*");

    @Override
    protected void reduce(WordPair key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        if (key.getNeighbor().equals(flag)) {
            if (key.getWord().equals(currentWord)) {
                totalCount.set(totalCount.get() + getTotalCount(values));
            } else {
                currentWord.set(key.getWord());
                totalCount.set(0);
                totalCount.set(getTotalCount(values));
            }
        } else {
            int count = getTotalCount(values);
            relativeCount.set((double) count / totalCount.get());
            context.write(key, relativeCount);
        }
    }

    private int getTotalCount(Iterable<IntWritable> values) {
        int count = 0;
        for (IntWritable value : values) {
            count += value.get();
        }
        return count;
    }
}

By manipulating the sort order and creating a custom partitioner, we have been able to send the data needed for a calculation to the reducer ahead of the data that will be manipulated in that calculation.
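The driver that wires all of this together is not shown in the post. A hedged sketch of the job setup implied by the snippets above might look like the following – the class names come from the post, while the job wiring details (paths, configuration values) are assumptions:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class RelativeFrequencyDriver {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.setInt("neighbors", 2);                            // read by the mapper shown above

        Job job = new Job(conf, "pairs relative frequency");
        job.setJarByClass(RelativeFrequencyDriver.class);
        job.setMapperClass(PairsRelativeOccurrenceMapper.class);
        job.setReducerClass(PairsRelativeOccurrenceReducer.class);
        job.setPartitionerClass(WordPairPartitioner.class);     // keep the same left word on the same reducer

        job.setMapOutputKeyClass(WordPair.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(WordPair.class);
        job.setOutputValueClass(DoubleWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}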
Although not shown here, a combiner was used to run the MapReduce job. This approach is also a good candidate for the “in-mapper” combining pattern. Example & Results Given that the holidays are upon us, I felt it was timely to run an example of the order inversion pattern against the novel “A Christmas Carol” by Charles Dickens. I know it’s corny, but it serves the purpose. new-host-2:sbin bbejeck$ hdfs dfs -cat relative/part* | grep Humbug {word=[Humbug] neighbor=[Scrooge]} 0.2222222222222222 {word=[Humbug] neighbor=[creation]} 0.1111111111111111 {word=[Humbug] neighbor=[own]} 0.1111111111111111 {word=[Humbug] neighbor=[said]} 0.2222222222222222 {word=[Humbug] neighbor=[say]} 0.1111111111111111 {word=[Humbug] neighbor=[to]} 0.1111111111111111 {word=[Humbug] neighbor=[with]} 0.1111111111111111 {word=[Scrooge] neighbor=[Humbug]} 0.0020833333333333333 {word=[creation] neighbor=[Humbug]} 0.1 {word=[own] neighbor=[Humbug]} 0.006097560975609756 {word=[said] neighbor=[Humbug]} 0.0026246719160104987 {word=[say] neighbor=[Humbug]} 0.010526315789473684 {word=[to] neighbor=[Humbug]} 3.97456279809221E-4 {word=[with] neighbor=[Humbug]} 9.372071227741331E-4 Conclusion While calculating relative word occurrence frequencies probably is not a common task, we have been able to demonstrate useful examples of sorting and using a custom partitioner, which are good tools to have at your disposal when building MapReduce programs. As stated before, even if most of your MapReduce is written at higher level of abstraction like Hive or Pig, it’s still instructive to have an understanding of what is going on under the hood. Thanks for your time.   Reference: MapReduce Algorithms – Order Inversion from our JCG partner Bill Bejeck at the Random Thoughts On Coding blog. ...

Java EE 7 Community Survey Results!

Work on Java EE 7 presses on under JSR 342. Things are shaping up nicely and Java EE 7 is now in the Early Draft Review stage. At the beginning of November, Oracle posted a little community survey about upcoming Java EE 7 features. Yesterday the results were published. Over 1,100 developers participated in the survey and there was a large number of thoughtful comments on almost every question asked. Compare with the prepared PDF attached to the EG mailing-list discussion.

New APIs for the Java EE 7 Profiles
We have a couple of new and upcoming APIs which need to be incorporated into either the Full or the Web Profile. Namely, these are WebSocket 1.0, JSON-P 1.0, Batch 1.0 and JCache 1.0. The community was asked in which profile those should end up. The results about which of them should be in the Full Profile showed that support is relatively the weakest for Batch 1.0, but still good. A lot of folks saw JSON-P and WebSocket 1.0 as critical technologies. The same holds for both with regard to the Web Profile. Support for adding JCache 1.0 and Batch 1.0 is relatively weak; Batch got 51.8% 'No' votes.

Enabling CDI by Default
The majority (73.3%) of developers support enabling CDI by default. The detailed comments also reflect strong general support for CDI as well as a desire for better Java EE alignment with CDI.

Consistent Usage of @Inject
A slight majority (53.3%) of developers support using @Inject consistently across all Java EE JSRs. 28.8% still believe using custom injection annotations is OK. The remaining 18.0% were not sure about the right way to go. The vast majority of commenters were strongly supportive of CDI and general Java EE alignment with CDI.

Expanding the Use of @Stereotype
62.3% of the attending developers support expanding the use of @Stereotype across Java EE. A majority of the comments express ideas about general CDI/Java EE alignment.

Expanding Interceptor Use
96.3% of developers wanted to expand interceptor use to all Java EE components. 35.7% even wanted to expand interceptors to other Java EE managed classes. Most developers (54.9%) were not sure if there is any place where injection is supported that should not support interceptors. 32.8% thought any place that supports injection should also support interceptors. The remaining 12.2% were certain that there are places where injection should be supported but not interceptors.

Thanks for taking the time to answer the survey. This gives a solid decision base for moving on with Java EE 7. Keep the feedback coming and subscribe to the users@javaee-spec.java.net alias (see archives online)!   Reference: Java EE 7 Community Survey Results! from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog. ...

Authentication against a RESTful Service with Spring Security

1. Overview This article is focused on how to authenticate against a secure REST API that provides security services – mainly, a RESTful User Account and Authentication Service. 2. The Goal First, let’s go over the actors – the typical Spring Security enabled application needs to authenticate against something – that something can be a database, LDAP or it can be a REST service. The database is the most common scenario; however, a RESTful UAA (User Account and Authentication) Service can work just as well. For the purpose of this article, the REST UAA Service will expose a single GET operation on /authentication, which will return the Principal information required by Spring Security to perform the full authentication process. 3. The Client Typically, a simple Spring Security enabled application would use a simple user service as the authentication source: <authentication-manager alias="authenticationManager"> <authentication-provider user-service-ref="customUserDetailsService" /> </authentication-manager> This would implement the org.springframework.security.core.userdetails.UserDetailsService and would return the Principal based on a provided username: @Component public class CustomUserDetailsService implements UserDetailsService { @Override public UserDetails loadUserByUsername(String username) { ... } } When a Client authenticates against the RESTful UAA Service, working only with the username will no longer be enough – the client now needs the full credentials – both username and password – when it’s sending the authentication request to the service. This makes perfect sense, as the service itself is secured, so the request itself needs to contain the authentication credentials in order to be handled properly. From the point of view or Spring Security, this cannot be done from within loadUserByUsername because the password is no longer available at that point – we need to take control of the authentication process sooner. We can do this by providing the full authentication provider to Spring Security: <authentication-manager alias="authenticationManager"> <authentication-provider ref="restAuthenticationProvider" /> </authentication-manager> Overriding the entire authentication provider gives us a lot more freedom to perform custom retrieval of the Principal from the Service, but it does come with a fair bit of complexity. The standard authentication provider – DaoAuthenticationProvider – has most of what we need, so a good approach would be to simply extend it and modify only what is necessary. Unfortunately this is not possible, as retrieveUser – the method we would be interested in extending – is final. 
The fact that retrieveUser is final is somewhat unintuitive (there is a JIRA discussing the issue) – it looks like the design intention here is simply to provide an alternative implementation. This is not ideal, but not a major problem either – our RestAuthenticationProvider copy-pastes most of the implementation of DaoAuthenticationProvider and rewrites what it needs to – the retrieval of the principal from the service:

@Override
protected UserDetails retrieveUser(String name, UsernamePasswordAuthenticationToken auth) {
    String password = auth.getCredentials().toString();
    UserDetails loadedUser = null;
    try {
        ResponseEntity<Principal> authenticationResponse =
            authenticationApi.authenticate(name, password);
        if (authenticationResponse.getStatusCode().value() == 401) {
            return new User("wrongUsername", "wrongPass", Lists.<GrantedAuthority> newArrayList());
        }
        Principal principalFromRest = authenticationResponse.getBody();
        Set<String> privilegesFromRest = Sets.newHashSet();
        // fill in the privilegesFromRest from the Principal
        String[] authoritiesAsArray =
            privilegesFromRest.toArray(new String[privilegesFromRest.size()]);
        List<GrantedAuthority> authorities =
            AuthorityUtils.createAuthorityList(authoritiesAsArray);
        loadedUser = new User(name, password, true, authorities);
    } catch (Exception ex) {
        throw new AuthenticationServiceException(ex.getMessage(), ex);
    }
    return loadedUser;
}

Let's start from the beginning – the HTTP communication with the REST Service. This is handled by the authenticationApi – a simple API providing the authenticate operation for the actual service. The operation itself can be implemented with any library capable of HTTP – in this case, the implementation is using RestTemplate:

public ResponseEntity<Principal> authenticate(String username, String pass) {
    HttpEntity<Principal> entity = new HttpEntity<Principal>(createHeaders(username, pass));
    return restTemplate.exchange(authenticationUri, HttpMethod.GET, entity, Principal.class);
}

HttpHeaders createHeaders(String username, String password) {
    HttpHeaders acceptHeaders = new HttpHeaders() {
        {
            set(com.google.common.net.HttpHeaders.ACCEPT, MediaType.APPLICATION_JSON.toString());
        }
    };
    String authorization = username + ":" + password;
    String basic = new String(Base64.encodeBase64(authorization.getBytes(Charset.forName("US-ASCII"))));
    acceptHeaders.set("Authorization", "Basic " + basic);
    return acceptHeaders;
}

A FactoryBean can be used to set up the RestTemplate in the context (a minimal sketch of such a factory follows at the end of this section). Next, if the authentication request resulted in an HTTP 401 Unauthorized, most likely because of incorrect credentials from the client, a principal with wrong credentials is returned so that the Spring Security authentication process can refuse them:

return new User("wrongUsername", "wrongPass", Lists.<GrantedAuthority> newArrayList());

Finally, the Spring Security Principal needs some authorities – the privileges which that particular principal will have and use locally after the authentication process. The /authentication operation retrieved a full principal, including privileges, so these need to be extracted from the result of the request and transformed into GrantedAuthority objects, as required by Spring Security. The details of how these privileges are stored are irrelevant here – they could be stored as simple Strings or as a complex Role-Privilege structure – but regardless of the details, we only need to use their names to construct the GrantedAuthority objects.
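To round off the wiring details, here is what the RestTemplate FactoryBean mentioned above could look like. This is only a sketch – the class name and the lenient error handler hinted at in the comment are assumptions, chosen so that a 401 response can reach retrieveUser as a ResponseEntity instead of being raised as an exception (RestTemplate's default behaviour on 4xx responses):

import org.springframework.beans.factory.FactoryBean;
import org.springframework.web.client.RestTemplate;

public class RestTemplateFactoryBean implements FactoryBean<RestTemplate> {

    @Override
    public RestTemplate getObject() {
        RestTemplate restTemplate = new RestTemplate();
        // a lenient ResponseErrorHandler would typically be registered here via
        // restTemplate.setErrorHandler(...), so that 401 responses are returned
        // to the caller rather than thrown as HttpClientErrorException
        return restTemplate;
    }

    @Override
    public Class<?> getObjectType() {
        return RestTemplate.class;
    }

    @Override
    public boolean isSingleton() {
        return true;
    }
}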
After the final Spring Security principal is created, it is returned to the standard authentication process:

List<GrantedAuthority> authorities = AuthorityUtils.createAuthorityList(authoritiesAsArray);
loadedUser = new User(name, password, true, authorities);

4. Testing the Authentication Service
Writing an integration test that consumes the authentication REST service on the happy path is straightforward enough:

@Test
public void whenAuthenticating_then200IsReceived() {
    // When
    ResponseEntity<Principal> response = authenticationRestTemplate.authenticate("admin", "adminPass");

    // Then
    assertThat(response.getStatusCode().value(), is(200));
}

Following this simple test, more complex integration tests can be implemented as well (a sketch of a possible negative-path test follows at the end of this article) – however, a fuller treatment is outside the scope of this post.

5. Conclusion
This article explained how to authenticate against a REST Service instead of doing so against a local system such as a database. For a full implementation of a secure RESTful service which can be used as an authentication provider, check out the github project.   Reference: Authentication against a REST Service with Spring Security from our JCG partner Eugen Paraschiv at the baeldung blog. ...
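As the follow-up promised above, a negative-path test could mirror the happy-path one. This is a hypothetical sketch that reuses the same authenticationRestTemplate helper and assumes the underlying RestTemplate does not throw on 4xx responses:

@Test
public void givenWrongCredentials_whenAuthenticating_then401IsReceived() {
    // When
    ResponseEntity<Principal> response = authenticationRestTemplate.authenticate("admin", "wrongPassword");

    // Then
    assertThat(response.getStatusCode().value(), is(401));
}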
java-interview-questions-answers

Google Guava BiMaps

Next up on my tour of Guava is the BiMap, another useful collection type. It's pretty simple really: a BiMap is simply a two-way map.

Inverting a Map
A normal Java map is a set of keys and values, and you can look up values by key – very useful. For example, let's say I wanted to create a (very rudimentary) British English to American English dictionary:

Map<String,String> britishToAmerican = Maps.newHashMap();
britishToAmerican.put("aubergine", "eggplant");
britishToAmerican.put("courgette", "zucchini");
britishToAmerican.put("jam", "jelly");

But what if you want an American to British dictionary? Well, you could write some code to invert the map:

// Generic method to reverse a map.
public <S,T> Map<T,S> getInverseMap(Map<S,T> map) {
    Map<T,S> inverseMap = new HashMap<T,S>();
    for (Entry<S,T> entry : map.entrySet()) {
        inverseMap.put(entry.getValue(), entry.getKey());
    }
    return inverseMap;
}

It'll do the job, but there are several complications you might need to think about:
- How do we handle duplicate values in the original map? At the moment they'll be silently overwritten in the reverse map.
- What if we want to put a new entry in the reversed map? We'd also have to update the original map! This could get annoying.

BiMaps
Well, guess what? This is the sort of situation a BiMap is designed for! And here's how you might use it:

BiMap<String,String> britishToAmerican = HashBiMap.create();

// Initialise and use just like a normal map
britishToAmerican.put("aubergine", "eggplant");
britishToAmerican.put("courgette", "zucchini");
britishToAmerican.put("jam", "jelly");

System.out.println(britishToAmerican.get("aubergine")); // eggplant

BiMap<String,String> americanToBritish = britishToAmerican.inverse();

System.out.println(americanToBritish.get("eggplant")); // aubergine
System.out.println(americanToBritish.get("zucchini")); // courgette

Pretty simple really, but there are a few things to notice.

Enforcing uniqueness
Firstly, the BiMap enforces uniqueness of its values, and will give you an IllegalArgumentException if you try to insert a duplicate value, i.e.

britishToAmerican.put("pudding", "dessert");
britishToAmerican.put("sweet", "dessert"); // IllegalArgumentException

If you need to add a value that has already been added, there's a forcePut method that will overwrite the entry with the duplicate value:

britishToAmerican.put("pudding", "dessert");
britishToAmerican.forcePut("sweet", "dessert"); // Overwrites the previous entry
System.out.println(britishToAmerican.get("sweet")); // dessert
System.out.println(britishToAmerican.get("pudding")); // null

The inverse method
The other crucial thing to understand is the inverse method. This returns the inverse BiMap, i.e. a map with the keys and values switched around. Now, this inverse map isn't just a new map, such as my earlier getInverseMap method might have created. It's actually a view of the original map. This means that any subsequent changes to the inverse map will affect the original map!

americanToBritish.put("potato chips", "crisps");
System.out.println(britishToAmerican.containsKey("crisps")); // true
System.out.println(britishToAmerican.get("crisps")); // potato chips

So that's the BiMap – like I said, pretty simple. As usual there are several implementations available (a short sketch of one of them follows below), and as ever I recommend taking a look at the full API documentation: http://guava-libraries.googlecode.com/svn/tags/release09/javadoc/com/google/common/collect/BiMap.html   Reference: Google Guava BiMaps from our JCG partner Tom Jefferys at the Tom's Programming Blog blog. ...
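As an example of one of those other implementations, here is a small sketch (not from the original article) using ImmutableBiMap, which gives you a fixed, read-only two-way map:

import com.google.common.collect.BiMap;
import com.google.common.collect.ImmutableBiMap;

BiMap<String,String> britishToAmerican = ImmutableBiMap.of(
        "aubergine", "eggplant",
        "courgette", "zucchini",
        "jam", "jelly");

System.out.println(britishToAmerican.inverse().get("zucchini")); // courgette
// britishToAmerican.put("biscuit", "cookie"); // would throw UnsupportedOperationException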