
JavaOne 2012: How Do Non-Blocking Data Structures Work?

I was a little surprised when I looked at my schedule for today and noted that all of the sessions I currently plan to see today are in the Hilton. This became a little less surprising when I realized that about half of the JavaOne presentations are in the Hilton and that they seem to be roughly located by track. Tobias Lindaaker‘s (Neo Technology) presentation ‘How Do Atomic Data Structures Work?’ was held in the Hilton’s Golden Gate 3/4/5 conference room area. Lindaaker changed his presentation’s title since he originally submitted the abstract. The abstract’s title (and the one listed in the conference materials) was ‘How Do Atomic Data Structures Work?,’ but he renamed it to ‘How Do Non-Blocking Data Structures Work?’

Lindaaker explained that ‘atomic’ comes from Greek and means ‘undividable.’ He explained that a ‘lock-free data structure’ is ‘a data structure that does not block any threads when performing an operation on the data structure (read or write).’ He stated that one wants to avoid ‘spin-waiting‘ whenever possible. Lindaaker talked about synchronized regions. He said such regions ‘create a serialized path through the code’ and ‘guarantee safe publication.’ He defined ‘safe publication’ as meaning ‘everything written before exiting synchronized [block]‘ is ‘guaranteed to be visible on entry of synchronized [block].’ One of his bullets stated, ‘volatile fields give you safe publication without serialization.’ Lindaaker focused more on the volatile keyword modifier in his ‘volatile fields’ slide. The slide ‘What is a memory barrier?’ provided a simple visual representation of the memory barrier concept.

For his slide ‘Atomic updates,’ Lindaaker stated that the easiest way to access an atomic reference is via java.util.concurrent.atomic.AtomicReference<V>. Lindaaker provided a physical demonstration using coasters to illustrate the difference between compareAndSet (sets a value only if the current value matches the expected value) and getAndSet (sets the new value and returns the old value). Lindaaker prefers java.util.concurrent.atomic.AtomicReferenceFieldUpdater<T,V> because of its ‘lower memory overhead’ (‘fewer object headers’) and ‘better memory locality’ (‘no reference indirection’).

Lindaaker explained that array-based queues do block (sometimes a benefit when the amount of work needs to be limited due to finite hardware resources), while linked queues do not. Lindaaker used a supermarket queue as an example of the differences: in the link-based queue, you always stand behind the same customer in front of you in the queue; in the array-based queue, you always remain in the same position. Bounded queues ‘frequently perform better,’ but will block when full. One of the main themes of this presentation was the idea of learning new ideas and then individually researching them further. Lindaaker recommended that audience members look at the JDK’s code to see some impressive and less impressive code examples. Lindaaker referenced the LMAX (London Multi Asset Exchange) Disruptor as an example of a ‘ring buffer’ (‘array with a read mark and a write mark’). He stated that ‘readers contend on the read mark, writers on write mark’ and highlighted the consequence of this: ‘With single reader / single writer, there is no contention.’ The Disruptor page describes Disruptor as a ‘High Performance Inter-Thread Messaging Library.’ Lindaaker stated that java.util.concurrent.ConcurrentHashMap is a good general choice, but is not very exciting for discussion in his presentation.
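To make the coaster demonstration concrete, here is a minimal sketch (not code from the session) of the two operations on java.util.concurrent.atomic.AtomicReference:

import java.util.concurrent.atomic.AtomicReference;

public class AtomicReferenceDemo {
    public static void main(String[] args) {
        AtomicReference<String> ref = new AtomicReference<>("old");

        // compareAndSet: succeeds only if the current value matches the expected value
        boolean swapped = ref.compareAndSet("old", "new");   // true; value is now "new"
        boolean missed  = ref.compareAndSet("old", "other"); // false; value stays "new"

        // getAndSet: unconditionally stores the new value and returns the previous one
        String previous = ref.getAndSet("newest");           // returns "new"

        System.out.println(swapped + " " + missed + " " + previous); // true false new
    }
}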
He stated that ConcurrentHashMap ‘scales reasonably well on current commodity hardware’ (fewer than 100 CPUs) with proper tuning. Neo Technology provides a database implementation (Neo4j) that is not relational (a graph database). Lindaaker described Neo Technology’s graph database offering as one that ‘stores data as nodes and relationships between nodes.’ Reference: JavaOne 2012: How Do Non-Blocking Data Structures Work? from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

Stuff I Learned from Grails Consulting

I don’t do much Grails consulting since I work for the Engineering group, and we have an excellent group of support engineers that usually work directly with clients. I do occasionally teach the 3-day Groovy and Grails course, but I’ve only been on two onsite consulting gigs so far, and one was a two-week engagement that ended last week. As is often the case when you teach something or help someone else out, I learned a lot and was reminded of a lot of stuff I’d forgotten about, so I thought it would be good to write some of that down for future reference.

SQL Logging

There are two ways to view SQL output from queries: adding logSql = true in DataSource.groovy and configuring Log4j loggers. The Log4j approach is a lot more flexible since it doesn’t just dump to stdout, and can be routed to a file or other appender and conveniently enabled and disabled. But it turns out it’s easy to toggle logSql SQL console logging. Get a reference to the sessionFactory bean (e.g. using dependency injection with def sessionFactory) and turn it on with

sessionFactory.settings.sqlStatementLogger.logToStdout = true

and off with

sessionFactory.settings.sqlStatementLogger.logToStdout = false

stacktrace.log

The stacktrace.log file was getting very large and they wanted to configure it to use a rolling file appender. Seemed simple enough, but it took a lot longer than I expected. The trick is to create an appender with the name 'stacktrace'; the Grails logic that parses the Log4j DSL looks for an existing appender and uses it, and only configures the default one if there isn’t one already configured. So here’s one that configures a RollingFileAppender with a maximum of 10 files, each a maximum of 10MB in size, and with the standard layout pattern. In addition it includes logic to determine if it’s deployed in Tomcat, so it can write to the Tomcat logs folder, or the target folder if you’re using run-app. If you’re deploying to a different container, adjust the log directory calculation appropriately.

appenders {
    String logDir = grails.util.Environment.warDeployed ?
            System.getProperty('catalina.home') + '/logs' : 'target'
    rollingFile name: 'stacktrace',
            maximumFileSize: 10 * 1024 * 1024,
            file: "$logDir/stacktrace.log",
            layout: pattern(conversionPattern: '%d [%t] %-5p %c{2} %x - %m%n'),
            maxBackupIndex: 10
}

Dynamic fooId property

In a many-to-one where you have a Foo foo field (or static belongsTo = [foo: Foo], which triggers adding a ‘foo’ field) you can access its foreign key with the dynamic fooId property. This can be used in a few ways. Since references like this are lazy by default, checking whether a nullable reference exists using foo != null involves loading the entire instance from the database, but checking fooId != null involves no database access. Other queries or updates that really only need the foreign key will be cheaper using fooId. For example, to set a reference in another instance you would typically use code like this:

bar2.foo = bar1.foo
bar2.save()

But you can use the load method

bar2.foo = bar1.fooId ? Foo.load(bar1.fooId) : null
bar2.save()

and avoid loading the Foo instance just to set its foreign key in the second instance and then discard it. Deleting by id is less expensive too; ordinarily you use get to load an instance and call its delete method, but retrieving the entire instance isn’t needed. You can do this instead:

Foo.load(bar.fooId).delete()

DRY constraints

You can use the importFrom method inside a constraints block in a domain class to avoid repeating constraints.
You can import all constraints from another domain class:

static constraints = {
    someProperty nullable: true
    ...
    importFrom SomeOtherDomainClass
}

and optionally use the include and/or exclude properties to use a subset:

static constraints = {
    someProperty nullable: true
    ...
    importFrom SomeOtherDomainClass, exclude: ['foo', 'bar']
}

Flush event listener

They were seeing some strange behavior where collections that weren’t explicitly modified were being changed and saved, causing StaleObjectStateExceptions. It wasn’t clear what was triggering this behavior, so I suggested registering a Hibernate FlushEventListener to log the state of the dirty instances and collections during each flush:

package com.burtbeckwith.blog

import org.hibernate.HibernateException
import org.hibernate.collection.PersistentCollection
import org.hibernate.engine.EntityEntry
import org.hibernate.engine.PersistenceContext
import org.hibernate.event.FlushEvent
import org.hibernate.event.FlushEventListener

class LoggingFlushEventListener implements FlushEventListener {

    void onFlush(FlushEvent event) throws HibernateException {
        PersistenceContext pc = event.session.persistenceContext

        pc.entityEntries.each { instance, EntityEntry value ->
            if (instance.dirty) {
                println "Flushing instance $instance"
            }
        }

        pc.collectionEntries.each { PersistentCollection collection, value ->
            if (collection.dirty) {
                println "Flushing collection '$collection.role' $collection"
            }
        }
    }
}

It’s not sufficient in this case to use the standard hibernateEventListeners map (described in the docs here) since that approach adds your listeners to the end of the list, and this listener needs to be at the beginning. So instead use this code in BootStrap.groovy to register it:

import org.hibernate.event.FlushEventListener
import com.burtbeckwith.blog.LoggingFlushEventListener

class BootStrap {

    def sessionFactory

    def init = { servletContext ->
        def listeners = [new LoggingFlushEventListener()]
        def currentListeners = sessionFactory.eventListeners.flushEventListeners
        if (currentListeners) {
            listeners.addAll(currentListeners as List)
        }
        sessionFactory.eventListeners.flushEventListeners =
                listeners as FlushEventListener[]
    }
}

“Read only” objects and Sessions

The read method was added to Grails a while back, and it works like get except that it marks the instance as read-only in the Hibernate Session. It’s not really read-only: if it is modified it won’t be a candidate for auto-flushing using dirty detection, but you can explicitly call save() or delete() and the action will succeed. This can be useful in a lot of ways, and in particular it is more efficient if you won’t be changing the instance, since Hibernate will not maintain a copy of the original database data for dirty checking during the flush, so each instance will use about half of the memory that it would otherwise. One limitation of the read method is that it only works for instances loaded individually by id. But there are other approaches that affect multiple instances. One is to make the entire session read-only:

session.defaultReadOnly = true

Now all loaded instances will default to read-only, for example instances from criteria queries and finders. A convenient way to access the session is the withSession method on an arbitrary domain class:

SomeDomainClass.withSession { session ->
    session.defaultReadOnly = true
}

It’s rare that an entire session will be read-only though.
You can set the results of an individual criteria query to be read-only with the setReadOnly method:

def c = Account.createCriteria()
def results = c {
    between('balance', 500, 1000)
    eq('branch', 'London')
    maxResults(10)
    setReadOnly true
}

One significant limitation of this technique is that attached collections are not affected by the read-only status of the owning instance (and there doesn’t seem to be a way to configure a collection to ignore changes on a per-instance basis). Read more about this in the Hibernate documentation. Reference: Stuff I Learned from Grails Consulting from our JCG partner Burt Beckwith at the An Army of Solipsists blog....

JavaOne 2012: JavaOne Technical Keynote

Mark Reinhold started off the JavaOne 2012 Technical Keynote. He said this year’s edition would be a little different because it would use largely the same example to illustrate various aspects of Java rather than standalone individual coverage of each component of Java. Richard Bair and Jasper Potts of the JavaFX team (and associated with FXExperience) introduced this example application, a schedule builder with presentation and speaker data from this year’s JavaOne. As part of the introduction of the example application, the presenters made extra effort to point out that Oracle is shipping the JVM for MacOS and that OpenJDK is what is being used in the example. They also stated that the example runs on Linux as well. They used Java SE 7 and JavaFX 2 for this application and they talked about the availability of SceneBuilder for building a JavaFX application. They demonstrated the use of SceneBuilder within NetBeans to generate the JavaFX-based login page. Other interesting JavaFX advancements mentioned include the addition of a ComboBox (though there is no Date Picker yet), interoperability with SWT, and the availability of a JavaFX Packager. It was also mentioned that JavaFX was architected and designed from the beginning to allow the main UI thread to be separate from background threads, allowing it to take advantage of multiple CPUs. Bair showed the relatively verbose code that would be required today to implement a JavaFX application that fully takes advantage of multiple threads.

Brian Goetz came to the stage to describe how Project Lambda and the changes to the Java language will enable ‘better parallel libraries.’ Goetz said that the easiest way to help developers is to give them better libraries, but the language must sometimes be extended when the limits of the language prevent libraries from being written to fully satisfy the need. Goetz stated that the goals of inner classes are the same as Project Lambda’s, but inner classes have ‘a whole lot of other baggage.’ Goetz added that bulk operations on collections may not ‘really be needed, but things are better this way.’ Goetz then showed a simple but highly illustrative example of how Project Lambda changes how we process bulk data changes in a collection. His slide showed how iteration written today with the J2SE 5 enhanced for loop can instead be done with the forEach method (added to all of the collections via the new default method approach for interfaces) and a Groovy-like closure syntax (->). Goetz’s next slide was even more impressive. He showed what appeared to be three operations being performed on a collection as it was iterated. However, he pointed out that these would all be enacted at once on the collection with only a single traversal of that collection. All I could think was, ‘Wow!’ Goetz also had a slide showing off the computeIfAbsent operation on collections. He ended by saying there’s still lots of work to do and citing two URLs for playing with Project Lambda: http://openjdk.java.net/projects/lambda/ and http://jdk8.java.net/lambda/.

There was some interesting discussion on the differences between traditional Java environments and embedded environments. Raspberry Pi received multiple and prominent mentions. Reinhold started talking about modularity and Project Jigsaw and showed a ‘little bit of a spaghetti diagram that is way cleaner than where we started, which was a total spaghetti diagram.’ He used this as a starting point for discussing the controversial decision to boot Project Jigsaw from Java 8 to Java 9.
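As a rough illustration of the forEach-plus-lambda style Goetz described (a sketch using the syntax as it eventually shipped in JDK 8, not code from the keynote slides):

import java.util.Arrays;
import java.util.List;

public class LambdaSketch {
    public static void main(String[] args) {
        List<String> speakers = Arrays.asList("Reinhold", "Goetz", "Gupta");

        // Today: the J2SE 5 enhanced for loop
        for (String s : speakers) {
            System.out.println(s);
        }

        // With Project Lambda: forEach (a default method on Iterable)
        // plus the Groovy-like closure syntax (->)
        speakers.forEach(s -> System.out.println(s));
    }
}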
Reinhold had a slide focused on things that are in Java 8 such as Project Lambda, Compact Profiles, Type Annotations, Project Nashorn, and the new Date/Time API. Reinhold added that ‘all this work is being done in OpenJDK’ and that ‘all the specification work is being done in the JCP.’

Arun Gupta had the unenviable task of beginning his presentation at the time the keynote was scheduled to end (7 pm local time). He talked about Java EE and showed a slide titled ‘Java EE Past, Present, & Future.’ This slide showed how Java EE has added features since the ten specifications of J2EE 1.2 in December 1999. Gupta had another slide talking about ‘Java EE 7 Revised Scope’ and how it increases productivity (via less boilerplate code with richer functionality and more defaults) and adds HTML5 support (WebSocket, JSON, and HTML5 Forms). Another Gupta slide was titled ‘Java EE 7 – Candidate JSRs’ and listed JSRs that are new to Java EE 7 as well as those being modified. He then focused individual slides on some of them. His ‘Java API for RESTful Web Services 2.0′ slide talked about a standardized approach using a client API. Gupta’s slides showing how this is done today (without libraries) and comparing it to the new client API demonstrated how much simpler this is going to be. Gupta’s coverage of JMS 2.0 included discussion of less verbosity in JMS thanks to annotations and other new features in the Java programming language. He mentioned that the required resource adapter will make it easier to ‘mix and match’ JMS providers in the future. Gupta showed a slide full of small-font code (‘this code is not meant to be readable’) demonstrating sending a message using JMS 1.1. This was followed with a slide showing significantly less (and much clearer) code in JMS 2.0 taking advantage of annotations and resource injection to send a message. Gupta’s coverage of the JSON support to be added to Java EE included the bullet ‘API to parse, generate, transform, query, etc. JSON.’ He then showed some slides with example JSON-formatted data and example code for using builder style to access the JSON. It felt a lot like Groovy’s JSON handling. Java API for WebSocket 1.0 will allow annotations to be used to easily work with WebSocket. When covering Bean Validation 1.1, Gupta pointed out that not all newly adopted JSRs are being led by Oracle. He showed using the built-in @NotNull annotation on method parameters, but also showed that one will be able to write custom constraints that can be similarly applied to method arguments. Gupta highlighted miscellaneous improvements to Java EE such as JPA 2.1, EJB 3.2, etc. The majority of these JSRs have early public drafts available. GlassFish 4 is the reference implementation of Java EE 7 and already includes WebSocket, JSON, JMS 2, and more.

One of Gupta’s slides was focused on Avatar. The ‘Angry Bids’ example application was demonstrated. It is based on Avatar, runs on GlassFish, and uses standard Java EE 7 components. Gupta introduced Project Easel for NetBeans. It was mentioned that NetBeans 7.3 beta would be coming out later this week and will include support for HTML5 as a new project type. The example being shown uses jQuery and CSS. The NetBeans-based example communicated through Google Chrome to WebKit (it also works with the JavaFX-embedded browser), but it is expected to work eventually with any WebKit-based browser or device.
The demonstrator showed how his changes to HTML5 code (HTML, JavaScript, and CSS) within NetBeans were updated in the Google Chrome browser. It was pretty impressive and makes me wish I had enough time to have accepted an invitation to provide early testing of NetBeans 7.3. NetBeans is going to be able to generate RESTful clients, support jQuery, and provide a Project Nashorn editor. A similar demo to this one is available at http://netbeans.org/kb/docs/web/html5-gettingstarted-screencast.html. Like the Strategy Keynote, this Technical Keynote was held in the Masonic Auditorium. One of the interesting trends I noticed in tonight’s keynotes was that at least three different people from three different organizations mentioned that skilled Java developers interested in job opportunities should contact them. Reference: JavaOne 2012: JavaOne Technical Keynote from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

JavaOne 2012: Java Strategy Keynote and IBM Keynote

I had a rough start to JavaOne 2012, similar to that at JavaOne 2010. It took 70 minutes for the people handling the check-in to provide me with a JavaOne badge due to ‘computer and printer technical difficulties.’ Although I’m not the most patient person in the world, the part of this that was even more disappointing than the wait is that I missed being part of the ‘Community Session: For You – By You: Growing the NetBeans Community’ panel at NetBeans Community Day at JavaOne 2012, something I was really looking forward to attending and participating in. I had arrived at Moscone West about 15 minutes before that panel was to begin, but did not end up getting my badge until well after the panel was over. In a disappointed mood, I headed to the Nob Hill Masonic Center (AKA Masonic Auditorium, AKA California Masonic Memorial Temple) on Nob Hill to attend the initial evening’s keynote address.

Java Strategy Keynote

The first announcement was to ‘turn off all electronic devices.’ After that announcement, a video was shown. I was happy that it was short. Hasan Rizvi introduced the theme for JavaOne 2012: ‘Make the Future Java.’ He showed slides indicating the 2012 scorecard for three areas of Java’s strategy: technical innovation, community involvement, and Oracle leadership.

Georges Saab stated that Oracle has made Java available for more new platforms in the past year with JDK 7 than were added in the previous ten years. He highlighted JDK 7’s adoption and talked about OpenJDK. One feature of JDK 8 that he highlighted is Project Nashorn, a JavaScript implementation taking advantage of invokedynamic for high performance and high interoperability with Java and the JVM. He announced that Project Nashorn will be contributed to OpenJDK. He stated that IBM, RedHat, and Twitter have already expressed support for Project Nashorn as part of OpenJDK. Dierk König of Canoo and Navis were guest speakers at this Strategy Keynote. They talked about their use of JavaFX and the Canoo Dolphin project being open sourced.

Nandini Ramani talked about JavaFX and stated that it’s now available for all major platforms. She also cited the release of NetBeans 7.2 with integrated SceneBuilder as part of improved tooling, and reminded the audience that JavaFX is now bundled with current versions of Java. Ramani announced that JavaFX is now available for Linux ARM. She mentioned that 3D is coming to JavaFX. It was also mentioned in this keynote that they expect JavaFX to be fully open source by the end of the calendar year.

AMD’s Phil Rogers talked about hardware trends moving from single-core CPUs to multi-core CPUs to GPUs using ‘a single piece of silicon and shared memory.’ Saab and Rogers stated that Project Sumatra allows the JVM to be modified so that Java developers can take advantage of new features in the hardware with existing Java language skills. The JVM will be able to decide whether to run the Java code on the CPU or the GPU.

Ramani returned to the stage and mentioned two recently announced releases: Java ME Embedded 3.2 and Java Embedded Suite 7.0. Axel Hansmann of Cinterion talked about his company’s use of Java ME Embedded. Marc Brule of the Royal Canadian Mint joined Ramani on stage to talk about their use of Java Card: MintChip (‘The Evolution of Currency’). Cameron Purdy came to the stage to discuss Java EE. Purdy announced that early releases of the Java EE 7 SDK can be downloaded via GlassFish builds.
Purdy also announced that GlassFish 4 already includes significant HTML5 additions mentioned at JavaOne 2011. Purdy pointed out that NoSQL is not standardized yet (‘you could call it ‘No Standard Databases”) and pointed out that JPA already supports MongoDB and Oracle NoSQL, with planned support for Cassandra and other NoSQL implementations. Purdy stated that April 2013 is the currently planned timeframe for the release of Java EE 7. Nicole Otto of Nike joined Purdy on stage and showed a brief video (FuelBand: ‘Life is a Sport: Make It Count’). She talked about Java EE being used to track data on activity. Purdy had a slide ‘Java EE 8 and beyond’ that was sub-titled ‘Standards-based cloud programming model.’

A short film was shown to introduce Dr. Robert Ballard (now I know why we were given the Alien Deep DVD upon entrance to the keynote). Dr. Ballard talked about his discovery of the Titanic and explained how the technology used for that discovery was like tying two tin cans together compared to the technology available today. The most laughter and applause of the night came with his statement that he hoped to find a spaceship in his explorations so that he never has to talk about discovering the Titanic again. Dr. Ballard stated that we should not sell science or engineering but should make it more personal to kids and sell scientists and engineers. He stated that ‘the battle for a scientist or engineer is over by the eighth grade.’

IBM Keynote: (hardware,software)–>{IBM.java.patterns}

We moved, without a break, directly into the IBM Keynote. Jason McGee (blog), an IBM representative, talked about ‘some of the things we’ve seen related to Java and the cloud.’ He talked about ‘Java Challenges’ as ‘share more,’ ‘cooperate,’ ‘use less’ (resources), and ‘exploit technology.’ John Duimovich came to the stage to talk more about these four challenges ‘in context on the Java Virtual Machine.’ Duimovich talked about the ‘shared classes cache’ and AOT (Ahead of Time compilation, described as ‘JIT code saved for the next JVM’). Duimovich also talked about multi-tenancy and supporting ‘isolation within a single JVM.’ He had a slide on the Liberty Profile ‘for Web, OSGi, and Mobile Apps.’ Duimovich introduced ‘really cool hardware’ called System z and explained the advantages of running Java (rather than C or C++) on this hardware. Duimovich stated that Oracle and IBM ‘team together on Java, but compete head to head.’ He pointed out that this ‘competition drives innovation’ and is good for customers and developers.

McGee returned to the stage to talk about a few more themes, observations, and trends. He pointed out that ‘Java provides developers abstraction from underlying hardware,’ but that ‘hardware is changing and evolving rapidly.’ McGee’s slide stated that ‘both Java and Cloud need to enable the exploitation of these hardware advances while still preserving the ‘run anywhere’ benefit.’ Another McGee slide was titled ‘Java in a polyglot world…’ and he used this slide to talk about the world transitioning from an all-Java enterprise world to today, with applications written in numerous languages. He mentioned several alternative JVM-based languages and put in a plug for IBM’s X10 language. McGee believes that Java will be part of, but not all of, future enterprise applications. Reference: JavaOne 2012: Java Strategy Keynote and IBM Keynote from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

Google Guava v07 examples

We have something called Weekly Technology Workshops at TouK: every Friday at 16:00 somebody gives a presentation for everyone willing to come. We present stuff we learn and work on at home, but we also have a bulletin board with topics that people would like to hear about. Last week Maciej Próchniak had a talk about Clojure; this time a few folks asked for an introduction to the Google Guava libraries. Since this was a dead simple task, I was happy to deliver.

WTF is Guava?

It’s a set of very simple, basic classes that you end up writing yourself anyway. Think in terms of Apache Commons, just by Google. Just to make your life a little bit easier. There is an early (v04) presentation and there was a different one (in Polish) at Javarsovia 2010 by Wiktor Gworek. At the time of writing this, the latest version is v07; it’s been mavenized and is available at a public Maven repo. Here’s a quick review of a few interesting things. Don’t expect anything fancy though, Guava is very BASIC.

@VisibleForTesting

A simple annotation that tells you why a particular property access restriction has been relaxed. A common trick in testing is to relax access restrictions to default for a particular property, so that you can use it in a unit test which resides in the same package (though in a different directory). Whether you think it’s good or bad, remember to give a hint about that to the developer. Consider:

public class User {
    private Long id;
    private String firstName;
    private String lastName;
    String login;

Why is login package scoped?

public class User {
    private Long id;
    private String firstName;
    private String lastName;
    @VisibleForTesting String login;

Ah, that’s why.

Preconditions

Guava has a few preconditions for defensive programming (Design by Contract), but they are not quite as good as what Apache Commons / the Spring framework has. One interesting thing is that the Guava solution returns the object, so it can be inlined.
Consider:

Using hand-written preconditions:

public User(Long id, String firstName, String lastName, String login) {
    validateParameters(id, firstName, lastName, login);
    this.id = id;
    this.firstName = firstName;
    this.lastName = lastName;
    this.login = login.toLowerCase();
}

private void validateParameters(Long id, String firstName, String lastName, String login) {
    if (id == null) {
        throw new IllegalArgumentException("id cannot be null");
    }
    if (firstName == null || firstName.length() == 0) {
        throw new IllegalArgumentException("firstName cannot be empty");
    }
    if (lastName == null || lastName.length() == 0) {
        throw new IllegalArgumentException("lastName cannot be empty");
    }
    if (login == null || login.length() == 0) {
        throw new IllegalArgumentException("login cannot be empty");
    }
}

Using Guava preconditions:

public void fullyImplementedGuavaConstructorWouldBe(Long id, String firstName, String lastName, String login) {
    this.id = checkNotNull(id);
    this.firstName = checkNotNull(firstName);
    this.lastName = checkNotNull(lastName);
    this.login = checkNotNull(login);

    checkArgument(firstName.length() > 0);
    checkArgument(lastName.length() > 0);
    checkArgument(login.length() > 0);
}

(Thanks Yom for noticing that checkNotNull must go before checkArgument, though it makes it a bit unintuitive.)

Using Spring or Apache Commons preconditions (the use looks exactly the same for both libraries):

public void springConstructorWouldBe(Long id, String firstName, String lastName, String login) {
    notNull(id);
    hasText(firstName);
    hasText(lastName);
    hasText(login);
    this.id = id;
    this.firstName = firstName;
    this.lastName = lastName;
    this.login = login;
}

CharMatcher

For people who hate regexps or just want a simple and good-looking object-style pattern matching solution. Examples:

And/or ease of use:

String input = "This invoice has an id of 192/10/10";
CharMatcher charMatcher = CharMatcher.DIGIT.or(CharMatcher.is('/'));
String output = charMatcher.retainFrom(input);

output is: 192/10/10

Negation:

String input = "DO NOT scream at me!";
CharMatcher charMatcher = CharMatcher.JAVA_LOWER_CASE.or(CharMatcher.WHITESPACE).negate();
String output = charMatcher.retainFrom(input);

output is: DONOT!

Ranges:

String input = "DO NOT scream at me!";
CharMatcher charMatcher = CharMatcher.inRange('m', 's').or(CharMatcher.is('a').or(CharMatcher.WHITESPACE));
String output = charMatcher.retainFrom(input);

output is: sram a m

Joiner / Splitter

As the names suggest, it’s string joining/splitting done the right way, although I find the inversion of calls a bit… oh well, it’s Java.

String[] fantasyGenres = {"Space Opera", "Horror", "Magic realism", "Religion"};
String joined = Joiner.on(", ").join(fantasyGenres);

Output: Space Opera, Horror, Magic realism, Religion

You can skip nulls:

String[] fantasyGenres = {"Space Opera", null, "Horror", "Magic realism", null, "Religion"};
String joined = Joiner.on(", ").skipNulls().join(fantasyGenres);

Output: Space Opera, Horror, Magic realism, Religion

You can fill nulls:

String[] fantasyGenres = {"Space Opera", null, "Horror", "Magic realism", null, "Religion"};
String joined = Joiner.on(", ").useForNull("NULL!!!").join(fantasyGenres);

Output: Space Opera, NULL!!!, Horror, Magic realism, NULL!!!, Religion

You can join maps:

Map<Integer, String> map = newHashMap();
map.put(1, "Space Opera");
map.put(2, "Horror");
map.put(3, "Magic realism");
String joined = Joiner.on(", ").withKeyValueSeparator(" -> ").join(map);

Output: 1 -> Space Opera, 2 -> Horror, 3 -> Magic realism
Split returns an Iterable instead of JDK arrays:

String input = "Some very stupid data with ids of invoices like 121432, 3436534 and 8989898 inside";
Iterable<String> splitted = Splitter.on(' ').split(input);

Split does fixed-length splitting, although you cannot give a different length for each “column,” which makes its use a bit limited while parsing some badly exported Excels.

String input = "A 1 1 1 1\n" +
               "B 1 2 2 2\n" +
               "C 1 2 3 3\n" +
               "D 1 2 5 3\n" +
               "E 3 2 5 4\n" +
               "F 3 3 7 5\n" +
               "G 3 3 7 5\n" +
               "H 3 3 9 7";
Iterable<String> splitted = Splitter.fixedLength(3).trimResults().split(input);

You can use a CharMatcher while splitting:

String input = "Some very stupid data with ids of invoices like 123231/fv/10/2010, 123231/fv/10/2010 and 123231/fv/10/2010";
Iterable<String> splitted = Splitter.on(CharMatcher.DIGIT.negate())
    .trimResults()
    .omitEmptyStrings()
    .split(input);

Predicates / Functions

Predicates alone are not much, just an interface with a method that returns true or false, but if you combine predicates with functions and Collections2 (a Guava class that simplifies working on collections), you get a nice tool in your toolbox. But let’s start with basic predicate use. Imagine we want to find whether there are users who have logins with digits inside. The invocation would be (returns a boolean):

Predicates.in(users).apply(shouldNotHaveDigitsInLoginPredicate);

And the predicate looks like that:

public class ShouldNotHaveDigitsInLoginPredicate implements Predicate<User> {
    @Override
    public boolean apply(User user) {
        checkNotNull(user);
        return CharMatcher.DIGIT.retainFrom(user.login).length() == 0;
    }
}

Now let’s add a function that will transform a user to his full name:

public class FullNameFunction implements Function<User, String> {
    @Override
    public String apply(User user) {
        checkNotNull(user);
        return user.getFirstName() + " " + user.getLastName();
    }
}

You can invoke it using the static method transform:

List<User> users = newArrayList(new User(1L, "sylwek", "stall", "rambo"),
    new User(2L, "arnold", "schwartz", "commando"));

List<String> fullNames = transform(users, new FullNameFunction());

And now let’s combine predicates with functions to print names of users that have logins which do not contain digits:

List<User> users = newArrayList(new User(1L, "sylwek", "stall", "rambo"),
    new User(2L, "arnold", "schwartz", "commando"), new User(3L, "hans", "kloss", "jw23"));

Collection<User> usersWithoutDigitsInLogin = filter(users, new ShouldNotHaveDigitsInLoginPredicate());
String names = Joiner.on("\n").join(
    transform(usersWithoutDigitsInLogin, new FullNameFunction()));

What we do not get: fold (reduce) and tuples. Oh well, you’d probably turn to a Java functional library anyway, if you wanted functions in Java, right?

CaseFormat

Ever wanted to turn those ugly PHP Pear names into nice java/cpp style with a one-liner? No? Well, anyway, you can:

String pearPhpName = "Really_Fucked_Up_PHP_PearConvention_That_Looks_UGLY_because_of_no_NAMESPACES";
String javaAndCPPName = CaseFormat.UPPER_UNDERSCORE.to(CaseFormat.UPPER_CAMEL, pearPhpName);

Output: ReallyFuckedUpPhpPearconventionThatLooksUglyBecauseOfNoNamespaces

But since Oracle has taken over Sun, you may actually want to turn those into sql style, right?
String sqlName = CaseFormat.UPPER_CAMEL.to(CaseFormat.LOWER_UNDERSCORE, javaAndCPPName);

Output: really_fucked_up_php_pearconvention_that_looks_ugly_because_of_no_namespaces

Collections

Guava has a superset of the Google Collections Library 1.0, and this indeed is a very good reason to include this dependency in your poms. I won’t even try to describe all the features, but just to point out a few nice things:

- you have an Immutable version of pretty much everything
- you get a few nice static and statically typed methods on common types like Lists, Sets, Maps, ObjectArrays, which include:
  - an easy way of creating based on return type: e.g. newArrayList
  - transform (a way to apply functions that returns an Immutable version)
  - partition (paging)
  - reverse

And now for a few more interesting collections.

Multimaps

A Multimap is basically a map that can have many values for a single key. Ever had to create a Map<T1, Set<T2>> in your code? You don’t have to anymore.

Multimap<Integer, String> multimap = HashMultimap.create();
multimap.put(1, "a");
multimap.put(2, "b");
multimap.put(3, "c");
multimap.put(1, "a2");

There are of course immutable implementations as well: ImmutableListMultimap, ImmutableSetMultimap, etc. You can construct immutables either in line (up to 5 elements) or using a builder:

Multimap<Integer, String> multimap = ImmutableSetMultimap.of(1, "a", 2, "b", 3, "c", 1, "a2");

Multimap<Integer, String> multimap = new ImmutableSetMultimap.Builder<Integer, String>()
    .put(1, "a")
    .put(2, "b")
    .put(3, "c")
    .put(1, "a2")
    .build();

BiMap

A BiMap is a map that has only unique values. Consider this:

@Test(expected = IllegalArgumentException.class)
public void biMapShouldOnlyHaveUniqueValues() {
    BiMap<Integer, String> biMap = HashBiMap.create();
    biMap.put(1, "a");
    biMap.put(2, "b");
    biMap.put(3, "a"); // argh! an exception
}

That allows you to invert the map, so the values become keys and the other way around:

BiMap<Integer, String> biMap = HashBiMap.create();
biMap.put(1, "a");
biMap.put(2, "b");
biMap.put(3, "c");

BiMap<String, Integer> invertedMap = biMap.inverse();

Not sure what I’d actually want to use it for.

Constraints

This allows you to add constraint checking on a collection, so that only values which pass the constraint may be added. Imagine we want a collection of users with the first letter ‘r’ in their logins.

Constraint<User> loginMustStartWithR = new Constraint<User>() {
    @Override
    public User checkElement(User user) {
        checkNotNull(user);
        if (!user.login.startsWith("r")) {
            throw new IllegalArgumentException("GTFO, you are not Rrrrrrrrr");
        }
        return user;
    }
};

And now for a test:

@Test(expected = IllegalArgumentException.class)
public void shouldConstraintCollection() {
    // given
    Collection<User> users = newArrayList(new User(1L, "john", "rambo", "rambo"));
    Collection<User> usersThatStartWithR = constrainedCollection(users, loginMustStartWithR);

    // when
    usersThatStartWithR.add(new User(2L, "arnold", "schwarz", "commando"));
}

You also get a notNull constraint out of the box:

// notice it's not an IllegalArgumentException :(
@Test(expected = NullPointerException.class)
public void notNullConstraintShouldWork() {
    // given
    Collection<Integer> users = newArrayList(1);
    Collection<Integer> notNullCollection = constrainedCollection(users, notNull());

    // when
    notNullCollection.add(null);
}

Thing to remember: constraints do not check the data already present in a collection.

Tables

Just as expected, a table is a collection with columns, rows and values. No more Map<T1, Map<T2, T3>>, I guess.
The usage is simple and you can transpose:

Table<Integer, String, String> table = HashBasedTable.create();
table.put(1, "a", "1a");
table.put(1, "b", "1b");
table.put(2, "a", "2a");
table.put(2, "b", "2b");

Table<String, Integer, String> transposedTable = Tables.transpose(table);

That’s all, folks. I didn’t present the util.concurrent, primitives, io and net packages, but you probably already know what to expect. Happy coding! Reference: Google Guava v07 examples from our JCG partner Jakub Nabrdalik at the Solid Craft blog....

Business Agility Through DevOps and Continuous Delivery

The principles of Continuous Delivery and DevOps have been around for a few years. Developers and system administrators who follow the lean-startup movement are more than familiar with both. However, more often than not, implementing either or both within a traditional, large IT environment is a significant challenge compared to a new-age, Web 2.0 type organization (think Flickr) or a Silicon Valley startup (think Instagram). This is a case study of how the consultancy firm I work for delivered the largest software upgrade in the history of one blue chip client, using both.

Background

The client is one of Australia’s largest retailers. The firm I work for is a trusted consultant that has worked with them for over a decade. During this time (thankfully), we have earned enough credibility to influence business decisions heavily dependent on IT infrastructure. A massive IT infrastructure upgrade was imminent when our client wanted to leverage their loyalty rewards program to fight competition head-on. With an existing user base of several million, and our client looking to double this number with the new campaign, the expectations from the software were nothing short of spectacular. In addition to ramping up the existing software, a new set of software needed to be in place, capable of handling hundreds of thousands of new user registrations per hour. Maintenance downtime was not an option (is it ever?) once the system went live (especially during the marketing campaign period).

Why DevOps?

Our long relationship with this client and the way IT operations is organized meant that adopting DevOps was evolutionary rather than revolutionary. The good folk at operations have a healthy respect and trust towards our developers, and the feeling is mutual. Our consultants provided development and 24/7 support for the software, which includes a Web portal, back office systems, partner integration systems and customer support systems. Adopting DevOps principles meant:

- our developers have more control over the environments the software runs in, from build to production;
- developers have a better understanding of the production environment the software eventually runs in, as opposed to their local machines;
- developers are able to clearly explain to the infrastructure operations group what the software does in each environment;
- simple, clear processes to manage the delivery of change;
- better collaboration between developers and operations;
- no need to raise tickets.

Why Continuous Delivery?

The most important reason was the reduced risk to our client’s new campaign. With a massive marketing campaign in full throttle, targeting millions of new user sign-ups, the software systems needed to maintain 100% up-time. Taking software offline for maintenance meant lost opportunity and money for the business. In a nutshell:

- A big bang approach would have been fine for the initial release, but when issues are found we want to deliver fixes without downtime.
- When the marketing campaign is running, improvements and features will need to be made to the software based on analytics and metrics. Delivering them in large batches (taking months) doesn’t deliver good business value.
- From a developer’s perspective, delivering small changes frequently helps to identify what went wrong easily, and either roll back or re-deploy a fix.

Years of Agile practice at the client’s site ensured that a proper culture was in place to adopt Continuous Delivery painlessly. We were already using Hudson/Jenkins for continuous integration.
We only needed the ‘last mile’ of the deployment pipeline to be built, in order to upgrade the existing technical process to one that delivered continuously.

The process: keep it simple and transparent

The development process we follow is simple, and the culture is such that each developer is aware that at any given moment one or more of their commits can be released to production. To keep the burden to a minimum, we use Subversion tags and branching so that release candidate revisions are tagged before a release candidate is promoted to the test environment (more on that later). The advantage of tagging early is that we have more control over the changes we deliver into production, for instance bug fixes versus feature releases. (Deployment pipeline diagram; image credit: Wikipedia.)

The production environment consists of a cluster of twenty nodes. Each node contains a Tomcat instance fronted by Apache. The load balancer provides functionality to release nodes from the cluster when required, although not as advanced as the API-level communication provided by Amazon’s elastic load balancer (this is an investment made by the client way back, so we opted to work with it rather than complain). Jenkins CI is used as the foundation for our continuous delivery process. The deployment pipeline consists of several stages. We kept the process simple, just like the diagram above, to minimize confusion.

1. Build – At this stage the latest revision from Subversion is checked out by Jenkins at the build server, unit tests are run and, once successful, the artifacts are bundled. The build environment is also equipped with infrastructure to test-deploy the software for verification; every build is deployed to this test infrastructure by Jenkins. A release candidate build is created with Subversion tagging and promoted via Jenkins promotion tasks.

2. Test (UAT) – Once a build is verified by developers, it’s promoted to the Test environment using a Jenkins task. A promotion indicates that the developers are confident of a build and it’s ready for quality assurance. The automated promotion process creates a tag in Subversion using the revision information packaged into the artifacts. Automated integration tests written using Selenium are run against the Test deployment. The QA team uses this environment to carry out their testing.

3. Production Verification – Once artifacts are tested by the test team and no failures are reported by the automated integration tests, a node is picked from the production cluster and, using a Jenkins job, prepared for smoke testing. This automated process removes the elected node from the cluster and deploys the tested artifacts to that node.

4. Production (Cut-over) – Once the smoke tests are done, the artifacts are deployed to the cluster by a separate Jenkins task. The deployment follows a round-robin schedule, where each node is taken off the load balancer to deploy and refresh the software. The deployment time is highly predictable and almost constant. As soon as a node is returned to the cluster, verification begins.

5. Rollback (Disaster recovery) – In case of a bad deployment, despite all the testing and verification, we roll back to the last stable deployment. Just like the cut-over deployment above, the time is predictable for a full rollback; the rollback process goes through the test server.
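To make the round-robin cut-over concrete, here is a rough sketch of what one Jenkins-driven deployment script could look like. This is an illustration only; the node names, the lb-control wrapper for the load balancer and the smoke-test script are hypothetical placeholders, since the actual integration was site-specific:

#!/bin/bash
# Hypothetical round-robin cut-over across the cluster (all names are placeholders).
NODES="node01 node02 node03"        # ... up to node20 in the real cluster
ARTIFACT="target/app.war"

for NODE in $NODES; do
    ./lb-control remove "$NODE"                           # take the node off the load balancer
    scp "$ARTIFACT" "deploy@$NODE:/opt/tomcat/webapps/"   # push the tested artifact
    ssh "deploy@$NODE" "/opt/tomcat/bin/shutdown.sh; /opt/tomcat/bin/startup.sh"
    ./smoke-test "$NODE" || exit 1                        # any failure stops the pipeline
    ./lb-control add "$NODE"                              # return the node to the cluster
done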
Implementation: our tools

- Jenkins – Jenkins is the user interface to the whole process. We used parameterized builds whenever we required a developer to interact with a certain job.
- Jenkins Batch Task plugin – We automated all repetitive tasks to minimize human error. The Task plugin was used extensively so that we have the flexibility to write scripts to do exactly what we want.
- Bash – Most of the hard work is done by a set of Bash scripts. We configured keyless login from the build server with appropriate permissions, so that these scripts can perform just like a human, once told what to do via Jenkins.
- Ant – The build scripts for the software were written in Ant. Ant also couples nicely with Jenkins and can be easily called from a shell script when needed.
- JUnit and Selenium – Automation is great, but without a good feedback loop it can lead to disaster. JUnit tests provide us with feedback for every single build, while Selenium does the same for builds that are promoted to the test environment. An error means immediate termination of the deployment pipeline for that build. This, coupled with testing done by QA, keeps defects reaching production to a minimum.
- Puppet – Puppet (http://puppetlabs.com) is used by the operations team to manage configurations across environments. Once the operations team builds a server for the developers, the developers have full access to go in and configure it to run the application. The most important part is to record everything done while in there. Once a developer is satisfied that the configuration is working, they give a walk-through to the operations team, who in turn update their Puppet recipes. These changes are rolled out to the cluster by Puppet immediately.
- Monitoring – The logs from all production nodes are harvested to a single location for easy analysis. A health check page is built into the application itself, so that we can check the status of the application running on each node.

Conclusion

Neither DevOps nor Continuous Delivery is a silver bullet. However, nurturing a culture where developers and operations trust each other and work together can be very rewarding for a business. Cultivating such a culture allows a business to reap the full benefits of an Agile development process. Because of the mutual trust between us (the developers) and our client’s operations team, we were able to implement a deployment pipeline capable of delivering features and fixes within hours if necessary, instead of months. During a crucial marketing campaign, this kind of agility allowed our client to keep the software infrastructure well in tune with feedback received through their marketing analytics and KPIs.

Further reading

A few articles you might find interesting:
- Four Principles of Low-Risk Software Releases
- On DVCS, continuous integration, and feature branches
- The Relationship Between Dev-Ops And Continuous Delivery

Reference: Business Agility Through DevOps and Continuous Delivery from our JCG partner Tyrell Perera at the Conundrum blog....

Observer Design Pattern in Java

‘Don’t call us, we’ll call you’… that’s the Hollywood OO (Object Oriented) Principle, and it’s exactly what the Observer pattern is about. In this post we’ll review this pattern and how it is used in Java; you may already have used it without knowing. According to the Head First Design Patterns book, this is the definition of the Observer pattern: it defines a one-to-many dependency between objects so that when one object changes state, all its dependents are notified and updated automatically.

Sounds familiar? Have you ever worked with Swing? Its event handling mechanism uses the Observer pattern. Let’s think about it: suppose you have a JButton and you want other objects to be notified when the button is pressed… have you done that? Sure! The other objects are not asking the button all the time whether it is pressed or not; they only wait for a notification from the button. Swing does it using java.awt.event.ActionListener, but in essence it is using the Observer pattern, even if we are not using interfaces like java.util.Observer or classes like java.util.Observable…

Now, talking about observers and observables, these two have existed since JDK 1.0 and you can use them to implement the Observer pattern in your applications. Let’s see how to do it. The objects you want to be waiting for notifications are called observers; they implement the interface java.util.Observer. This interface defines only one method, +update(Observable,Object):void, which is called whenever the observed object is changed. The first parameter, an Observable, is the object that changed. The second parameter may be used as follows:

- If using PUSH notifications, the Object parameter contains the information needed by the observers about the change.
- If using PULL notifications, the Object parameter is null and you should use the Observable parameter in order to extract the information needed.

When to PUSH or PULL? It’s up to your implementation.

The object you want to be observed is called the observable and it has to subclass the java.util.Observable class. Yes, subclass. That’s the dark side of the built-in implementation of the Observer pattern in Java: sometimes you simply can’t subclass. We’ll talk about this in a minute… Once subclassed, you will inherit the following methods, among others:

- +addObserver(Observer):void, which adds the Observer passed in as a parameter to the set of observers.
- +deleteObserver(Observer):void, which deletes the Observer passed in as a parameter from the set of observers.
- setChanged():void, which marks the Observable as having been changed. This method is protected, so you can only call it if you subclass the java.util.Observable class. Call it before notifying your observers.
- +notifyObservers():void, which notifies the registered observers using PULL. It means that when +update(Observable,Object):void is invoked on the Observer, the Object parameter will be null.
- +notifyObservers(Object):void, which notifies the registered observers using PUSH. It means that when +update(Observable,Object):void is invoked on the Observer, the Object parameter will be the same parameter passed in to +notifyObservers(Object):void.

So, what happens if the class you want to be the Observable is already subclassing another class? Well, then you have to write your own Observer pattern implementation, because you can’t use the one built into Java.
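Before building your own, here is a minimal sketch of the built-in approach (the class names are illustrative, not from the original post):

import java.util.Observable;
import java.util.Observer;

// The observable subclasses java.util.Observable
class NewsAgency extends Observable {
    void publish(String headline) {
        setChanged();              // mark this Observable as changed
        notifyObservers(headline); // PUSH: pass the change data to every registered observer
    }
}

// The observer implements java.util.Observer
class NewsReader implements Observer {
    public void update(Observable source, Object arg) {
        System.out.println("Received: " + arg); // arg is the pushed headline
    }
}

public class ObserverDemo {
    public static void main(String[] args) {
        NewsAgency agency = new NewsAgency();
        agency.addObserver(new NewsReader());
        agency.publish("Observer pattern in action"); // prints "Received: Observer pattern in action"
    }
}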
The following diagram shows the basic concepts of the Observer pattern, so you can build your own implementation: One last thing, remember the most important OO principle of all: always use the simplest solution that meets your needs, even if it doesn’t include a pattern. Reference: Observer Pattern and Java from our JCG partner Alexis Lopez at the Java and ME blog....

Java Code Quality Tools – Overview

Recently, I had a chance to present this subject at a local IT community meetup. Here is the basic presentation (Java Code Quality Tools) and a more meaningful mind map. But I think I need to cover this subject more deeply; this blog post should be a starting point for further investigation in this direction.

1. CodePro Analytix
It’s a great tool (an Eclipse plugin) for improving software quality. It has the following key features: Code Analysis, JUnit Test Generation, JUnit Test Editor, Similar Code Analysis, Metrics, Code Coverage and Dependency Analysis.

2. PMD
It scans Java source code and looks for potential problems: possible bugs, dead code, suboptimal code, overcomplicated expressions and duplicate code.

3. FindBugs
It looks for bugs in Java programs. It can detect a variety of common coding mistakes, including thread synchronization problems, misuse of API methods, etc.

4. Cobertura
It’s a free Java tool that calculates the percentage of code accessed by tests. It can be used to identify which parts of your Java program are lacking test coverage. It is based on jcoverage.

5. Emma
It is a fast Java code coverage tool based on bytecode instrumentation. It differs from existing tools by enabling coverage profiling on large-scale enterprise software projects, with simultaneous emphasis on fast individual development.

6. Checkstyle
It is a development tool to help programmers write Java code that adheres to a coding standard.

7. JBoss Tattletale
JBoss Tattletale is a tool that can help you get an overview of the project you are working on or a product that you depend on. The tool will recursively scan a directory for JAR files and generate linked and formatted HTML reports.

8. UCDetector
UCDetector (Unnecessary Code Detector) is an open source Eclipse plugin tool to find unnecessary (dead) Java code. It also tries to make code final, protected or private. UCDetector also finds cyclic dependencies between classes.

9. Sonar
Sonar is a continuous quality control tool for Java applications. Its basic purpose in life is to join your existing continuous integration tools to place all your development projects under quality control.

10. XRadar
The XRadar is an open, extensible code report tool that produces HTML/SVG reports of the system’s current state and its development over time. It uses DependencyFinder, JDepend, PMD, PMD-CPD, JavaNCSS, Cobertura, Checkstyle, XSource, JUnit, Java2HTML, Ant and Maven.

11. QALab
QALab consolidates data from Checkstyle, PMD, FindBugs and Simian and displays it in one consolidated view. QALab keeps track of the changes over time, thereby allowing you to see trends over time. You can tell whether the number of violations has increased or decreased, on a per-file basis or for the entire project. It also plots charts of this data. QALab plugs in to Maven or Ant.

12. Clirr
Clirr is a tool that checks Java libraries for binary and source compatibility with older releases. Basically you give it two sets of jar files and Clirr dumps out a list of changes in the public API. The Clirr Ant task can be configured to break the build if it detects incompatible API changes. In a continuous integration process, Clirr can automatically prevent the accidental introduction of binary or source compatibility problems.

13. JDiff
JDiff is a Javadoc doclet which generates an HTML report of all the packages, classes, constructors, methods, and fields which have been removed, added or changed in any way, including their documentation, when two APIs are compared.
This is very useful for describing exactly what has changed between two releases of a product. Only the API (Application Programming Interface) of each version is compared. It does not compare what the source code does when executed. 14. JLint It checks your Java code and find bugs, inconsistencies and synchronization problems by doing data flow analysis and building the lock graph. 15. JDepend JDepend traverses Java class file directories and generates design quality metrics for each Java package. JDepend allows you to automatically measure the quality of a design in terms of its extensibility, reusability, and maintainability to effectively manage and control package dependencies. 16. cloc cloc counts blank lines, comment lines, and physical lines of source code in many programming languages. 17. Dependometer Dependometer performs a static analysis of physical dependencies within a software system. Dependometer validates dependencies against the logical architecture structuring the system into classes, packages, subsystems, vertical slices and layers and detects cycles between these structural elements. Furthermore, it calculates a number of quality metrics on the different abstraction layers and reports any violations against the configured thresholds. 18. Hammurapi Hammurapi is an open source code inspection tool. Its release comes with more than 100 inspectors which inspect different aspects of code: Compliance with EJB specification, threading issues, coding standards, and much more. 19. JavaNCSS JavaNCSS is a simple command line utility which measures two standard source code metrics for the Java programming language. The metrics are collected globally, for each class and/or for each function. 20. DCD DCD finds dead code in your Java applications. 21. Classycle Classycle’s Analyser analyses the static class and package dependencies in Java applications or libraries. It is especially helpful for finding cyclic dependencies between classes or packages. Classycle is similar to JDepend which does also a dependency analysis but only on the package level. 22. ckjm The program ckjm calculates Chidamber and Kemerer object-oriented metrics by processing the bytecode of compiled Java files. The program calculates for each class the following six metrics proposed by Chidamber and Kemerer. 23. Jameleon Jameleon is an automated testing framework that can be easily used by technical and non-technical users alike. One of the main concepts behind Jameleon is to create a group of keywords or tags that represent different screens of an application. All of the logic required to automate each particular screen can be defined in Java and mapped to these keywords. The keywords can then be organized with different data sets to form test scripts without requiring an in-depth knowledge of how the application works. The test scripts are then used to automate testing and to generate manual test case documentation. 24. DoctorJ DoctorJ analyzes Java code, in the following functional areas: documentation verification, statistics generation and syntax analysis. 25. Macker Macker is a build-time architectural rule checking utility for Java developers. It’s meant to model the architectural ideals programmers always dream up for their projects, and then break — it helps keep code clean and consistent. You can tailor a rules file to suit a specific project’s structure, or write some general ‘good practice’ rules for your code. 
Macker doesn’t try to shove anybody else’s rules down your throat; it’s flexible, and writing a rules file is part of the development process for each unique project. 26. Squale Squale is a qualimetry platform that allows to analyze multi-language software applications in order to give a sharp and comprehensive picture of their quality: High level factors for top-managers and Practical indicators for development teams. 27. SourceMonitor The freeware program SourceMonitor lets you see inside your software source code to find out how much code you have and to identify the relative complexity of your modules. For example, you can use SourceMonitor to identify the code that is most likely to contain defects and thus warrants formal review. 28. Panopticon The Panopticode project provides a set of open source tools for gathering, correlating, and displaying code metrics. 29. Eclipse Metrics plugin Provide metrics calculation and dependency analyzer plugin for the Eclipse platform. Measure various metrics with average and standard deviation and detect cycles in package and type dependencies and graph them. 30. QJ-Pro QJ-Pro is a comprehensive software inspection tool targeted towards the software developer. Developers can automatically inspect their Java source code and improve their Java programming skills as they write their programs. QJ-Pro provides descriptive Java patterns explaining error prone code constructs and providing solutions for it. 31. Byecycle Byecycle is an auto-arranging dependency analysis plugin for Eclipse. Its goal is to make you feel sick when you see bad code and to make you feel happy when you see good code. 32. Coqua Coqua measures 5 distinct Java code quality metrics, providing an overview and history for the management, and down-to-the-code, detailed views for the developer. Metrics can be defined per team. Ideal for mid- to large-sized and/or offshore projects. 33. Dependency Finder Extracts dependencies and OO metrics from Java class files produced by most Java compilers. 34. Jalopy Jalopy is an easily configurable source code formatter that can detect, and fix, a number of code convention flaws that might appear in Java code. Jalopy is more of a code fixer than a code checker. Jalopy plug-ins are present for most IDEs and, in most cases, they gel quite seamlessly with the IDE. 35. JarAnalyzer JarAnalyzer is a dependency management tool for .jar files. JarAnalyzer will analyze all .jar in a given directory and identify the dependencies between each. Output formats include xml, with a stylesheet included to transform it to html, and GraphViz DOT, allowing you to produce a visual component diagram showing the relationships between .jar files. The xml output includes important design metrics such as Afferent and Efferent coupling, Abstractness, Instability, and Distance. There is also an Ant task available that allows you to include JarAnalyzer as part of your build script. 36. Condenser Condenser is a tool for finding and removing duplicated Java code. Unlike tools that only locate duplicated code, the aim of Condenser is to also automatically remove duplicated code where it is safe to do so. 37. Relief Relief provides a new look on Java projects. Relying on our ability to deal with real objects by examining their shape, size or relative place in space it gives a ‘physical’ view on java packages, types and fields and their relationships, making them easier to handle. 
Lets discuss quickly how we interprete physical properties and how it can help us to grasp project characteristics. 38. JCSC JCSC is a powerful tool to check source code against a highly definable coding standard and potential bad code. The standard covers naming conventions for class, interfaces, fields, parameter, … . Also the structural layout of the type (class/interface) can be defined. Like where to place fields, either before or after the methods and in which order. The order can be defined through the visibility or by type (instance, class, constant). The same is applicable for methods. Each of those rules is highly customizable. Readability is enhanced by defining where to put white spaces in the code and when to use braces. The existence of correct JavaDoc can be enforced and various levels. Apart from that, it finds weaknesses in the the code — potential bugs — like empty catch/finally block, switch without default, throwing of type ‘Exception’, slow code. 39. Spoon Spoon is a Java program processor that fully supports Java 5. It provides a complete and fine-grained Java metamodel where any program element (classes, methods, fields, statements, expressions…) can be accessed both for reading and modification. Spoon can be used on validation purpose, to ensure that your programs respect some programming conventions or guidelines, or for program transformation, by using a pure-Java template engine. 40. Lint4j Lint4j (‘Lint for Java’) is a static Java source and byte code analyzer that detects locking and threading issues, performance and scalability problems, and checks complex contracts such as Java serialization by performing type, data flow, and lock graph analysis. 41. Crap4j Crap4j is a Java implementation of the CRAP (Change Risk Analysis and Predictions) software metric – a mildly offensive metric name to help protect you from truly offensive code. 42. PathFinder Java PathFinder (JPF) is a system to verify executable Java bytecode programs. In its basic form, it is a Java Virtual Machine (JVM) that is used as an explicit state software model checker, systematically exploring all potential execution paths of a program to find violations of properties like deadlocks or unhandled exceptions. Unlike traditional debuggers, JPF reports the entire execution path that leads to a defect. JPF is especially well-suited to finding hard-to-test concurrency defects in multithreaded program 43. Soot Soot can be used as a stand alone tool to optimize or inspect class files, as well as a framework to develop optimizations or transformations on Java bytecode. 44. ESC/Java2 The Extended Static Checker for Java version 2 (ESC/Java2) is a programming tool that attempts to find common run-time errors in JML-annotated Java programs by static analysis of the program code and its formal annotations. Users can control the amount and kinds of checking that ESC/Java2 performs by annotating their programs with specially formatted comments called pragmas. This list includes open sourced and free tools. I intentionally have excluded commercial tools. I’m sure there are much more tools. In case your know some of them which isn’t listed here please add comment to this post. Don’t forget to share! Reference: Java Code Quality Tools – Overview from our JCG partner Orest Ivasiv at the Knowledge Is Everything blog....
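To make the list above a little less abstract, here is one hedged taste of putting a tool from it to work. JDepend (item 15) can be driven from a build script, but it also exposes a small programmatic API in the jdepend.framework package. The sketch below is based on JDepend's documented API; the classes directory and the report formatting are assumptions, so adjust them to your own build layout.

    import java.util.Collection;

    import jdepend.framework.JDepend;
    import jdepend.framework.JavaPackage;

    public class DependencyReport {

        public static void main(String[] args) throws Exception {
            JDepend jdepend = new JDepend();
            // Assumption: compiled classes live under target/classes (Maven layout)
            jdepend.addDirectory("target/classes");

            // analyze() returns a raw Collection of JavaPackage objects
            Collection packages = jdepend.analyze();
            for (Object o : packages) {
                JavaPackage p = (JavaPackage) o;
                System.out.printf("%-40s Ca=%d Ce=%d I=%.2f cycle=%b%n",
                        p.getName(),
                        p.afferentCoupling(),
                        p.efferentCoupling(),
                        p.instability(),
                        p.containsCycle());
            }
        }
    }

Several of the other tools (Classycle, ckjm, Dependency Finder) can be scripted in a similar spirit, which makes it straightforward to wire any of these checks into an Ant or Maven build.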
spring-security-logo

Spring Security using API Authentication

Background

While there are many blog posts that detail how to use Spring Security, I often still find it challenging to configure when a problem domain lies outside of the standard LDAP or database authentication. In this post, I'll describe some simple customizations to Spring Security that enable it to be used with a REST-based API call. Specifically, the use case is where you have an API service that returns a user object that includes a SHA-256 password hash.

Setup

The prerequisites for running this sample are Git and Maven, plus your choice of IDE (tested with both Eclipse and IntelliJ). The source code can be found at: https://github.com/dajevu/Spring3SecurityUsingAPI. After pulling down the code, perform the following steps:

1. In a terminal window, cd to the Shared directory located under the root where the source code resides. Issue the command mvn clean install. This will build the Shared sub-project and install the jar into your local Maven repository.
2. Within Eclipse or IntelliJ, import the project as a Maven project. In Eclipse, this will result in 3 projects being created: Shared, SpringWebApp, and RestfulAPI. In IntelliJ, these will be represented as sub-projects. No errors should exist after the compilation process is complete.
3. Change directory to RestfulAPI, then issue the command mvn jetty:run to run the API webapp. You can then issue the following URL, which will bring back a User object represented in JSON: http://localhost:9090/RestfulAPI/api/v1/user/john
4. Open up a new terminal window, cd to the SpringWebApp directory located under the project root, and issue the command mvn jetty:run. This will launch a standard Spring webapp that incorporates Spring Security. You can access the single HTML page at: http://localhost:8080/SpringWebApp/. After clicking the Login link, log in with the username john and the password doe. You should be redirected to a Hello Admin page.

In order to demonstrate the solution, three Maven modules are used:

- SpringWebApp. A typical Spring webapp that serves up a single JSP page. The contents of the page vary depending upon whether the user is currently logged in. When first visiting the page, a Login link appears, which directs the user to the built-in Spring Security login form. When they attempt to log in, a RESTEasy client is used to place a call to the API service (described below), which returns a JSON string that is converted into a Java object via the RESTEasy client. The details of how Spring Security is configured are discussed in the following sections.
- RestfulAPI. An API service that serves JSON requests. It is configured using RESTEasy (a JAX-RS implementation), and is described in more detail in the next section.
- Shared. This contains a few Java classes that are shared between the other two projects: specifically, the User DTO and the RESTEasy proxy definition (it's shared because it can also be used by the RESTEasy client).

RestfulAPI Dissection

The API webapp is configured using RESTEasy's Spring implementation. The RESTEasy documentation is very thorough, so I won't go into a detailed explanation of its setup. A single API call is defined (in UserProxy in the Shared project) that returns a static JSON string.
The API's proxy (or interface) is defined as follows:

Resteasy API Proxy

    import javax.ws.rs.Consumes;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    @Produces(MediaType.APPLICATION_JSON)
    @Consumes(MediaType.APPLICATION_JSON)
    @Path(UserProxy.Urls.BASE_URL)
    public interface UserProxy {

        public interface Urls {
            public static final String BASE_URL = "/api/v1";
            public static final String USER = "/user/{username}";
        }

        @GET
        @Produces({ MediaType.APPLICATION_JSON })
        @Path(UserProxy.Urls.USER)
        public User getUserByUsername(@PathParam("username") String username);
    }

For those of you familiar with JAX-RS, you'll easily follow this configuration: it defines an API URI that responds to requests sent to the URL path /api/v1/user/{username}, where {username} is replaced with an actual username value. The implementation of this service simply returns a static response; about the only thing remotely complicated in it is the SHA-256 hashing of the user's password, and we'll see shortly how that gets interpreted by Spring Security. When the URL is accessed, a JSON string representing the User (including the SHA-256 password hash) is returned. The webapp's web.xml contains the setup configuration to service RESTEasy requests, so if you're curious, take a look at that.

SpringWebApp Dissection

Now we can look at the Spring Security configuration. The web.xml file for the project configures it as a Spring application and specifies the file applicationContext-security.xml as the initial Spring configuration file. This file is where most of the magic occurs. Walking through it by line number: lines 3 through 5 instruct Spring to look for Spring-backed classes under the com.acme package and enable Spring annotation support. Line 7 loads the properties specified in the application.properties file (this is used to specify the API host). Lines 9 through 11 enable Spring Security for the application; normally, as a child element of http, you would specify which pages should be protected using roles, but to keep this example simple, that wasn't configured. Lines 13-17 are where the customizations to base Spring Security begin: they define a custom authentication-provider that references a bean named userDetailsSrv, and that bean is implemented by the custom class com.acme.security.UserDetailsService (line 19).

This class implements the Spring interface org.springframework.security.core.userdetails.UserDetailsService, which requires overriding the method loadUserByUsername. That method is responsible for retrieving the user from the authentication provider/source; if no matching user is found, a UsernameNotFoundException is thrown (line 28). The returned user must contain a password property in order for Spring Security to compare it against what was provided in the form, and in this case, as we've seen previously, the password is returned as a SHA-256 hash. In our implementation, the user is looked up through the APIHelper class, which we'll cover next. The returned API data is then populated into a custom class called UserDetails, which implements the Spring interface of the same name. That interface requires concrete implementations of the getUsername() and getPassword() methods; Spring invokes those in the next step of security processing to compare the stored values against what was entered in the web form.
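Since the original listing is not reproduced here, below is a hedged sketch of what a UserDetailsService along these lines might look like. The @Service bean name, the fetchUser() helper method, and the UserDetails(User) constructor are illustrative assumptions; the authoritative class lives in the GitHub repository linked above.

    package com.acme.security;

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.security.core.userdetails.UsernameNotFoundException;
    import org.springframework.stereotype.Service;

    // Assumption: the shared User DTO's package name
    import com.acme.domain.User;

    @Service("userDetailsSrv") // matches the bean ref named in applicationContext-security.xml
    public class UserDetailsService
            implements org.springframework.security.core.userdetails.UserDetailsService {

        @Autowired
        private APIHelper apiHelper; // wraps the RESTEasy call, covered next

        @Override
        public org.springframework.security.core.userdetails.UserDetails
                loadUserByUsername(String username) throws UsernameNotFoundException {
            // Look the user up through the REST API rather than LDAP or a database
            User user = apiHelper.fetchUser(username);
            if (user == null) {
                throw new UsernameNotFoundException("No user found for: " + username);
            }
            // The custom UserDetails wrapper exposes getUsername()/getPassword();
            // getPassword() returns the SHA-256 hash delivered by the API
            return new UserDetails(user);
        }
    }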
So how does Spring go about comparing the SHA-256 hash returned by the API against the password value submitted in the form? If you look back at the XML configuration, it contained a passwordEncoder setting; this reference points to the Spring class ShaPasswordEncoder. That class computes an SHA-256 hash of the password provided through the web form, and Spring then compares that computed value against what we returned via the API.

Let's close this out by looking at the APIHelper class. The first thing you'll notice, on lines 8 and 9, is the injection of the API.host property; as you recall, this was set in the application.properties file, and it identifies the host to which the API call is posted (since everything runs locally, localhost is specified). Lines 17 through 20 use one of the RESTEasy client mechanisms to post a JSON RESTful call (RESTEasy also has what is called the client proxy implementation, which is easier to use and requires less code, but offers less low-level control). The resulting response from the API is then converted from JSON into the User Java object by way of Jackson on line 26, and that Java object is returned to the UserDetailsService. (A hedged sketch of such a helper appears after the summary.)

Summary/Wrap-up

As you can see, the actual work involved in customizing Spring Security to authenticate against an API call (or really any external service) is rather straightforward. Only a few classes have to be implemented, but it can be tricky trying to figure this out for the first time; hence the reason I included the complete end-to-end example. Reference: Spring Security using API Authentication from our JCG partner Jeff Davis at the Jeff's SOA Ruminations blog....
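To round out the picture, here is a hedged approximation of the APIHelper just described. It assumes the RESTEasy 2.x ClientRequest/ClientResponse client API and a Jackson 1.x ObjectMapper (plausible for code of this vintage), and the fetchUser() method name matches the sketch above; the real implementation is in the GitHub repository.

    package com.acme.security;

    import org.codehaus.jackson.map.ObjectMapper;    // Jackson 1.x (assumed)
    import org.jboss.resteasy.client.ClientRequest;  // RESTEasy 2.x client API (assumed)
    import org.jboss.resteasy.client.ClientResponse;
    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.stereotype.Component;

    // Assumption: the shared User DTO's package name
    import com.acme.domain.User;

    @Component
    public class APIHelper {

        // Injected from application.properties (the API.host property described above)
        @Value("${API.host}")
        private String apiHost;

        /** Fetches the user JSON from the API and maps it onto the shared User DTO. */
        public User fetchUser(String username) {
            try {
                // URL shape follows the setup section: /RestfulAPI/api/v1/user/{username}
                ClientRequest request = new ClientRequest(
                        "http://" + apiHost + ":9090/RestfulAPI/api/v1/user/" + username);
                request.accept("application/json");

                ClientResponse<String> response = request.get(String.class);
                if (response.getStatus() != 200) {
                    return null; // caller raises UsernameNotFoundException
                }
                // Jackson converts the JSON payload into the User object
                return new ObjectMapper().readValue(response.getEntity(), User.class);
            } catch (Exception e) {
                return null; // treated as "user not found" by the UserDetailsService
            }
        }
    }

One design note: ShaPasswordEncoder produces a hex-encoded digest by default, so the hash stored behind the API needs to use the same encoding for Spring's comparison to succeed.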
junit-logo

Testing Custom Exceptions with JUnit’s ExpectedException and @Rule

Exception Testing

Why test exception flows? Just like with all of your code, test coverage writes a contract between your code and the business functionality it is supposed to produce, leaving you with living documentation of the code along with the added ability to stress the functionality early and often. I won't go into the many benefits of testing here; instead I will focus on just exception testing. There are many ways to test an exception flow thrown from a piece of code. Let's say that you have a guarded method that requires an argument to be not null. How would you test that condition? How do you keep JUnit from reporting a failure when the exception is thrown? This blog covers a few different methods, culminating with JUnit's ExpectedException implemented with JUnit's @Rule functionality.

The 'old' way

In a not so distant past, the process to test an exception required a dense amount of boilerplate code: you would start a try/catch block, report a failure if your code did not produce the expected behavior, and then catch the exception, looking for the specific type. Here is an example:

    import static org.junit.Assert.assertTrue;
    import static org.junit.Assert.fail;

    import org.junit.Test;

    public class MyObjTest {

        @Test
        public void getNameWithNullValue() {
            try {
                MyObj obj = new MyObj();
                obj.setName(null);
                fail("This should have thrown an exception");
            } catch (IllegalArgumentException e) {
                assertTrue(e.getMessage().equals("Name must not be null"));
            }
        }
    }

As you can see from this old example, many of the lines in the test case exist only to make up for the lack of built-in support for exception testing. One good point to make for the try/catch method is the ability to test the specific message and any custom fields on the expected exception. We will explore this a bit further down with JUnit's ExpectedException and @Rule annotation.

JUnit adds expected exceptions

JUnit responded to the users' need for exception handling by adding a @Test annotation field, 'expected'. The intention is that the entire test case will pass if the type of exception thrown matches the exception class present in the annotation.

    import org.junit.Test;

    public class MyObjTest {

        @Test(expected = IllegalArgumentException.class)
        public void getNameWithNullValue() {
            MyObj obj = new MyObj();
            obj.setName(null);
        }
    }

As you can see from the newer example, there is quite a bit less boilerplate code and the test is very concise. However, there are a few flaws. The main flaw is that the test condition is too broad: suppose you have two variables in a signature and both cannot be null; how do you know which variable the IllegalArgumentException was thrown for? What happens when you have extended a Throwable and need to check for the presence of a field? Keep these in mind as you read further; solutions will follow.

JUnit @Rule and ExpectedException

If you look at the previous example, you might see that you are expecting an IllegalArgumentException to be thrown, but what if you have a custom exception? What if you want to make sure that the message contains a specific error code or message? This is where JUnit really excelled, by providing a JUnit @Rule object specifically tailored to exception testing. If you are unfamiliar with JUnit @Rule, read the docs here.

ExpectedException

JUnit provides a class, ExpectedException, intended to be used as a @Rule. ExpectedException allows your test to declare that an exception is expected and gives you some basic built-in functionality to clearly express the expected behavior.
Unlike the @Test(expected) annotation feature, the ExpectedException class allows you to test for specific error messages and custom fields via the Hamcrest matchers library.

An example of JUnit's ExpectedException:

    import org.junit.Rule;
    import org.junit.Test;
    import org.junit.rules.ExpectedException;

    public class MyObjTest {

        @Rule
        public ExpectedException thrown = ExpectedException.none();

        @Test
        public void getNameWithNullValue() {
            thrown.expect(IllegalArgumentException.class);
            thrown.expectMessage("Name must not be null");

            MyObj obj = new MyObj();
            obj.setName(null);
        }
    }

As I alluded to above, the framework allows you to test for specific messages, ensuring that the exception being thrown is the case that the test is specifically looking for. This is very helpful when the nullability of multiple arguments is in question.

Custom Fields

Arguably the most useful feature of the ExpectedException framework is the ability to use Hamcrest matchers to test your custom/extended exceptions. For example, you have a custom/extended exception that is to be thrown in a method, and the exception carries an 'errorCode' field. How do you test that functionality without reintroducing the boilerplate code from the try/catch block listed above? How about a custom Matcher! This code is available at: https://github.com/mike-ensor/custom-exception-testing

Solution: first the test case

    import org.junit.Rule;
    import org.junit.Test;
    import org.junit.rules.ExpectedException;

    public class MyObjTest {

        @Rule
        public ExpectedException thrown = ExpectedException.none();

        @Test
        public void someMethodThatThrowsCustomException() {
            thrown.expect(CustomException.class);
            thrown.expect(CustomMatcher.hasCode("110501"));

            MyObj obj = new MyObj();
            obj.methodThatThrowsCustomException();
        }
    }

Solution: the custom matcher

    import com.thepixlounge.exceptions.CustomException;
    import org.hamcrest.Description;
    import org.hamcrest.TypeSafeMatcher;

    public class CustomMatcher extends TypeSafeMatcher<CustomException> {

        public static CustomMatcher hasCode(String item) {
            return new CustomMatcher(item);
        }

        private String foundErrorCode;
        private final String expectedErrorCode;

        private CustomMatcher(String expectedErrorCode) {
            this.expectedErrorCode = expectedErrorCode;
        }

        @Override
        protected boolean matchesSafely(final CustomException exception) {
            foundErrorCode = exception.getErrorCode();
            return foundErrorCode.equalsIgnoreCase(expectedErrorCode);
        }

        @Override
        public void describeTo(Description description) {
            description.appendValue(foundErrorCode)
                    .appendText(" was not found instead of ")
                    .appendValue(expectedErrorCode);
        }
    }

NOTE: Please visit https://github.com/mike-ensor/custom-exception-testing to get a copy of a working Hamcrest Matcher, JUnit @Rule and ExpectedException.

And there you have it: a quick overview of different ways to test exceptions thrown by your code, along with the ability to test for specific messages and fields from within custom exception classes. Please be specific with your test cases and try to target the exact case you have set up for your test. Remember, tests can save you from introducing side-effect bugs! Happy coding and don't forget to share! Reference: Testing Custom Exceptions w/ JUnit's ExpectedException and @Rule from our JCG partner Mike at the Mike's site blog....
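Postscript: none of the snippets above actually show MyObj itself. A minimal sketch consistent with the tests (assumed here, since the original post never lists it, and the CustomException constructor signature is likewise an assumption) might look like this:

    public class MyObj {

        private String name;

        public void setName(String name) {
            // The guard clause exercised by the getNameWithNullValue() tests
            if (name == null) {
                throw new IllegalArgumentException("Name must not be null");
            }
            this.name = name;
        }

        public String getName() {
            return name;
        }

        // Guarded method for the custom-exception tests; "110501" matches the
        // error code expected by CustomMatcher.hasCode(...)
        public void methodThatThrowsCustomException() {
            throw new CustomException("110501");
        }
    }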