

JavaOne 2012: A Walk Through of Groovy’s AST Transformations

I made the very short walk from Hilton Plaza A/B back to Hilton Golden Gate 3/4/5 to see the presentation "Walk through Groovy's AST Transformations." Groovy's AST transformations are something I've dabbled with directly a few times, but I have more often benefited from others' work with them. I had started reading the Packt Publishing book Groovy for Domain-Specific Languages, but wanted to attend this presentation to reinvigorate my interest and kick-start my increased use of this powerful tool.

Andres Almiray (Canoo) presented this session on Groovy AST transformations. It did not surprise me that most of the audience had Groovy experience, given that AST transformations are likely more appealing to those with some familiarity with Groovy already. Almiray defined AST transformations as "essentially byte code generation" that "enables compile-time metaprogramming." He showed that Groovy has two types of AST transformations, global and local, and the focus of this presentation was on global transformations. The AST transformations framework was added to Groovy years ago, but things were made much easier in Groovy 1.7.

Almiray covered the Delegate transformation (the @Delegate annotation), which allows the compiled code to have all of the public methods of the field being delegated to. He explained that @Delegate works with interfaces as well as classes, and that any newly defined method takes precedence over a delegate method with the same signature. Similarly, the first delegate encountered takes precedence over the same method signature on other delegates.

Almiray then covered @Singleton (the Singleton transformation), stating that the singleton implemented with this transformation meets the definition of a safe singleton described in Josh Bloch's Effective Java. @Immutable (the Immutable transformation) was covered next. Just as @Singleton automatically implements all the necessary rules for singletons, @Immutable implements the rules for immutability. Almiray noted that different exceptions are thrown for attempts to set a property on an immutable Groovy class via property access versus a setter method.

The next Groovy AST transformation covered was @Category (the Category transformation). This was the first covered transformation that requires usage within Groovy code (not within Java code) to be fully used. The Mixin transformation (@Mixin) was also covered. Almiray moved on to @Grab (the Grab transformation), one I have posted about before. @Grab is useful for downloading dependencies at runtime; I like it for the same reason Almiray mentioned: "it's perfect for self-contained scripts." Almiray introduced @Synchronized (the Synchronized transformation) as a Groovier way to specify synchronized blocks, and covered @Lazy (the Lazy transformation), which is used to initialize values only when actually needed (when first used). Almiray pointed out Groovy's ability to access classes' private fields and cautioned that this should be used only for unit testing and, in production code, only when absolutely necessary.

Almiray demonstrated the use of @Newify (the Newify transformation) before showing a code sample using @Bindable (the Bindable transformation), which he stated was added to Groovy to make the use of Swing easier. The transformation makes a class observable and removes the need to write all the code for doing this explicitly. The @Vetoable transformation similarly makes it easier to veto a property change.
As I described in my post Easy Groovy Logger Injection and Log Guarding, @Log (the Log transformation) can be very useful (as are @Commons, @Log4j, and @Slf4j). Almiray covered some of my favorite and most often-used Groovy transformations: @ToString (see my post), @EqualsAndHashCode (see my post), @TupleConstructor (see my post), and the combination of them all, @Canonical (see my post). Following coverage of @Canonical and its constituent transformations, Almiray moved on to @IndexedProperty (related post). He then listed several others without code samples: @AutoClone, @AutoExternalize, @ConditionalInterrupt, @TimedInterrupt, @ThreadInterrupt, @PackageScope ("gain back package-level access specificity in Groovy"), @WithReadLock, @WithWriteLock, and @Field ("mostly used inside scripts"). I was happy to see Almiray mention the addition of @TypeChecked to support "static Groovy!" He referenced a later presentation on the new features of Groovy 2.0 for more details. Almiray mentioned new transformations specific to Grails (@Entity) and to Griffon (@EventPublisher, @PropertyListener, @Threading, and more). He referenced @Scalify and @Bytecode as well.

Although I was already familiar with a large percentage of the Groovy AST transformations covered in this presentation, it was still worthwhile to attend and learn of, or be reminded of, other useful transformations that are available.

Reference: JavaOne 2012: A Walk Through of Groovy's AST Transformations from our JCG partner Dustin Marx at the Inspired by Actual Events blog.

Opinion: Performance Testing

Performance tuning an application is time consuming, and expensive. Useful tests often need dedicated hardware to run on. It's specialised and time consuming to prepare the groundwork and write the various fixtures needed, and the only perceived benefit is preventing a production issue that you don't even know will happen yet.

Stereotypical Scenarios and Outcomes

Here are some stereotypes I've encountered:

- A feature release contains a new feature that wasn't (or even could not have been) tested adequately and has a major performance issue from the outset.
- A change to existing code, perhaps to finesse or refine an existing feature, perhaps a change that's not been requested by the customer, introduces a catastrophic performance issue.
- A sleeper: a change in the system that only manifests after some time (e.g. running out of values in a 4-byte integer serial column, meaning that inserts into that table require the database engine to scan for unused keys).
- Change of use: the system was designed to be used in one way, and then the customer starts using it in another, unanticipated way.
- Age: the database gets large and queries start to slow down; I've heard this referred to in somewhat woolly terms as "degradation".

A common resolution to these stereotypes is a herculean effort: late nights and weekends with people on the phone asking "When will my site be fixed? Who's responsible for this?". I've heard this called "hero culture". It's a mentality that can perversely reward those who might have been expected to prevent the problem, as they are the ones best capable of fixing it. After resolution comes a period of self-reflection. People ask what can be done to show willingness to tackle the problem; perhaps a one-off performance tuning exercise by a specialist, which resolves the current issues. But if the analysis is done by a seconded specialist who's not part of the team, it's an exercise whose lessons are not disseminated, and which is not repeatable. Those who do not learn from the past are doomed to repeat it. This might be a fait accompli: if performance testing is more expensive than the cost of fixing periodic production issues, then this is the most logical, most cost-effective approach.

Many systems, perhaps due to cost or lack of time, have not been developed in a way that is amenable to automated testing. After all, when a system is first written, you might not know if it's going to be a commercial success, so why spend money making the system maintainable if it might never need to be maintained? The inability to test the performance of changes can mean that improvements to the system are prevented; the cost of introducing a bug cannot be mitigated. People start to fear change and the product stagnates. A younger, faster competitor learns from your success, quickly writes a more modern, cheaper version of your software and starts taking your business.

Automated Performance Testing

How do you implement this? How do you take a system that might even be hostile to testing, and change it so that releases become more bug free and robust?

Foundations

You should make sure your code base is reliable before you consider performance testing. Most of this is common sense:

- Change and test incrementally.
- Bugs first, features second; new features will only introduce new bugs, so make sure you fix any bugs first.
- Use a mature, well-developed build system.
- Be able to write and execute unit tests on your code.

Once these are in place, consider using code quality tools, such as Cobertura, to get metrics and enforce them, failing builds that don't meet some minimum criteria.

Integration Test

Integration tests form the first step toward full performance testing. There are many frameworks, depending on how users or clients interface: if it is a web app, you might use Selenium; for a web service, a REST or SOAP client. Generally, a popular framework is a better choice, as it encourages adoption by the rest of your team. Ask yourself: would I rather learn something well-documented, interesting and personally valuable, or wrestle with someone else's hand-rolled vanity project? Regardless, to run integration tests, you'll need to be able to:

- Build your app.
- Deploy it to a test environment.
- Execute the tests.
- Report the results.

Ideally, you should be able to do this at the touch of a button, otherwise you'll be the only person who does it, and you'll lose a lot of the value of your work. As you do this you'll find that:

- You better understand the architecture of your app.
- You know how to create a suitable environment for it.
- You understand the deployment process.
- You can deploy it automatically.

These are key to automating performance testing.

Performance Testing

Unit testing, and to some degree integration testing, have binary outcomes: they pass and everyone's happy, or they fail and there's a bug to be fixed. To a similar degree, the tools are well supported and everyone knows how to use them. Performance testing is a bit more of an art. Ultimately a performance test produces some measures: a series of numbers. But are those numbers good or bad? Do you want to guess? A single metric, standing on its own, can be unenlightening, but you can look at its change relative to previous measurements. You need to (in order):

1. Expose metrics (noting that you may want to introduce new ones and deprecate old ones).
2. Sample the metrics.
3. Run the same test from the same baseline (e.g. by starting with a freshly provisioned server, loading it with data, and warming it up).
4. Report on the results within a tool.

Again, with a single button press. If you deploy your app to one host, where do you run the tests from? What demand might they make of the office network? Do you need multiple hosts and their own LAN? You'll need to expose your metrics first, and there are a few commercial and open source tools for Java, such as JInspired or Metrics; or, indeed, you can roll your own. One feature you might want is exposing the metrics over JMX, which allows sampling. OpenNMS is a network management application that can remotely, periodically sample JMX beans, and it is relatively straightforward to get graphs of those metrics. There are, of course, alternatives.
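As a concrete illustration (my own sketch, not from the original article), here is a minimal metric exposed over JMX using only the JDK; all names are illustrative. The standard MBean convention requires a public interface named after the implementation class with an "MBean" suffix, so these two types would live in their own source files:

```java
// In RequestStatsMBean.java
import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicLong;

import javax.management.MBeanServer;
import javax.management.ObjectName;

// JMX standard MBean interface: the getter appears as a readable attribute.
public interface RequestStatsMBean {
    long getRequestCount();
}

// In RequestStats.java (shown together here for brevity)
class RequestStats implements RequestStatsMBean {

    private final AtomicLong requestCount = new AtomicLong();

    public void markRequest() {
        requestCount.incrementAndGet();
    }

    @Override
    public long getRequestCount() {
        return requestCount.get();
    }

    public static void main(String[] args) throws Exception {
        RequestStats stats = new RequestStats();

        // Register under a well-known name so a remote sampler
        // (OpenNMS, JConsole, ...) can poll the attribute periodically.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(stats, new ObjectName("com.example:type=RequestStats"));

        // Simulate traffic so there is something to sample.
        while (true) {
            stats.markRequest();
            Thread.sleep(100);
        }
    }
}
```

Anything registered this way can be sampled remotely once the JVM is started with the usual com.sun.management.jmxremote options.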
Now, if you automatically deploy and then performance test on each commit, you can have the details displayed on your agile wall, so the team can see when performance changes and any hot spots appear. Best of all, once it's all in place, you don't need to do much to keep it up to date.

Reference: Opinion: Performance Testing from our JCG partner Alex Collins at the Alex Collins blog.

Redis pub/sub using Spring

Continuing to discover the powerful feature set of Redis, one worth mentioning is the out-of-the-box support for pub/sub messaging. Pub/sub messaging is an essential part of many software architectures. Some software systems demand that the messaging solution provide high performance, scalability, queue persistence and durability, fail-over support, transactions, and many more nice-to-have features, which in the Java world almost always leads to using one of the JMS implementation providers. In my previous projects I have actively used Apache ActiveMQ (now moving towards Apache ActiveMQ Apollo). Though it's a great implementation, sometimes I just needed simple queuing support, and Apache ActiveMQ looked overcomplicated for that. Alternatives? Please welcome Redis pub/sub! If you are already using Redis as a key/value store, a few additional lines of configuration will bring pub/sub messaging to your application in no time.

The Spring Data Redis project abstracts the Redis pub/sub API very well and provides a model familiar to everyone who uses Spring's capabilities to integrate with JMS. As always, let's start with the POM configuration file. It's pretty small and simple, and includes the necessary Spring dependencies, Spring Data Redis, and Jedis, a great Java client for Redis.

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.example.spring</groupId>
    <artifactId>redis</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <spring.version>3.1.1.RELEASE</spring.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.data</groupId>
            <artifactId>spring-data-redis</artifactId>
            <version>1.0.1.RELEASE</version>
        </dependency>

        <dependency>
            <groupId>cglib</groupId>
            <artifactId>cglib-nodep</artifactId>
            <version>2.2</version>
        </dependency>

        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.16</version>
        </dependency>

        <dependency>
            <groupId>redis.clients</groupId>
            <artifactId>jedis</artifactId>
            <version>2.0.0</version>
            <type>jar</type>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>${spring.version}</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>${spring.version}</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.3.2</version>
                <configuration>
                    <source>1.6</source>
                    <target>1.6</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
```

Moving on to configuring the Spring context, let's understand what we need in order for a publisher to publish some messages and for a consumer to consume them.
Knowing the respective Spring abstractions for JMS helps a lot here:

- we need a connection factory -> JedisConnectionFactory
- we need a template for the publisher to publish messages -> RedisTemplate
- we need a message listener container for the consumer to consume messages -> RedisMessageListenerContainer

Using Spring Java configuration, let's describe our context:

```java
package com.example.redis.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.listener.ChannelTopic;
import org.springframework.data.redis.listener.RedisMessageListenerContainer;
import org.springframework.data.redis.listener.adapter.MessageListenerAdapter;
import org.springframework.data.redis.serializer.GenericToStringSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;
import org.springframework.scheduling.annotation.EnableScheduling;

import com.example.redis.IRedisPublisher;
import com.example.redis.impl.RedisMessageListener;
import com.example.redis.impl.RedisPublisherImpl;

@Configuration
@EnableScheduling
public class AppConfig {

    @Bean
    JedisConnectionFactory jedisConnectionFactory() {
        return new JedisConnectionFactory();
    }

    @Bean
    RedisTemplate< String, Object > redisTemplate() {
        final RedisTemplate< String, Object > template = new RedisTemplate< String, Object >();
        template.setConnectionFactory( jedisConnectionFactory() );
        template.setKeySerializer( new StringRedisSerializer() );
        template.setHashValueSerializer( new GenericToStringSerializer< Object >( Object.class ) );
        template.setValueSerializer( new GenericToStringSerializer< Object >( Object.class ) );
        return template;
    }

    @Bean
    MessageListenerAdapter messageListener() {
        return new MessageListenerAdapter( new RedisMessageListener() );
    }

    @Bean
    RedisMessageListenerContainer redisContainer() {
        final RedisMessageListenerContainer container = new RedisMessageListenerContainer();
        container.setConnectionFactory( jedisConnectionFactory() );
        container.addMessageListener( messageListener(), topic() );
        return container;
    }

    @Bean
    IRedisPublisher redisPublisher() {
        return new RedisPublisherImpl( redisTemplate(), topic() );
    }

    @Bean
    ChannelTopic topic() {
        return new ChannelTopic( "pubsub:queue" );
    }
}
```

Very easy and straightforward. The @EnableScheduling annotation is not strictly necessary; it is required only for our publisher implementation, which publishes a string message every 100 ms:

```java
package com.example.redis.impl;

import java.util.concurrent.atomic.AtomicLong;

import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.listener.ChannelTopic;
import org.springframework.scheduling.annotation.Scheduled;

import com.example.redis.IRedisPublisher;

public class RedisPublisherImpl implements IRedisPublisher {

    private final RedisTemplate< String, Object > template;
    private final ChannelTopic topic;
    private final AtomicLong counter = new AtomicLong( 0 );

    public RedisPublisherImpl( final RedisTemplate< String, Object > template, final ChannelTopic topic ) {
        this.template = template;
        this.topic = topic;
    }

    @Scheduled( fixedDelay = 100 )
    public void publish() {
        template.convertAndSend( topic.getTopic(),
            "Message " + counter.incrementAndGet() + ", " + Thread.currentThread().getName() );
    }
}
```

And finally our message listener implementation (which just prints the message to the console):
```java
package com.example.redis.impl;

import org.springframework.data.redis.connection.Message;
import org.springframework.data.redis.connection.MessageListener;

public class RedisMessageListener implements MessageListener {
    @Override
    public void onMessage( final Message message, final byte[] pattern ) {
        System.out.println( "Message received: " + message.toString() );
    }
}
```

Awesome: just two small classes and one configuration to wire things together, and we have full pub/sub messaging support in our application! Let's run the application standalone...

```java
package com.example.redis;

import org.springframework.context.annotation.AnnotationConfigApplicationContext;

import com.example.redis.config.AppConfig;

public class RedisPubSubStarter {
    public static void main(String[] args) {
        new AnnotationConfigApplicationContext( AppConfig.class );
    }
}
```

...and see the following output in the console:

```
...
Message received: Message 1, pool-1-thread-1
Message received: Message 2, pool-1-thread-1
Message received: Message 3, pool-1-thread-1
Message received: Message 4, pool-1-thread-1
Message received: Message 5, pool-1-thread-1
Message received: Message 6, pool-1-thread-1
Message received: Message 7, pool-1-thread-1
Message received: Message 8, pool-1-thread-1
Message received: Message 9, pool-1-thread-1
Message received: Message 10, pool-1-thread-1
Message received: Message 11, pool-1-thread-1
Message received: Message 12, pool-1-thread-1
Message received: Message 13, pool-1-thread-1
Message received: Message 14, pool-1-thread-1
Message received: Message 15, pool-1-thread-1
Message received: Message 16, pool-1-thread-1
...
```

Great! There is much more you can do with Redis pub/sub; excellent documentation is available on the official Redis web site.

Reference: Redis pub/sub using Spring from our JCG partner Andrey Redko at the Andriy Redko {devmind} blog.

RateLimiter – discovering Google Guava

The RateLimiter class was recently added to the Guava libraries (since 13.0) and it is already among my favourite tools. Have a look at what the JavaDoc says, then let's start with a simple example: say we have a long-running process that needs to broadcast its progress to a supplied listener:

```scala
def longRunning(listener: Listener) {
  var processed = 0
  for (item <- items) {
    //..do work...
    processed += 1
    listener.progressChanged(100.0 * processed / items.size)
  }
}

trait Listener {
  def progressChanged(percentProgress: Double)
}
```

Please forgive me the imperative style of this Scala code, but that's not the point. The problem I want to highlight becomes obvious once we start our application with some concrete listener:

```scala
class ConsoleListener extends Listener {
  def progressChanged(percentProgress: Double) {
    println("Progress: " + percentProgress)
  }
}

longRunning(new ConsoleListener)
```

Imagine that the longRunning() method processes millions of items, but each iteration takes just a split second. The amount of logging messages is just insane, not to mention that the console output probably takes much more time than the processing itself. You've probably faced such a problem several times and have a simple workaround:

```scala
if (processed % 100 == 0) {
  listener.progressChanged(100.0 * processed / items.size)
}
```

There, I Fixed It! We only print progress every 100th iteration. However, this approach has several drawbacks:

- the code is polluted with unrelated logic
- there is no guarantee that every 100th iteration is slow enough...
- ...or maybe it's still too slow?

What we really want to achieve is to limit the frequency of progress updates (say, two times per second). OK, going deeper into the rabbit hole:

```scala
def longRunning(listener: Listener) {
  var processed = 0
  var lastUpdateTimestamp = 0L
  for (item <- items) {
    //..do work...
    processed += 1
    if (System.currentTimeMillis() - lastUpdateTimestamp > 500) {
      listener.progressChanged(100.0 * processed / items.size)
      lastUpdateTimestamp = System.currentTimeMillis()
    }
  }
}
```

Do you also have a feeling that we are going in the wrong direction? Ladies and gentlemen, I give you RateLimiter:

```scala
var processed = 0
val limiter = RateLimiter.create(2)
for (item <- items) {
  //..do work...
  processed += 1
  if (limiter.tryAcquire()) {
    listener.progressChanged(100.0 * processed / items.size)
  }
}
```

Getting better? If the API is not clear: we first create a RateLimiter with 2 permits per second. This means we can acquire up to two permits during one second, and if we try to do it more often, tryAcquire() will return false (or the thread will block if acquire() is used instead, see note 1). So the code above guarantees that the listener won't be called more than two times per second.

As a bonus, if you want to completely remove the unrelated throttling code from the business logic, the decorator pattern comes to the rescue. First let's create a listener that wraps another (concrete) listener and delegates to it only at a given rate:

```scala
class RateLimitedListener(target: Listener) extends Listener {

  val limiter = RateLimiter.create(2)

  def progressChanged(percentProgress: Double) {
    if (limiter.tryAcquire()) {
      target.progressChanged(percentProgress)
    }
  }
}
```

What's best about the decorator pattern is that both the code using the listener and the concrete implementation are not aware of the decorator. Also the client code became much simpler (essentially we came back to the original):
```scala
def longRunning(listener: Listener) {
  var processed = 0
  for (item <- items) {
    //..do work...
    processed += 1
    listener.progressChanged(100.0 * processed / items.size)
  }
}

longRunning(new RateLimitedListener(new ConsoleListener))
```

But we've only scratched the surface of where RateLimiter can be used! Say we want to avoid the aforementioned denial-of-service attacks or slow down automated clients of our API. It's very simple with RateLimiter and a servlet filter:

```scala
@WebFilter(urlPatterns = Array("/*"))
class RateLimiterFilter extends Filter {

  val limiter = RateLimiter.create(100)

  def init(filterConfig: FilterConfig) {}

  def doFilter(request: ServletRequest, response: ServletResponse, chain: FilterChain) {
    if (limiter.tryAcquire()) {
      chain.doFilter(request, response)
    } else {
      response.asInstanceOf[HttpServletResponse].sendError(SC_TOO_MANY_REQUESTS)
    }
  }

  def destroy() {}
}
```

Another self-descriptive sample. This time we limit our API to handle no more than 100 requests per second (of course RateLimiter is thread safe). All HTTP requests that come through our filter are subject to rate limiting. If we cannot handle an incoming request, we send the HTTP 429 – Too Many Requests error code (not yet available in the servlet spec). Alternatively, you may wish to block the client for a while instead of eagerly rejecting it. That's fairly straightforward as well:

```scala
def doFilter(request: ServletRequest, response: ServletResponse, chain: FilterChain) {
  limiter.acquire()
  chain.doFilter(request, response)
}
```

limiter.acquire() will block as long as needed to keep the desired limit of 100 requests per second. Yet another alternative is to use tryAcquire() with a timeout (blocking up to a given amount of time). The blocking approach is better if you want to avoid sending errors to the client. However, under high load it's easy to imagine almost all HTTP threads blocked waiting for the RateLimiter, eventually causing the servlet container to reject connections, so the dropping of clients can be only partially avoided. This filter is a good starting point for building more sophisticated solutions; a map of rate limiters keyed by IP address or user name is a good example.
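Here is a quick sketch of that per-client idea (my own illustration, in Java rather than the Scala used in this post; the names are made up): one RateLimiter per client, created lazily on first use and shared by all requests from that client.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import com.google.common.util.concurrent.RateLimiter;

// Illustrative sketch: a registry of per-client rate limiters.
public class PerClientLimiters {

    private final ConcurrentMap<String, RateLimiter> limiters =
            new ConcurrentHashMap<String, RateLimiter>();

    // Returns the limiter for the given client key, e.g. request.getRemoteAddr().
    public RateLimiter forClient(String clientKey) {
        RateLimiter limiter = limiters.get(clientKey);
        if (limiter == null) {
            // 10 permits per second per client; an arbitrary example value.
            RateLimiter candidate = RateLimiter.create(10);
            limiter = limiters.putIfAbsent(clientKey, candidate);
            if (limiter == null) {
                limiter = candidate; // we won the race to register it
            }
        }
        return limiter;
    }
}
```

A filter would then call limiters.forClient(request.getRemoteAddr()).tryAcquire() instead of consulting a single global limiter.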
What we haven't covered yet is acquiring more than one permit at a time. It turns out RateLimiter can also be used, for example, to limit network bandwidth or the amount of data being sent and received. Imagine you create a search servlet and you want to impose a limit of no more than 1000 results returned per second. In each request the user decides how many results she wants to receive per response: it can be 500 requests each containing 2 results, or 1 huge request asking for 1000 results at once, but never more than 1000 results within a second on average. Users are free to use their quota as they wish (note: the original post annotates this servlet with @WebFilter, but since it extends HttpServlet, @WebServlet is the annotation that fits):

```scala
@WebServlet(urlPatterns = Array("/search"))
class SearchServlet extends HttpServlet {

  val limiter = RateLimiter.create(1000)

  override def doGet(req: HttpServletRequest, resp: HttpServletResponse) {
    val resultsCount = req.getParameter("results").toInt
    limiter.acquire(resultsCount)
    //process and return results...
  }
}
```

By default we acquire() one permit per invocation. A non-blocking servlet would call limiter.tryAcquire(resultsCount) and check the result, as you know by now. If you are interested in rate limiting of network traffic, don't forget to check out my Tenfold increase in server throughput with Servlet 3.0 asynchronous processing. RateLimiter, due to its blocking nature, is not very well suited for writing scalable upload/download servers with throttling.

The last example I would like to share with you is throttling client code to avoid overloading the server we are talking to. Imagine a batch import/export process that calls some server thousands of times, exchanging data. If we don't throttle the client and there is no rate limiting on the server side, the server might get overloaded and crash. RateLimiter is once again very helpful:

```scala
val limiter = RateLimiter.create(20)

def longRunning() {
  for (item <- items) {
    limiter.acquire()
    server.sync(item)
  }
}
```

This sample is very similar to the first one. The difference is that this time we block instead of discarding missing permits. Thanks to blocking, the external call to server.sync(item) won't overload the third-party server, calling it at most 20 times per second. Of course, if you have several threads interacting with the server, they can all share the same RateLimiter.

To wrap up:

- RateLimiter allows you to perform certain actions no more often than at a given frequency
- It's a small and lightweight class (no threads involved!)
- You can create thousands of rate limiters (per client?) or share one among several threads
- We haven't covered the warm-up functionality: if a RateLimiter was completely idle for a long time, it will gradually increase the allowed frequency over a configured time, up to the configured maximum value, instead of allowing the maximum frequency from the very beginning

I have a feeling that we'll go back to this class soon. I hope you'll find it useful in your next project!

1 – I am using Guava 14.0-SNAPSHOT. If 14.0 stable is not available by the time you are reading this, you must use the more verbose tryAcquire(1, 0, TimeUnit.MICROSECONDS) instead of tryAcquire() and acquire(1) instead of acquire().

Happy coding!

Reference: RateLimiter – discovering Google Guava from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog.

JavaOne 2012: The Road to Lambda

One of the presentations I most eagerly anticipated for JavaOne 2012 was Brian Goetz's "The Road to Lambda." The taste of Lambda at last night's Technical Keynote only added to the anticipation. Held in the Hilton Plaza A/B, this was a short walk from the previous presentation that I attended in Golden Gate A/B/C. I had expected the relatively large Plaza A/B to be packed (standing-room only), but there were far more empty seats than I had expected.

Goetz started by talking about Java 8 being "in the home stretch," but not yet released or ready for delivery. He said he expects Java 8 and Lambda to be available about this time next year. Goetz said that you "can write any program that's worth it to write using Java," but that Java 8 will make it much easier to do so. His slide "Modernizing Java" talked about Java SE 8 "modernizing the Java language" and "modernizing the Java libraries." His last bullet on the slide stated, "Together, perhaps the biggest upgrade ever to the Java programming model." This has been my feeling too, and that's part of why I was surprised this presentation was not better attended.

Goetz stated that a lambda expression is "an anonymous method." It has everything that a method has (argument list, return type, and body) except for the name. It allows you to "treat code as data." A method reference references an existing method. Goetz reiterated the huge fundamental shift in writing and using libraries that will result from the addition of lambda expressions.

Goetz pointed out that most languages did not have closures when Java started in 1995, but that most languages other than Java do have closures today. He then summarized some of the history of closures in Java in the slide titled "Closures for Java – a long and winding road." He referenced Odersky's and Wadler's "Pizza" (1997), Java 1.1's inner classes (1997), and the 2006-2008 "vigorous community debate about closures" (including BGGA and CICE). Project Lambda was formed in December 2009 and the associated JSR 335 was filed in November 2010. It is "fairly close to completion" today.

Goetz stated that the for loop is "over-specified for today's hardware" while describing the "accidental complexity" associated with the "external iteration" that we frequently use today. I agreed with the point he made that the "foreach loop hides complex interaction between client and library." The goal of lambda expressions is to allow the "how" to be moved from the client to the library. Goetz emphasized that this is more than a syntactic change, because with lambda expressions the library is in control: it is internal iteration. Goetz stated, "The client handles the 'what' and the library handles the 'how' and that is a good thing." He added that lambda expressions have a profound effect on how we code and especially on how we develop libraries.

Goetz discussed the new forEach(Block) method added to collections using the new default implementation mechanism for Java interfaces. Goetz differentiated that Java has always had multiple inheritance of types (a class can implement multiple interfaces), is now (in Java 8) going to have multiple inheritance of behavior (a default method implementation available on interfaces), but still won't have multiple inheritance of state (the last of which he described as the most dangerous). Goetz had a slide dedicated to explaining why "diamonds are easy" when you take data (state) out of multiple inheritance.
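To make that concrete, here is a small sketch of a default method (my own example, written with the syntax as it eventually shipped in Java 8; the talk itself predates the final release):

```java
// Multiple inheritance of behavior, but not of state: the interface
// carries a method body, yet it cannot declare instance fields.
interface Greeter {
    String name();

    default String greet() {
        return "Hello, " + name();
    }
}

public class DefaultMethodDemo implements Greeter {
    @Override
    public String name() {
        return "JavaOne";
    }

    public static void main(String[] args) {
        System.out.println(new DefaultMethodDemo().greet()); // Hello, JavaOne
    }
}
```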
Goetz had a nice slide summarizing "Default Methods – Inheritance Rules." This slide featured three rules. He pointed out that "if a default cannot be resolved via the rules, the subclass must implement it." Goetz noted that an interface could provide a "weak" default implementation and subclasses can provide better implementations. Another advantage of default methods on interfaces is that the default implementation can throw an exception (such as UnsupportedOperationException) for optional methods, so that subclasses not implementing the optional behavior don't need to do anything else. Goetz also showed how lambda expressions enable the addition of reverse() and compose() methods to Comparator.

Goetz showed several code examples illustrating that lambda expressions allow for "cleaner" and "more natural" representations. In his words, "the code reads like the problem statement" thanks to the composability of the lambda expression-powered operations. There is also "no mutable state in the client." One of Goetz's slides had a quote I plan to use in the future: "Laziness can be more efficient." The context is that laziness can be more efficient if you're not going to use all of the results, because you can stop looking once a match is found. Stream operations are either intermediate (lazy) or terminal (naturally eager). The Stream is an abstraction introduced to allow for the addition of bulk operations and "represents a stream of values." Goetz's bullet cautioned that a Stream is "not a data structure" and "doesn't store the values." The aim here was to avoid noise in setting things up and to be more "fluent."

Goetz stated that "one of Java's friends has always been libraries." He talked about how lambda expressions enable greater parallelism in the Java libraries. Goetz stated that fork-join is powerful but not necessarily easy to use, and emphasized that "Writing serial code is easy; writing parallel code is a pain in the ass." Lambda expressions will still require parallelism to be explicit, but it should be unobtrusive thanks to lambda expressions and their impact on the libraries. To emphasize Project Lambda's effect on parallelism in the libraries, Goetz showed a painful slide with how a parallel sum over collections would be done today with fork-join, and then another slide showing the much simpler use of lambda expressions. The point was made: much less code with lambda expressions, making the business logic a much larger percentage of the overall code. Goetz introduced Spliterator as the "parallel analogue of Iterator." The Spliterator's prescribed behaviors are available to any object that knows how to split itself (Spliterable).

The slide "Lambdas Enable Better APIs" drove home the powerful and welcome effect of lambda expressions on the standard Java APIs. He emphasized that the "key effect on APIs is more composability." Goetz stated that we typically prefer evolving the programming model via libraries rather than language syntax for numerous reasons, such as less cost and less risk. He summarized his presentation by stating that times have changed and it is no longer a radical idea for Java to support closures.
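The internal-iteration and stream ideas above can likewise be sketched in a few lines (again my own example in the Java 8 syntax that later shipped, not code from the talk):

```java
import java.util.Arrays;
import java.util.List;

public class IterationDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Brian", "Dustin", "Mark");

        // External iteration: the client dictates the "how".
        for (String name : names) {
            System.out.println(name);
        }

        // Internal iteration: the library controls the loop;
        // the lambda supplies only the "what".
        names.forEach(name -> System.out.println(name));

        // filter() is a lazy intermediate operation; count() is an
        // eager terminal one that triggers the actual work.
        long count = names.stream()
                          .filter(name -> name.startsWith("D"))
                          .count();
        System.out.println(count); // 1
    }
}
```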
One of the attendees asked why lambda expression method support is on the collections rather than on iterators. Goetz said that although C# did approach it from the iterator side, his team found it less confusing for developers to have the methods on the collections instead of on the iterators. In response to another question, Goetz stated that reflection on lambda expressions is not yet available due to its complexity. In response to yet another question, Goetz stated that lambda expression support is built with invokedynamic and method handles, part of an effort to make lambda expressions "fun to program" and "fast." Another question led to a really interesting response from Goetz, in which he explained that the availability of internal iteration within a collection itself means the iteration complexity will be encountered by far fewer people (library developers rather than end-user developers). Goetz encouraged attendees to run the Java 8 drops currently available to help determine if lambda expressions are being handled correctly. Goetz remarked, "The most valuable contribution we get from the community are people who say, 'I tried it out and found this bug.'"

Goetz started this presentation by stating that it was one in a long line of presentations at previous JavaOne conferences and other conferences on the state of Lambda. What was different about this one, however, is that Project Lambda is "almost there" and, with that in mind, it seems that the syntax and concepts are largely in place. This obvious solidification of the APIs and syntax is welcome, and this presentation met my very high expectations.

Reference: JavaOne 2012: The Road to Lambda from our JCG partner Dustin Marx at the Inspired by Actual Events blog.

Enhancing Spring Test Framework with beforeClass and afterClass setup

How to allow instance methods to run as JUnit BeforeClass behavior

JUnit lets you set up class-level fixtures once before and once after all test method invocations. However, by design, it restricts this to static methods via the @BeforeClass and @AfterClass annotations. For example, this simple demo shows the typical JUnit setup:

```java
package deng.junitdemo;

import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class DemoTest {

    @Test
    public void testOne() {
        System.out.println("Normal test method #1.");
    }

    @Test
    public void testTwo() {
        System.out.println("Normal test method #2.");
    }

    @BeforeClass
    public static void beforeClassSetup() {
        System.out.println("A static method setup before class.");
    }

    @AfterClass
    public static void afterClassSetup() {
        System.out.println("A static method setup after class.");
    }
}
```

The above should produce the following output:

```
A static method setup before class.
Normal test method #1.
Normal test method #2.
A static method setup after class.
```

This usage is fine most of the time, but there are times when you want to use non-static methods to set up the test. I will show you a more detailed use case later, but for now let's see how we can solve this naughty problem with JUnit first. We can solve it by making the test implement a listener that provides the before and after callbacks, and then digging into JUnit to detect this listener and invoke our methods. This is the solution I came up with:

```java
package deng.junitdemo;

import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(InstanceTestClassRunner.class)
public class Demo2Test implements InstanceTestClassListener {

    @Test
    public void testOne() {
        System.out.println("Normal test method #1");
    }

    @Test
    public void testTwo() {
        System.out.println("Normal test method #2");
    }

    @Override
    public void beforeClassSetup() {
        System.out.println("An instance method setup before class.");
    }

    @Override
    public void afterClassSetup() {
        System.out.println("An instance method setup after class.");
    }
}
```

As stated above, our listener is a simple contract:

```java
package deng.junitdemo;

public interface InstanceTestClassListener {
    void beforeClassSetup();
    void afterClassSetup();
}
```

Our next task is to provide the JUnit runner implementation that will trigger the setup methods:

```java
package deng.junitdemo;

import org.junit.runner.notification.RunNotifier;
import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.InitializationError;

public class InstanceTestClassRunner extends BlockJUnit4ClassRunner {

    private InstanceTestClassListener instanceSetupListener;

    public InstanceTestClassRunner(Class<?> klass) throws InitializationError {
        super(klass);
    }

    @Override
    protected Object createTest() throws Exception {
        Object test = super.createTest();
        // Note that JUnit4 will call this createTest() multiple times, once for
        // each test method, so we need to call 'beforeClassSetup' only once.
        if (test instanceof InstanceTestClassListener && instanceSetupListener == null) {
            instanceSetupListener = (InstanceTestClassListener) test;
            instanceSetupListener.beforeClassSetup();
        }
        return test;
    }

    @Override
    public void run(RunNotifier notifier) {
        super.run(notifier);
        if (instanceSetupListener != null) {
            instanceSetupListener.afterClassSetup();
        }
    }
}
```

Now we are in business. If we run the above test, it should give us a similar result, but this time we are using instance methods instead!
```
An instance method setup before class.
Normal test method #1
Normal test method #2
An instance method setup after class.
```

A concrete use case: working with the Spring Test Framework

Now let me show you a real use case for the above. If you use the Spring Test Framework, you would normally set up a test like this so that you can have test fixtures injected as member instances:

```java
package deng.junitdemo.spring;

import static org.hamcrest.Matchers.is;
import static org.junit.Assert.assertThat;

import java.util.List;

import javax.annotation.Resource;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration
public class SpringDemoTest {

    @Resource(name = "myList")
    private List<String> myList;

    @Test
    public void testMyListInjection() {
        assertThat(myList.size(), is(2));
    }
}
```

You also need a Spring XML file under the same package for the above to run:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean id="myList" class="java.util.ArrayList">
        <constructor-arg>
            <list>
                <value>one</value>
                <value>two</value>
            </list>
        </constructor-arg>
    </bean>
</beans>
```

Pay very close attention to the member instance List<String> myList. When running the JUnit test, that field will be injected by Spring, and it can be used in any test method. However, if you ever want a one-time setup of some code that needs a reference to a Spring-injected field, then you are out of luck: JUnit's @BeforeClass forces your method to be static, and if you make your field static, Spring injection won't work in your test!

Now, if you are a frequent Spring user, you should know that the Spring Test Framework already provides a way to handle this type of use case.
Here is a way to do class-level setup in Spring's style:

```java
package deng.junitdemo.spring;

import static org.hamcrest.Matchers.is;
import static org.junit.Assert.assertThat;

import java.util.List;

import javax.annotation.Resource;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.TestContext;
import org.springframework.test.context.TestExecutionListeners;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.support.AbstractTestExecutionListener;
import org.springframework.test.context.support.DependencyInjectionTestExecutionListener;

@RunWith(SpringJUnit4ClassRunner.class)
@TestExecutionListeners(listeners = {
        DependencyInjectionTestExecutionListener.class,
        SpringDemo2Test.class })
@ContextConfiguration
public class SpringDemo2Test extends AbstractTestExecutionListener {

    @Resource(name = "myList")
    private List<String> myList;

    @Test
    public void testMyListInjection() {
        assertThat(myList.size(), is(2));
    }

    @Override
    public void beforeTestClass(TestContext testContext) {
        List<?> list = testContext.getApplicationContext().getBean("myList", List.class);
        assertThat((String) list.get(1), is("two"));
    }

    @Override
    public void afterTestClass(TestContext testContext) {
        List<?> list = testContext.getApplicationContext().getBean("myList", List.class);
        assertThat((String) list.get(0), is("one"));
    }
}
```

As you can see, Spring offers the @TestExecutionListeners annotation that allows you to register any listener, and in it you have a reference to the TestContext, whose ApplicationContext lets you get to the injected bean. This works, but I find it inelegant: it forces you to look the bean up, even though the injected field is already available on the test instance, and you can't use that field unless you go through the TestContext parameter.

Now, if we mix in the solution from the beginning of this post, we get a prettier test setup. Let's see it:

```java
package deng.junitdemo.spring;

import static org.hamcrest.Matchers.is;
import static org.junit.Assert.assertThat;

import java.util.List;

import javax.annotation.Resource;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;

import deng.junitdemo.InstanceTestClassListener;

@RunWith(SpringInstanceTestClassRunner.class)
@ContextConfiguration
public class SpringDemo3Test implements InstanceTestClassListener {

    @Resource(name = "myList")
    private List<String> myList;

    @Test
    public void testMyListInjection() {
        assertThat(myList.size(), is(2));
    }

    @Override
    public void beforeClassSetup() {
        assertThat((String) myList.get(0), is("one"));
    }

    @Override
    public void afterClassSetup() {
        assertThat((String) myList.get(1), is("two"));
    }
}
```

JUnit only allows you to use a single runner, so we must extend Spring's version to insert what we did before:
```java
package deng.junitdemo.spring;

import org.junit.runner.notification.RunNotifier;
import org.junit.runners.model.InitializationError;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

import deng.junitdemo.InstanceTestClassListener;

public class SpringInstanceTestClassRunner extends SpringJUnit4ClassRunner {

    private InstanceTestClassListener instanceSetupListener;

    public SpringInstanceTestClassRunner(Class<?> clazz) throws InitializationError {
        super(clazz);
    }

    @Override
    protected Object createTest() throws Exception {
        Object test = super.createTest();
        // Note that JUnit4 will call this createTest() multiple times, once for
        // each test method, so we need to call 'beforeClassSetup' only once.
        if (test instanceof InstanceTestClassListener && instanceSetupListener == null) {
            instanceSetupListener = (InstanceTestClassListener) test;
            instanceSetupListener.beforeClassSetup();
        }
        return test;
    }

    @Override
    public void run(RunNotifier notifier) {
        super.run(notifier);
        if (instanceSetupListener != null) {
            instanceSetupListener.afterClassSetup();
        }
    }
}
```

That should do the trick. Running the test gives us this output:

```
12:58:48 main INFO org.springframework.test.context.support.AbstractContextLoader:139 | Detected default resource location 'classpath:/deng/junitdemo/spring/SpringDemo3Test-context.xml' for test class [deng.junitdemo.spring.SpringDemo3Test].
12:58:48 main INFO org.springframework.test.context.support.DelegatingSmartContextLoader:148 | GenericXmlContextLoader detected default locations for context configuration [ContextConfigurationAttributes@74b23210 declaringClass = 'deng.junitdemo.spring.SpringDemo3Test', locations = '{classpath:/deng/junitdemo/spring/SpringDemo3Test-context.xml}', classes = '{}', inheritLocations = true, contextLoaderClass = 'org.springframework.test.context.ContextLoader'].
12:58:48 main INFO org.springframework.test.context.support.AnnotationConfigContextLoader:150 | Could not detect default configuration classes for test class [deng.junitdemo.spring.SpringDemo3Test]: SpringDemo3Test does not declare any static, non-private, non-final, inner classes annotated with @Configuration.
12:58:48 main INFO org.springframework.test.context.TestContextManager:185 | @TestExecutionListeners is not present for class [class deng.junitdemo.spring.SpringDemo3Test]: using defaults.
12:58:48 main INFO org.springframework.beans.factory.xml.XmlBeanDefinitionReader:315 | Loading XML bean definitions from class path resource [deng/junitdemo/spring/SpringDemo3Test-context.xml]
12:58:48 main INFO org.springframework.context.support.GenericApplicationContext:500 | Refreshing org.springframework.context.support.GenericApplicationContext@44c9d92c: startup date [Sat Sep 29 12:58:48 EDT 2012]; root of context hierarchy
12:58:49 main INFO org.springframework.beans.factory.support.DefaultListableBeanFactory:581 | Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@73c6641: defining beans [myList,org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,org.springframework.context.annotation.ConfigurationClassPostProcessor$ImportAwareBeanPostProcessor#0]; root of factory hierarchy
12:58:49 Thread-1 INFO org.springframework.context.support.GenericApplicationContext:1025 | Closing org.springframework.context.support.GenericApplicationContext@44c9d92c: startup date [Sat Sep 29 12:58:48 EDT 2012]; root of context hierarchy
12:58:49 Thread-1 INFO org.springframework.beans.factory.support.DefaultListableBeanFactory:433 | Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@73c6641: defining beans [myList,org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,org.springframework.context.annotation.ConfigurationClassPostProcessor$ImportAwareBeanPostProcessor#0]; root of factory hierarchy
```

Obviously the output shows nothing interesting here, but the test should run with all assertions passing. The point is that we now have a more elegant way to invoke before and after class-level test setup, and the setup methods can be instance methods, which allows Spring injection.

Download the demo code

You may get the above demo code as a working Maven project from my sandbox.

Reference: Enhancing Spring Test Framework with beforeClass and afterClass setup from our JCG partner Zemian Deng at the A Programmer's Journal blog.

Spring 3.1: Caching and EhCache

If you look around the web for examples of using Spring 3.1's built-in caching, you'll usually bump into Spring's SimpleCacheManager, which the guys at Spring say is "Useful for testing or simple caching declarations." I actually prefer to think of SimpleCacheManager as lightweight rather than simple: useful in those situations where you want a small, in-memory cache on a per-JVM basis. If the guys at Spring were running a supermarket, SimpleCacheManager would be in their own-brand "basics" product range. If, on the other hand, you need a heavy-duty cache, one that's scalable, persistent and distributed, then Spring also comes with a built-in EhCache wrapper. The good news is that swapping between Spring's caching implementations is easy. In theory it's all a matter of configuration and, to prove the theory correct, I took the sample code from my Caching and @Cacheable blog and ran it using an EhCache implementation.
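That sample code isn't reproduced in this post; as a minimal sketch, a cached bean of the kind it uses might look like this (the class and method names here are illustrative, not the original blog's):

```java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

// Illustrative service: results land in the "employee" cache declared
// in ehcache.xml; a second call with the same id is served from the
// cache and skips the method body entirely.
@Service
public class EmployeeService {

    public static class Employee {
        public final String id;

        public Employee(String id) {
            this.id = id;
        }
    }

    @Cacheable("employee")
    public Employee findEmployee(String id) {
        // Stand-in for a slow database or remote call.
        return new Employee(id);
    }
}
```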
The configuration steps are similar to those described in my last blog, Caching and Config, in that you still need to specify:

```xml
<cache:annotation-driven />
```

...in your Spring config file to switch caching on. You also need to define a bean with an id of cacheManager, only this time you reference Spring's EhCacheCacheManager class instead of SimpleCacheManager:

```xml
<bean id="cacheManager" class="org.springframework.cache.ehcache.EhCacheCacheManager"
    p:cacheManager-ref="ehcache"/>
```

The example above demonstrates an EhCacheCacheManager configuration. Notice that it references a second bean with an id of 'ehcache'. This is configured as follows:

```xml
<bean id="ehcache" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean"
    p:configLocation="ehcache.xml" p:shared="true"/>
```

'ehcache' has two properties: configLocation and shared. 'configLocation' is an optional attribute used to specify the location of an ehcache configuration file. In my test code I used the following example file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:noNamespaceSchemaLocation="http://ehcache.org/ehcache.xsd">
    <defaultCache eternal="true" maxElementsInMemory="100" overflowToDisk="false" />
    <cache name="employee" maxElementsInMemory="10000" eternal="true" overflowToDisk="false" />
</ehcache>
```

...which creates two caches: a default cache and one named "employee". If this file is missing, the EhCacheManagerFactoryBean simply picks up the default ehcache config file, ehcache-failsafe.xml, which is located in ehcache's ehcache-core jar file.

The other EhCacheManagerFactoryBean attribute is 'shared'. This is supposed to be optional, as the documentation states that it defines "whether the EhCache CacheManager should be shared (as a singleton at the VM level) or independent (typically local within the application). Default is 'false', creating an independent instance." However, if this is set to false, you'll get the following exception when you try to run a bunch of unit tests:

```
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.cache.interceptor.CacheInterceptor#0': Cannot resolve reference to bean 'cacheManager' while setting bean property 'cacheManager'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'cacheManager' defined in class path resource [ehcache-example.xml]: Cannot resolve reference to bean 'ehcache' while setting bean property 'cacheManager'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'ehcache' defined in class path resource [ehcache-example.xml]: Invocation of init method failed; nested exception is net.sf.ehcache.CacheException: Another unnamed CacheManager already exists in the same VM. Please provide unique names for each CacheManager in the config or do one of following:
1. Use one of the CacheManager.create() static factory methods to reuse same CacheManager with same name or create one if necessary
2. Shutdown the earlier cacheManager before creating new one with same name.
The source of the existing CacheManager is: InputStreamConfigurationSource [stream=java.io.BufferedInputStream@424c414]
	at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:328)
	at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:106)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1360)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1118)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:517)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
	... stack trace shortened for clarity
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'cacheManager' defined in class path resource [ehcache-example.xml]: Cannot resolve reference to bean 'ehcache' while setting bean property 'cacheManager'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'ehcache' defined in class path resource [ehcache-example.xml]: Invocation of init method failed; nested exception is net.sf.ehcache.CacheException: Another unnamed CacheManager already exists in the same VM. Please provide unique names for each CacheManager in the config or do one of following:
1. Use one of the CacheManager.create() static factory methods to reuse same CacheManager with same name or create one if necessary
2. Shutdown the earlier cacheManager before creating new one with same name.
The source of the existing CacheManager is: InputStreamConfigurationSource [stream=java.io.BufferedInputStream@424c414] at org.springframework.beans.factory.support.BeanDefinitionValueResolver. resolveReference(BeanDefinitionValueResolver.java:328) at org.springframework.beans.factory.support.BeanDefinitionValueResolver. resolveValueIfNecessary(BeanDefinitionValueResolver.java:106) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory. applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1360) ... stack trace shortened for clarity at org.springframework.beans.factory.support.AbstractBeanFactory. getBean(AbstractBeanFactory.java:193) at org.springframework.beans.factory.support.BeanDefinitionValueResolver. resolveReference(BeanDefinitionValueResolver.java:322) ... 38 more Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'ehcache' defined in class path resource [ehcache-example.xml]: Invocation of init method failed; nested exception is net.sf.ehcache.CacheException: Another unnamed CacheManager already exists in the same VM. Please provide unique names for each CacheManager in the config or do one of following: 1. Use one of the CacheManager.create() static factory methods to reuse same CacheManager with same name or create one if necessary 2. Shutdown the earlier cacheManager before creating new one with same name. The source of the existing CacheManager is: InputStreamConfigurationSource [stream=java.io.BufferedInputStream@424c414] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory. initializeBean(AbstractAutowireCapableBeanFactory.java:1455) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory. doCreateBean(AbstractAutowireCapableBeanFactory.java:519) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory. createBean(AbstractAutowireCapableBeanFactory.java:456) at org.springframework.beans.factory.support.AbstractBeanFactory$1 .getObject(AbstractBeanFactory.java:294) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry. getSingleton(DefaultSingletonBeanRegistry.java:225) at org.springframework.beans.factory.support.AbstractBeanFactory. doGetBean(AbstractBeanFactory.java:291) at org.springframework.beans.factory.support.AbstractBeanFactory. getBean(AbstractBeanFactory.java:193) at org.springframework.beans.factory.support.BeanDefinitionValueResolver. resolveReference(BeanDefinitionValueResolver.java:322) ... 48 more Caused by: net.sf.ehcache.CacheException: Another unnamed CacheManager already exists in the same VM. Please provide unique names for each CacheManager in the config or do one of following: 1. Use one of the CacheManager.create() static factory methods to reuse same CacheManager with same name or create one if necessary 2. Shutdown the earlier cacheManager before creating new one with same name. The source of the existing CacheManager is: InputStreamConfigurationSource [stream=java.io.BufferedInputStream@424c414] at net.sf.ehcache.CacheManager.assertNoCacheManagerExistsWithSameName(CacheManager. java:521) at net.sf.ehcache.CacheManager.init(CacheManager.java:371) at net.sf.ehcache.CacheManager. (CacheManager.java:339) at org.springframework.cache.ehcache.EhCacheManagerFactoryBean. afterPropertiesSet(EhCacheManagerFactoryBean.java:104) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory. 
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1452)
    ... 55 more

I think that this comes down to a simple bug in Spring's EhCacheManagerFactoryBean: it's trying to create multiple CacheManager instances using new CacheManager() rather than using, as the exception states, 'one of the CacheManager.create() static factory methods', which would allow it to reuse the same CacheManager with the same name. Hence, my first JUnit test works okay, but all the others fail. The offending line of code is:

this.cacheManager = (this.shared ? CacheManager.create() : new CacheManager());

My full XML config file is listed below for completeness:

<?xml version='1.0' encoding='UTF-8'?>
<beans xmlns='http://www.springframework.org/schema/beans'
       xmlns:p='http://www.springframework.org/schema/p'
       xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
       xmlns:cache='http://www.springframework.org/schema/cache'
       xmlns:context='http://www.springframework.org/schema/context'
       xsi:schemaLocation='http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context-3.1.xsd
           http://www.springframework.org/schema/cache
           http://www.springframework.org/schema/cache/spring-cache.xsd'>

  <!-- Switch on the caching -->
  <cache:annotation-driven />

  <!-- Do the component scan -->
  <context:component-scan base-package='caching' />

  <bean id='cacheManager' class='org.springframework.cache.ehcache.EhCacheCacheManager' p:cacheManager-ref='ehcache'/>

  <bean id='ehcache' class='org.springframework.cache.ehcache.EhCacheManagerFactoryBean' p:configLocation='ehcache.xml' p:shared='true'/>
</beans>

In using Ehcache, the only other configuration details to consider are the Maven dependencies. These are pretty straightforward, as the Guys at Ehcache have combined all the various ehcache JARs into one Maven POM module. This POM module can be added to your project's POM file using the XML below:

<dependency>
  <groupId>net.sf.ehcache</groupId>
  <artifactId>ehcache</artifactId>
  <version>2.6.0</version>
  <type>pom</type>
  <scope>test</scope>
</dependency>

Finally, the ehcache JAR files are available from both the Maven Central and Sourceforge repositories:

<repositories>
  <repository>
    <id>sourceforge</id>
    <url>http://oss.sonatype.org/content/groups/sourceforge/</url>
    <releases>
      <enabled>true</enabled>
    </releases>
    <snapshots>
      <enabled>true</enabled>
    </snapshots>
  </repository>
</repositories>

Happy coding and don't forget to share! Reference: Spring 3.1: Caching and EhCache from our JCG partner Roger Hughes at the Captain Debug's Blog blog....

JavaOne 2012: How Do Non-Blocking Data Structures Work?

I was a little surprised when I looked at my schedule for today and noted that all of the sessions I currently plan to see today are in the Hilton. This became a little less surprising when I realized that about half of the JavaOne presentations are in the Hilton and that they seem to be roughly located by track. Tobias Lindaaker's (Neo Technology) presentation was held in the Hilton's Golden Gate 3/4/5 conference room area. Lindaaker changed his presentation's title since he originally submitted the abstract: the abstract's title (and that listed in the conference materials) was 'How Do Atomic Data Structures Work?,' but he has renamed it to 'How Do Non-Blocking Data Structures Work?'

Lindaaker explained that 'atomic' comes from Greek, meaning 'undividable.' He explained that a 'lock-free data structure' is 'a data structure that does not block any threads when performing an operation on the data structure (read or write),' and stated that one wants to avoid 'spin-waiting' whenever possible.

Lindaaker talked about synchronized regions. He said such regions 'create a serialized path through the code' and 'guarantee safe publication.' He defined 'safe publication' as meaning 'everything written before exiting synchronized [block]' is 'guaranteed to be visible on entry of synchronized [block].' One of his bullets stated, 'volatile fields give you safe publication without serialization,' and Lindaaker focused more on the volatile keyword modifier in his 'volatile fields' slide. The slide 'What is a memory barrier?' provided a simple visual representation of the memory barrier concept.

For his slide 'Atomic updates,' Lindaaker stated that the easiest way to access an atomic reference is via java.util.concurrent.atomic.AtomicReference<V>. Lindaaker provided a physical demonstration using coasters to illustrate the difference between compareAndSet (sets the new value only if the current value matches the expected one) and getAndSet (sets the new value and returns the old one); I've sketched the compareAndSet retry idiom in code below. Lindaaker prefers java.util.concurrent.atomic.AtomicReferenceFieldUpdater<T,V> because of its 'lower memory overhead' ('fewer object headers') and 'better memory locality' ('no reference indirection').

Lindaaker explained that array-based queues block (sometimes a benefit, when the amount of work needs to be limited due to finite hardware resources), while linked queues do not. He used a supermarket queue as an example of the difference: in the link-based queue, you always stand behind the same customer in front of you in the queue, while in the array-based queue, you always remain in the same position. Bounded queues 'frequently perform better,' but will block when full.

One of the main themes of this presentation was the idea of learning new ideas and then individually researching them further. Lindaaker recommended that audience members look at the JDK's code to see some impressive and less impressive code examples. Lindaaker referenced the LMAX (London Multi Asset Exchange) Disruptor as an example of a 'ring buffer' ('array with a read mark and a write mark'). He stated that 'readers contend on the read mark, writers on write mark' and highlighted the consequence of this: 'With single reader / single writer, there is no contention.' The Disruptor page describes Disruptor as a 'High Performance Inter-Thread Messaging Library.'
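Here is that sketch: a minimal, non-blocking counter built on java.util.concurrent.atomic.AtomicReference. This is my own illustration of the retry idiom, not code from Lindaaker's slides (for a plain counter, AtomicInteger would be the natural choice; AtomicReference is used to match the talk's focus):

import java.util.concurrent.atomic.AtomicReference;

public class CasRetryExample {

    // Each update installs a fresh value via compare-and-set
    private final AtomicReference<Integer> count = new AtomicReference<>(0);

    public int increment() {
        while (true) {
            Integer current = count.get();
            Integer next = current + 1;
            // compareAndSet succeeds only if no other thread replaced the
            // value since our read; on failure nobody blocks, we simply
            // re-read and retry.
            if (count.compareAndSet(current, next)) {
                return next;
            }
        }
    }
}

getAndSet, by contrast, never fails: it unconditionally installs the new value and hands back the old one, which is exactly the distinction the coaster demonstration illustrated.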
Lindaaker stated that java.util.concurrent.ConcurrentHashMap is a good general choice, but is not very exciting for discussion in his presentation; it 'scales reasonably well on current commodity hardware' (fewer than 100 CPUs) with proper tuning. Neo Technology provides a database implementation (Neo4j) that is not relational but rather a graph database; Lindaaker described it as one that 'stores data as nodes and relationships between nodes.' Don't forget to share! Reference: JavaOne 2012: How Do Non-Blocking Data Structures Work? from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

Stuff I Learned from Grails Consulting

I don't do much Grails consulting, since I work for the Engineering group and we have an excellent group of support engineers who usually work directly with clients. I do occasionally teach the 3-day Groovy and Grails course, but I've only been on two onsite consulting gigs so far, and one was a two-week engagement that ended last week. As is often the case when you teach something or help someone else out, I learned a lot and was reminded of a lot of stuff I'd forgotten about, so I thought it would be good to write some of that down for future reference.

SQL Logging

There are two ways to view the SQL generated by queries: adding logSql = true in DataSource.groovy, and configuring Log4j loggers. The Log4j approach is a lot more flexible, since it doesn't just dump to stdout; it can be routed to a file or another appender, and conveniently enabled and disabled. But it turns out it's also easy to toggle SQL console logging at runtime. Get a reference to the sessionFactory bean (e.g. using dependency injection with def sessionFactory) and turn it on with

sessionFactory.settings.sqlStatementLogger.logToStdout = true

and off with

sessionFactory.settings.sqlStatementLogger.logToStdout = false

stacktrace.log

The stacktrace.log file was getting very large and they wanted to configure it to use a rolling file appender. Seemed simple enough, but it took a lot longer than I expected. The trick is to create an appender with the name 'stacktrace'; the Grails logic that parses the Log4j DSL looks for an existing appender and uses it, and only configures the default one if there isn't one already configured. So here's one that configures a RollingFileAppender with a maximum of 10 files, each a maximum of 10MB in size, and with the standard layout pattern. In addition it includes logic to determine whether it's deployed in Tomcat, so it can write to the Tomcat logs folder, or to the target folder if you're using run-app. If you're deploying to a different container, adjust the log directory calculation appropriately.

appenders {
    String logDir = grails.util.Environment.warDeployed ?
            System.getProperty('catalina.home') + '/logs' : 'target'
    // double-quoted GString so $logDir is interpolated
    rollingFile name: 'stacktrace',
                maximumFileSize: 10 * 1024 * 1024,
                file: "$logDir/stacktrace.log",
                layout: pattern(conversionPattern: '%d [%t] %-5p %c{2} %x - %m%n'),
                maxBackupIndex: 10
}

Dynamic fooId property

In a many-to-one where you have a Foo foo field (or static belongsTo = [foo: Foo], which triggers adding a 'foo' field), you can access its foreign key with the dynamic fooId property. This can be used in a few ways. Since references like this are lazy by default, checking whether a nullable reference exists using foo != null involves loading the entire instance from the database, but checking fooId != null involves no database access. Other queries or updates that really only need the foreign key will be cheaper using fooId too. For example, to set a reference in another instance you would typically use code like this:

bar2.foo = bar1.foo
bar2.save()

But you can use the load method

bar2.foo = bar1.fooId ? Foo.load(bar1.fooId) : null
bar2.save()

and avoid loading the Foo instance just to set its foreign key in the second instance and then discard it. Deleting by id is less expensive too; ordinarily you use get to load an instance and call its delete method, but retrieving the entire instance isn't needed. You can do this instead:

Foo.load(bar.fooId).delete()

DRY constraints

You can use the importFrom method inside a constraints block in a domain class to avoid repeating constraints.
You can import all constraints from another domain class:

static constraints = {
    someProperty nullable: true
    ...
    importFrom SomeOtherDomainClass
}

and optionally use the include and/or exclude properties to apply a subset:

static constraints = {
    someProperty nullable: true
    ...
    importFrom SomeOtherDomainClass, exclude: ['foo', 'bar']
}

Flush event listener

They were seeing some strange behavior where collections that weren't explicitly modified were being changed and saved, causing StaleObjectStateExceptions. It wasn't clear what was triggering this behavior, so I suggested registering a Hibernate FlushEventListener to log the state of the dirty instances and collections during each flush:

package com.burtbeckwith.blog

import org.hibernate.HibernateException
import org.hibernate.collection.PersistentCollection
import org.hibernate.engine.EntityEntry
import org.hibernate.engine.PersistenceContext
import org.hibernate.event.FlushEvent
import org.hibernate.event.FlushEventListener

class LoggingFlushEventListener implements FlushEventListener {

    void onFlush(FlushEvent event) throws HibernateException {
        PersistenceContext pc = event.session.persistenceContext

        pc.entityEntries.each { instance, EntityEntry value ->
            if (instance.dirty) {
                println "Flushing instance $instance"
            }
        }

        pc.collectionEntries.each { PersistentCollection collection, value ->
            if (collection.dirty) {
                println "Flushing collection '$collection.role' $collection"
            }
        }
    }
}

It's not sufficient in this case to use the standard hibernateEventListeners map (described in the Grails docs), since that approach adds your listeners to the end of the list, and this listener needs to be at the beginning. So instead use this code in BootStrap.groovy to register it:

import org.hibernate.event.FlushEventListener
import com.burtbeckwith.blog.LoggingFlushEventListener

class BootStrap {

    def sessionFactory

    def init = { servletContext ->
        def listeners = [new LoggingFlushEventListener()]
        def currentListeners = sessionFactory.eventListeners.flushEventListeners
        if (currentListeners) {
            listeners.addAll(currentListeners as List)
        }
        sessionFactory.eventListeners.flushEventListeners =
                listeners as FlushEventListener[]
    }
}

'Read only' objects and Sessions

The read method was added to Grails a while back, and it works like get except that it marks the instance as read-only in the Hibernate Session. It's not really read-only: if it is modified it won't be a candidate for auto-flushing using dirty detection, but you can explicitly call save() or delete() and the action will succeed. This can be useful in a lot of ways, and in particular it is more efficient if you won't be changing the instance, since Hibernate will not maintain a copy of the original database data for dirty checking during the flush, so each instance will use about half of the memory that it would otherwise. One limitation of the read method is that it only works for instances loaded individually by id, but there are other approaches that affect multiple instances. One is to make the entire session read-only:

session.defaultReadOnly = true

Now all loaded instances will default to read-only, for example instances from criteria queries and finders. A convenient way to access the session is the withSession method on an arbitrary domain class:

SomeDomainClass.withSession { session ->
    session.defaultReadOnly = true
}

It's rare that an entire session will be read-only though.
You can set the results of an individual criteria query to be read-only with the setReadOnly method:

def c = Account.createCriteria()
def results = c {
    between('balance', 500, 1000)
    eq('branch', 'London')
    maxResults(10)
    setReadOnly true
}

One significant limitation of this technique is that attached collections are not affected by the read-only status of the owning instance (and there doesn't seem to be a way to configure a collection to ignore changes on a per-instance basis). Read more about this in the Hibernate documentation. Reference: Stuff I Learned Consulting from our JCG partner Burt Beckwith at the An Army of Solipsists blog....

JavaOne 2012: JavaOne Technical Keynote

Mark Reinhold started off the JavaOne 2012 Technical Keynote. He said this year's edition would be a little different because it would use largely the same example application to illustrate various aspects of Java, rather than giving standalone individual coverage to each component of Java. Richard Bair and Jasper Potts of the JavaFX team (and associated with FXExperience) introduced this example application, a schedule builder with presentation and speaker data from this year's JavaOne. As part of the introduction, the presenters made a point of noting that Oracle is shipping the JVM for Mac OS and that OpenJDK is what is being used in the example; they also stated that the example runs on Linux as well. They used Java SE 7 and JavaFX 2 for this application, and they talked about the availability of SceneBuilder for building a JavaFX application, demonstrating its use within NetBeans to generate the JavaFX-based login page. Other interesting JavaFX advancements mentioned include the addition of a ComboBox (though there is no date picker yet), interoperability with SWT, and the availability of a JavaFX Packager. It was also mentioned that JavaFX was architected and designed from the beginning to keep the main UI thread separate from background threads, allowing it to take advantage of multiple CPUs. Bair showed the relatively verbose code that would be required today to implement a JavaFX application that fully takes advantage of multiple threads.

Brian Goetz came to the stage to describe how Project Lambda and the associated changes to the Java language will enable 'better parallel libraries.' Goetz said that the easiest way to help developers is to give them better libraries, but the language must sometimes be extended when its limits prevent libraries from being written to fully satisfy the need. Goetz stated that the goals of inner classes are the same as Project Lambda's, but inner classes have 'a whole lot of other baggage.' Goetz added that bulk operations on collections may not 'really be needed, but things are better this way.' Goetz then showed a simple but highly illustrative example of how Project Lambda changes the way we process bulk data changes in a collection. His slide showed how the J2SE 5 enhanced for loop is used today, and how the same iteration can be written with the forEach method (added to all of the collections via the new default-method support on interfaces) and a Groovy-like closure syntax (->). Goetz's next slide was even more impressive: he showed what appeared to be three operations being performed on a collection as it was iterated, but pointed out that these would all be enacted at once, with only a single traversal of the collection. All I could think was, 'Wow!' Goetz also had a slide showing off the computeIfAbsent operation on collections. He ended by saying there's still lots of work to do and by citing two URLs for playing with Project Lambda: http://openjdk.java.net/projects/lambda/ and http://jdk8.java.net/lambda/.
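The slides went by quickly, so here is my own rough sketch of those ideas in final Java 8 syntax (not Goetz's actual slide code): the enhanced for loop versus forEach, a multi-operation pipeline that makes a single pass over the source, and computeIfAbsent:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class LambdaSketch {
    public static void main(String[] args) {
        List<String> speakers = Arrays.asList("Reinhold", "Goetz", "Gupta");

        // Today: the J2SE 5 enhanced for loop
        for (String s : speakers) {
            System.out.println(s);
        }

        // With Project Lambda: forEach is a default method on Iterable,
        // and '->' introduces the lambda
        speakers.forEach(s -> System.out.println(s));

        // Three operations, one traversal: the intermediate steps are
        // fused and the source is walked once when collect() runs
        List<String> shouty = speakers.stream()
                .filter(s -> s.length() > 5)
                .map(String::toUpperCase)
                .sorted()
                .collect(Collectors.toList());
        System.out.println(shouty); // [REINHOLD]

        // computeIfAbsent: create-and-cache a value on first access
        Map<String, List<String>> sessions = new HashMap<>();
        sessions.computeIfAbsent("Goetz", k -> new ArrayList<>())
                .add("Project Lambda");
        System.out.println(sessions); // {Goetz=[Project Lambda]}
    }
}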
There was some interesting discussion on the differences between traditional Java environments and embedded environments; Raspberry Pi received multiple and prominent mentions. Reinhold started talking about modularity and Project Jigsaw and showed a 'little bit of a spaghetti diagram that is way cleaner than where we started, which was a total spaghetti diagram.' He used this as a starting point for discussing the controversial decision to boot Project Jigsaw from Java 8 to Java 9. Reinhold also had a slide focused on things that are in Java 8, such as Project Lambda, Compact Profiles, Type Annotations, Project Nashorn, and the new Date/Time API. Reinhold added that 'all this work is being done in OpenJDK' and that 'all the specification work is being done in the JCP.'

Arun Gupta had the unenviable task of beginning his presentation at the time the keynote was scheduled to end (7 pm local time). He talked about Java EE and showed a slide titled 'Java EE Past, Present, & Future,' which showed how Java EE has added features since the ten specifications of J2EE 1.2 in December 1999. Gupta had another slide on the 'Java EE 7 Revised Scope' and how it increases productivity (via less boilerplate code, richer functionality, and more defaults) and adds HTML5 support (WebSocket, JSON, and HTML5 Forms). Another Gupta slide, titled 'Java EE 7 - Candidate JSRs,' listed the JSRs that are new to Java EE 7 as well as those being modified; he then focused individual slides on some of them. His 'Java API for RESTful Web Services 2.0' slide talked about a standardized approach using a client API, and his slides comparing how this is done today (without libraries) to the new client API demonstrated how much simpler this is going to be. Gupta's coverage of JMS 2.0 included discussion of less verbosity in JMS thanks to annotations and other new features of the Java programming language. He mentioned that the required resource adapter will make it easier to 'mix and match' JMS providers in the future. Gupta showed a slide full of small-font code ('this code is not meant to be readable') demonstrating sending a message using JMS 1.1, followed by a slide showing significantly less (and much clearer) code in JMS 2.0, taking advantage of annotations and resource injection to send a message.

Gupta's coverage of the JSON support to be added to Java EE included the bullet 'API to parse, generate, transform, query, etc. JSON.' He then showed slides with example JSON-formatted data and example code using a builder style to access the JSON; it felt a lot like Groovy's JSON handling. Java API for WebSocket 1.0 will allow annotations to be used to easily work with WebSocket. When covering Bean Validation 1.1, Gupta pointed out that not all newly adopted JSRs are being led by Oracle. He showed using the built-in @NotNull annotation on method parameters, but also showed that one will be able to write custom constraints that can be similarly applied to method arguments. Gupta highlighted miscellaneous improvements to Java EE such as JPA 2.1, EJB 3.2, etc. The majority of these JSRs have early public drafts available. GlassFish 4 is the reference implementation of Java EE 7 and already includes WebSocket, JSON, JMS 2, and more.

One of Gupta's slides was focused on Avatar. The 'Angry Bids' example application was demonstrated; it is based on Avatar, runs on GlassFish, and uses standard Java EE 7 components. Gupta also introduced Project Easel for NetBeans. It was mentioned that the NetBeans 7.3 beta would be coming out later this week and will include support for HTML5 as a new project type. The example being shown uses jQuery and CSS. The NetBeans-based example communicated through Google Chrome to WebKit (it also works with the JavaFX-embedded browser), but it is expected to work eventually with any WebKit-based browser or device.
The demonstrator showed how his changes to HTML5 code (HTML, JavaScript, and CSS) within NetBeans were updated live in the Google Chrome browser. It was pretty impressive, and it makes me wish I had had enough time to accept an invitation to provide early testing of NetBeans 7.3. NetBeans is going to be able to generate RESTful clients, support jQuery, and provide a Project Nashorn editor. A similar demo to this one is available at http://netbeans.org/kb/docs/web/html5-gettingstarted-screencast.html. Like the Strategy Keynote, this Technical Keynote was held in the Masonic Auditorium. One of the interesting trends I noticed in tonight's keynotes was that at least three different people from three different organizations mentioned that they are looking for skilled Java developers and invited interested attendees to contact them about job opportunities. Reference: JavaOne 2012: JavaOne Technical Keynote from our JCG partner Dustin Marx at the Inspired by Actual Events blog....