Quartz 2: Exploring different scheduling types

We often think of cron when we want to schedule a job. Cron is very flexible at expressing a repeating occurrence of an event or job in a very compact expression. However, it's not the answer to everything, as I often see when people ask for help in the Quartz user forum. Did you know that the popular Quartz 2 library provides many other schedule types (called Triggers) besides cron? I will show you each of the Quartz 2 built-in schedule types here, each within a complete, standalone Groovy script that you can run and test. Let's start with a simple one.

@Grab('org.quartz-scheduler:quartz:2.1.6')
@Grab('org.slf4j:slf4j-simple:1.7.1')
import org.quartz.*
import org.quartz.impl.*
import org.quartz.jobs.*
import static org.quartz.DateBuilder.*
import static org.quartz.JobBuilder.*
import static org.quartz.TriggerBuilder.*
import static org.quartz.SimpleScheduleBuilder.*

def trigger = newTrigger()
    .withSchedule(
        simpleSchedule()
        .withIntervalInSeconds(3)
        .repeatForever())
    .startNow()
    .build()
dates = TriggerUtils.computeFireTimes(trigger, null, 20)
dates.each{ println it }

This is Quartz's SimpleTrigger, and it allows you to create a fixed-rate repeating job. You can even limit it to a certain repeat count if you like. I have imported all the necessary classes the script needs, and I use the latest Quartz 2.x builder API to create an instance of the trigger. The quickest way to explore and test whether a schedule fits your need is to print out its future execution times; hence you see me using TriggerUtils.computeFireTimes in the script. Run the above and you should get the datetimes scheduled to run, in this case every 3 seconds.

$ groovy simpleTrigger.groovy
Tue Oct 23 20:28:01 EDT 2012
Tue Oct 23 20:28:04 EDT 2012
Tue Oct 23 20:28:07 EDT 2012
Tue Oct 23 20:28:10 EDT 2012
Tue Oct 23 20:28:13 EDT 2012
Tue Oct 23 20:28:16 EDT 2012
Tue Oct 23 20:28:19 EDT 2012
Tue Oct 23 20:28:22 EDT 2012
Tue Oct 23 20:28:25 EDT 2012
Tue Oct 23 20:28:28 EDT 2012
Tue Oct 23 20:28:31 EDT 2012
Tue Oct 23 20:28:34 EDT 2012
Tue Oct 23 20:28:37 EDT 2012
Tue Oct 23 20:28:40 EDT 2012
Tue Oct 23 20:28:43 EDT 2012
Tue Oct 23 20:28:46 EDT 2012
Tue Oct 23 20:28:49 EDT 2012
Tue Oct 23 20:28:52 EDT 2012
Tue Oct 23 20:28:55 EDT 2012
Tue Oct 23 20:28:58 EDT 2012

The most frequently used scheduling type is the CronTrigger, and you can test it out in a similar way.

@Grab('org.quartz-scheduler:quartz:2.1.6')
@Grab('org.slf4j:slf4j-simple:1.7.1')
import org.quartz.*
import org.quartz.impl.*
import org.quartz.jobs.*
import static org.quartz.DateBuilder.*
import static org.quartz.JobBuilder.*
import static org.quartz.TriggerBuilder.*
import static org.quartz.CronScheduleBuilder.*

def trigger = newTrigger()
    .withSchedule(cronSchedule('0 30 08 * * ?'))
    .startNow()
    .build()
dates = TriggerUtils.computeFireTimes(trigger, null, 20)
dates.each{ println it }

The javadoc for CronExpression is very good, and you should definitely read it thoroughly to use cron expressions effectively. With the script, you can easily explore any combination you want and verify future fire times before your job is invoked. Now, if you have an odd scheduling need, such as running a job every 30 minutes from Monday to Friday but only between 8:00 AM and 10:00 AM, then don't try to cram all that into a cron expression. Quartz 2.x has a dedicated trigger type just for this use case, and it's called DailyTimeIntervalTrigger!
Check this out (a sketch using Quartz's DailyTimeIntervalScheduleBuilder):

@Grab('org.quartz-scheduler:quartz:2.1.6')
@Grab('org.slf4j:slf4j-simple:1.7.1')
import org.quartz.*
import org.quartz.impl.*
import static org.quartz.DateBuilder.*
import static org.quartz.TriggerBuilder.*
import static org.quartz.DailyTimeIntervalScheduleBuilder.*
import static org.quartz.TimeOfDay.*
import static java.util.Calendar.*

def trigger = newTrigger()
    .withSchedule(
        dailyTimeIntervalSchedule()
        .startingDailyAt(hourAndMinuteOfDay(8, 0))
        .endingDailyAt(hourAndMinuteOfDay(10, 0))
        .onDaysOfTheWeek(MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY)
        .withIntervalInMinutes(30))
    .startNow()
    .build()
dates = TriggerUtils.computeFireTimes(trigger, null, 20)
dates.each{ println it }

Another hidden trigger type from Quartz is CalendarIntervalTrigger. You would use it when you need a job to repeat at intervals of a calendar period, such as every year or month, where the interval is not fixed but calendar specific. Here is a test script for that.

@Grab('org.quartz-scheduler:quartz:2.1.6')
@Grab('org.slf4j:slf4j-simple:1.7.1')
import org.quartz.*
import org.quartz.impl.*
import org.quartz.jobs.*
import static org.quartz.DateBuilder.*
import static org.quartz.JobBuilder.*
import static org.quartz.TriggerBuilder.*
import static org.quartz.CalendarIntervalScheduleBuilder.*
import static java.util.Calendar.*

def trigger = newTrigger()
    .withSchedule(
        calendarIntervalSchedule()
        .withInterval(2, IntervalUnit.MONTH))
    .startAt(futureDate(10, IntervalUnit.MINUTE))
    .build()
dates = TriggerUtils.computeFireTimes(trigger, null, 20)
dates.each{ println it }

I hope these will help you get started on most of your scheduling needs with Quartz 2. Trying these out and seeing the future fire times before you even schedule a job into the scheduler should save you some time and trouble.

Reference: Exploring different scheduling types with Quartz 2 from our JCG partner Zemian Deng at the A Programmer's Journal blog.
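These scripts only compute fire times; to actually run a job on one of these triggers, you hand a JobDetail and the trigger to a scheduler. Here is a minimal sketch in plain Java using the standard Quartz 2 API (the job class and its identity name are made up for illustration):

import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;
import static org.quartz.JobBuilder.*;
import static org.quartz.SimpleScheduleBuilder.*;
import static org.quartz.TriggerBuilder.*;

public class HelloQuartz implements Job {

    // The work to perform on every fire.
    public void execute(JobExecutionContext context) throws JobExecutionException {
        System.out.println("Fired at " + context.getFireTime());
    }

    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        JobDetail job = newJob(HelloQuartz.class).withIdentity("helloJob").build();
        Trigger trigger = newTrigger()
                .withSchedule(simpleSchedule().withIntervalInSeconds(3).repeatForever())
                .startNow()
                .build();
        scheduler.scheduleJob(job, trigger);
        scheduler.start(); // fires HelloQuartz every 3 seconds until shutdown
    }
}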
Batoo JPA – 15x Faster Than The Leading JPA Provider

Introduction

I loved JPA 1.0 back in the early 2000s. I started using it together with EJB 3.0 even before the stable releases. I loved it so much that I contributed bits and parts to the JBoss 3.x implementations. In those days our company was still considerably small in size. Creating new features and applications was a higher priority than performance, because we had a lot of ideas and we needed to develop and market them as fast as we could. We no longer needed to write tedious and error-prone XML descriptions for the data model and deployment descriptors, nor did we need to use the curse called "XDoclet". Meanwhile, our company grew steadily and our web site became the top portal in the country for live events and ticketing. Now we had performance problems! Although the company grew considerably, due to the economics of the industry we did not make a lot of money. The challenge was that ours was a ticketing company. Every e-commerce business has high and low seasons, but ticketing has low seasons and high hours: while you sell on average x tickets an hour, when a blockbuster event goes on sale, demand suddenly becomes thousands of x for an hour. Welcome to hell! We worked day and night to tweak and enhance the application, using whatever was available, to keep it up on a big day. To be frank, there was always a bigger event capable of bringing the site down, no matter how hard we tried. The dream was over; I came to realize that developing applications on top of frameworks is a bit "be careful!" along with "fun".

I Kept Learning

I loved programming, I loved Java, I loved open source. I developed almost every possible type of application on every possible platform I could. For the rest, I went in and discovered stuff. Thanks to open source, I learned a lot from the masters: I read articles and code written by great programmers like Linus Torvalds, Gavin King, Ed Merks and so many others. With the experience I gathered, I quit the ticketing company I loved and became a software consultant. This opened a new era in front of me, with a lot of different industries and platforms. In each project I became the performance police of the application. I am now a performance freak!

I Took The Red Pill!

One day I asked myself: could JPA be faster? And if yes, how fast could it be? I spent about two weeks creating an entity manager that persisted and loaded entities. Then I ran it and compared the results to ones off of Hibernate. The results were not really promising: I was only about 50% faster than Hibernate at persisting and finding the entities. I spent another week tweaking the loops, caching metamodel chunks, changing access to classes from interfaces to abstract classes, modifying the lists to arrays and so many other things. Suddenly I had a prototype that was 50+ times faster than Hibernate!

Development of Batoo JPA

I was astonished by how drastically performance went up just by paying attention to performance-centric coding. By then I was using VisualVM to measure the time spent in the JPA layer. I got down and wrote a self-profiling tool that measured the CPU resources spent at the JPA layer, and started implementing every aspect of the JPA 2.0 specification.
After each iteration I re-ran the benchmark, and when performance dropped considerably I went back to the changes and inspected the new code line by line – the profiling tool I created reported the performance hit of every line of the JPA stack. It took about 6 months to implement the specification as a whole; on top of it, I introduced a Maven plugin to perform bytecode instrumentation at build time and a complementary Eclipse plugin to allow use of instrumentation in the Eclipse IDE. After those 6 months of gestation, Batoo JPA was born in August 2012. It measured over 15 times faster than Hibernate.

Benchmark

As stated earlier, a benchmark was introduced to measure every micro development iteration of Batoo JPA. This benchmark was not created to put forward the areas where Batoo JPA was fast so that others would believe in Batoo JPA; it was created to put together the most common domain model and persistence operations that exist in almost every JPA application – so that I knew how fast Batoo JPA was.

Performance Metrics

The scenario is:
- a Person object
- with phone numbers – PhoneNumber objects
- with addresses – Address objects
- that point to a country – Country object

Common life-cycle tasks have been introduced:
- Persist 100K Person objects with two phone numbers and two addresses, in lots of 10 per session
- Locate and load 250K Person objects, in lots of 10 per session
- Remove 5K Person objects, in lots of 5 per session
- Update 100K Person objects, in lots of 100
- Query Person objects 25K times using the object-oriented Criteria querying API
- Query Person objects 25K times using JPQL – the Java Persistence Query Language, an SQL-like query scripting language

For the sake of simplicity, the benchmark was run on top of in-memory embedded Derby, with the profiler slicing the times spent at the
- Unit Test Layer
- JPA Layer
- Derby Layer

The time spent at the Unit Test Layer is omitted from the results due to irrelevancy.

Results

The times given in the tables below are in milliseconds spent in the JPA layer while running the benchmark scenario. The same tests are run for Batoo and Hibernate JPA in different runs to isolate boot, memory, cache, garbage collection etc. effects. The tables below show
- the total time spent at the Derby Layer as DB Operation Total
- the type of the test as Test
- the times for each test at the Derby Layer as DB Operation
- the times for each test at the JPA Layer as Core Operation
- the total time spent at the JPA Layer as Core Operation Total
- the total time spent at both the JPA and Derby Layers as Operation Total

Below are the ratios of CPU resources spent by Hibernate and Batoo JPA. It is assumed that an application generates, in ratio, an average of 1 save, 5 locates, 2 removes and 3 updates, plus 5 + 5 = 10 queries. Although these numbers are extremely dependent on the nature of the application, some sort of assumption is needed to measure the overall speed comparison.

Given the scenario above, Batoo JPA measures over 15 times faster than Hibernate – the leading JPA implementation. As you may have noticed, Batoo JPA not only performs insanely fast at the JPA layer, it also employs a number of optimizations to relieve the pressure on the database. This is why Batoo JPA measures half the time at the DB layer in comparison to Hibernate.

Interpretation of Results

We do appreciate that JPA is not the only part of an application, but we do believe that current JPA implementations consume quite a bit of your server budget.
While a typical application cluster spends about 20% to 40% of its CPU resources on the persistence layer, Batoo JPA may well be able to bring your cluster down to half its size, allowing you to save a lot on licensing, administration and hardware, and giving you room to scale up even for non-cluster-friendly applications – in my experience I have seen applications running on 96-core Solaris systems simply because they were not scalable.

Conclusion

We have managed to create a JPA product that allows you to enjoy the great features of JPA technology without requiring you to compromise on performance! On top of that, Batoo JPA is developed using the Apache coding standards and has valuable documentation within the code. The project codebase is released under the LGPL license, there is absolutely no closed-source part, and we envision that it will stay that way forever. As stated earlier, it also has complementary Maven and Eclipse plugins to provide instrumentation for the build and development phases. Batoo JPA deviates from the specification almost not at all, making it easy for existing JPA applications to be migrated to Batoo JPA, while requiring no additional learning phase to start using it. Last but not least, Batoo JPA not only saves you when you run your application, but also while you deploy it: Batoo JPA employs parallel deployment managers to handle deployment in parallel. Considering that a developer deploys the application easily 10 times a day if not 100 during development, with a moderately large domain model this can add up to quite a bit of a developer's time. Although we haven't made a concrete benchmark of deployment speed, we know that Batoo JPA deploys about 3-4 times faster than Hibernate. We appreciate the time you spent reading this paper and would love to give you a free inspection of your application and demonstrate how much you can gain by simply replacing your JPA implementation.

Useful Links:
- The project website – http://batoo.jp/
- The sources and issue management of Batoo JPA are hosted at GitHub – https://github.com/organizations/BatooOrg
- You may discuss Batoo JPA on StackOverflow.com – http://stackoverflow.com/questions/ask?tags=batoo+jpa
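To give a concrete sense of what the benchmark above exercises – and of why a drop-in JPA provider is attractive – here is a sketch of the persist scenario in standard JPA 2.0 code. The entity shapes are assumed from the scenario description, and the persistence unit name and helper method are hypothetical; the same code runs against Batoo or Hibernate, with only the provider in persistence.xml changed:

import javax.persistence.*;
import java.util.ArrayList;
import java.util.List;

@Entity
public class Person {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    // Each benchmark Person carries two phone numbers and two addresses.
    @OneToMany(cascade = CascadeType.ALL)
    private List<PhoneNumber> phoneNumbers = new ArrayList<PhoneNumber>();

    @OneToMany(cascade = CascadeType.ALL)
    private List<Address> addresses = new ArrayList<Address>();

    // getters and setters omitted
}

The persist step then looks like ordinary JPA:

// Persisting in lots of 10 per session, as the scenario describes.
EntityManagerFactory emf = Persistence.createEntityManagerFactory("benchmark"); // unit name assumed
EntityManager em = emf.createEntityManager();
em.getTransaction().begin();
for (int i = 0; i < 10; i++) {
    em.persist(newPersonWithTwoPhonesAndTwoAddresses()); // hypothetical helper building the object graph
}
em.getTransaction().commit();
em.close();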
Java Temporary Caching API – Test-driving the Early Draft Review RI

It was known as 'The Neverending Story'. The JSR kicked off 11 and a half years ago and passed the JSR Review Ballot on 06 Mar, 2001. If you ever wondered what it takes to get a fancy low JSR number in the hundreds: that is the secret. Unlike the German fantasy novel by Michael Ende, this was not about people's lack of imagination but about resources, political discussions and, finally, licensing. But let's forget about the past and move on to what has been there since yesterday. Note that this material was uploaded to the JCP in February but was delayed while the legal complications of having two companies as shared spec leads got sorted out. That is done and will not be an issue going forward in the process.

What is it all about?

Caching is known for dramatically speeding up applications. Applications typically use temporary data which is expensive to create but has a long lifetime during which it can be re-used. This specification standardizes caching of Java objects in a way that allows an efficient implementation, and removes from the programmer the burden of implementing cache expiration, mutual exclusion, spooling, and cache consistency. It is designed to work with both Java SE and Java EE. For the latter, it is still not ensured that it will be included in the upcoming EE 7 release, but the EG is working hard on it and needs your feedback.

How do I get my hands on it?

That is easy. All the needed artifacts are in Maven Central already. Let's build a very simple sample to get you started. Fire up NetBeans and create a new Maven Java Application. Name it whatever you like (e.g. cachingdemo), open the pom.xml and add the following two dependencies to it:

<dependency>
    <groupId>javax.cache</groupId>
    <artifactId>cache-api</artifactId>
    <version>0.5</version>
</dependency>
<dependency>
    <groupId>javax.cache.implementation</groupId>
    <artifactId>cache-ri-impl</artifactId>
    <version>0.5</version>
</dependency>

And while you are there, change the JUnit version to 4.8.2. Refactor the AppTest to utilize the new JUnit:

package net.eisele.samples.cachingdemo;

import org.junit.Test;

/**
 * Simple Cache Test
 */
public class AppTest {

    @Test
    public void testApp() {
    }
}

All set. To keep this simple, I'm going to add some caching features in the test case.

The Basic Concepts

From a design point of view, the basic concepts are a CacheManager that holds and controls a collection of Caches. Caches have entries. The basic API can be thought of as map-like. Like a map, data is stored as values by key. You can put values, get values and remove values. But it does not have high-network-cost map-like methods such as keySet() and values(), and it generally prefers zero or low cost return types. So while Map has V put(K key, V value), javax.cache.Cache has void put(K key, V value).

// Name for the cache
String cacheName = "myfearsCache";
// Create a cache using a CacheBuilder
Cache<Integer, String> cache = Caching.getCacheManager()
        .<Integer, String>createCacheBuilder(cacheName).build();
// define a value
String value1 = "Markus";
// define a key
Integer key = 1;
// put to the cache
cache.put(key, value1);
// get from the cache
String value2 = cache.get(key);
// compare values
assertEquals(value1, value2);
// remove from the cache
cache.remove(key);
// check if removed
assertNull(cache.get(key));
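Putting those pieces together, the complete test class might look like this (nothing new here, just the snippets above assembled into the AppTest):

package net.eisele.samples.cachingdemo;

import javax.cache.Cache;
import javax.cache.Caching;
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;

/**
 * Simple Cache Test
 */
public class AppTest {

    @Test
    public void testApp() {
        // Name for the cache
        String cacheName = "myfearsCache";
        // Create a cache using a CacheBuilder
        Cache<Integer, String> cache = Caching.getCacheManager()
                .<Integer, String>createCacheBuilder(cacheName).build();
        String value1 = "Markus";
        Integer key = 1;
        cache.put(key, value1);
        // Reading the value back should yield what we put in...
        assertEquals(value1, cache.get(key));
        cache.remove(key);
        // ...and after removal the key should be gone.
        assertNull(cache.get(key));
    }
}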
Things to come

This basically is all that is possible at the moment. Going down the road, with subsequent releases you should be able to:
- integrate with Spring and CDI via @Annotations
- use CacheEventListener
- work with transactions

The EG is actively searching for feedback on the available stuff. So, if you can get your hands on it, give it a try and let the EG know what you think!

Links and Reading
- JCP page: JSR 107: JCACHE – Java Temporary Caching API
- Group mailing list: http://groups.google.com/group/jsr107
- Log issues in the issue tracker: https://github.com/jsr107/jsr107spec/issues
- A very simple demo: https://github.com/jsr107/demo
- ehcache-jcache – an implementation of the 0.5 specification: https://github.com/jsr107/ehcache-jcache

Reference: Java Temporary Caching API – Test-driving the Early Draft Review RI from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog.
Java to iOS Objective-C translation tool and runtime

If you work on a mobile app and you're planning to develop it for both Android and iOS, it may be less work for you to write it on Android first. Google recently released a new tool that makes porting Java code to iOS much easier. The project (j2objc) can be found here. To give the program a quick test run, I ran it using this Java file:

public class hello {
    public static void main(String[] args) {
        System.out.println("To Objective C we go!");
    }
}

(For reference, translation is a single invocation of the j2objc tool on the source file – e.g. j2objc hello.java – assuming the tool is on your PATH.)

After running the above code through j2objc, I received two files (as expected): the header file, hello.h, and the source file, hello.m. The source file looks like this:

//
// Generated by the J2ObjC translator. DO NOT EDIT!
// source: hello.java
//
// Created by Isaac on 10/18/12.
//

#import "IOSObjectArray.h"
#import "hello.h"

@implementation hello

@end

int main( int argc, const char *argv[] ) {
    int exitCode = 0;
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    IOSObjectArray *args = JreEmulationMainArguments(argc, argv);
    NSLog(@"%@", @"To Objective C we go!");
    [pool release];
    return exitCode;
}

J2ObjC certainly has to make a few decisions when converting between the two languages. One is seen above, as System.out.println is converted to NSLog. I imagine these kinds of decisions are made throughout the program. Currently J2ObjC can convert much of Java, including things like reflection and anonymous classes. Although the tool can't touch any Java code specific to the Android APIs, it can handle anything written in pure Java. The tool certainly has potential, especially for converting standard Java classes directly to their Objective-C equivalents.

Reference: A Reason to Write Android Apps First from our JCG partner Isaac Taylor at the Programming Mobile blog.
Design Best practices using Factory Method Pattern

In the previous design pattern example we explained a flavor of the Factory pattern which is commonly used nowadays. In this session we will look at a more advanced solution with an additional level of abstraction. This pattern is called the Factory Method design pattern.

Definition:

The Factory Method pattern exposes a method for creating objects but delegates the object creation to subclasses. The Factory Method design pattern resolves the problems described below along similar lines to the Factory pattern, with an additional level of abstraction.

An object can be instantiated using the new keyword; e.g. object A creates another object B using:

ClassB objB = new ClassB();

So object A holds a reference to object B. Since object A is now dependent on object B, if the latter gets modified then we will have to recompile object A. Well, life is not that easy. Object creation can be more complex, and when there is more coupling, maintenance becomes a painful and expensive job in software development. To avoid such worst-case situations, the creational design patterns come to the rescue. They create loose coupling between the client and the object creator and give the developers several other design benefits. The Factory Method pattern is one such pattern for solving these design issues.

Common use:

The Factory Method design pattern is commonly used in various frameworks such as Struts, Spring and Apache, in conjunction with the Decorator design pattern. There are various J2EE patterns which are based on this Factory pattern, e.g. the DAO pattern.

Let's take the same example of the Garment Factory, where we were creating various types of garments while the client was completely unaware of how these products were created. Even if we have to add a new garment type like Jacket, the client code need not change, which increases the flexibility of the application.

When to use the Factory Method pattern?

- The creation of an object requires reuse of code without significant duplication.
- A class cannot anticipate which subclasses it will be required to create.
- Subclasses may specify which objects should be created.
- Parent classes delegate the creation of objects to their subclasses.

Structure

The diagram below highlights a typical structure of the Factory Method design pattern. Unlike the earlier example, an additional abstract Factory class has been added.

In the diagram, the participants are:

- Product: defines an interface for the objects the factory method creates.
- Concrete Products: implement the Product interface.
- Factory (Creator): an abstract class which declares the factory method that returns a Product object.
- Concrete Factory: implements and overrides the methods declared by the parent Factory class.

The client (e.g. Class A) wants to use the products created by the ConcreteFactory class (Class B). However, the client only holds a reference to the abstraction rather than to the object of Class B, and so it doesn't need to know anything about Class B. In fact, there can be multiple classes implementing the abstract class.

What is meant by "the Factory Method pattern allows the subclasses to decide which class to instantiate"?

It basically means that the Factory abstract class is coded without knowing which actual ConcreteProduct classes will be instantiated, i.e. whether it is Trouser or whether it is Shirt.
This is completely determined by the ConcreteFactory class. Now let's apply the above pattern to our GarmentFactory example and do some hands-on work. We are not repeating the code for concrete products like Shirt.java and Trouser.java, which can be found in the Factory Pattern article (a sketch of them follows at the end of this post). A new abstract Factory class has been created which is client facing.

public abstract class Factory {
    protected abstract GarmentType createGarments(String selection);
}

The GarmentFactory class needs to be modified to extend the abstract Factory class:

public class GarmentFactory extends Factory {
    public GarmentType createGarments(String selection) {
        if (selection.equalsIgnoreCase("Trouser")) {
            return new Trouser();
        } else if (selection.equalsIgnoreCase("Shirt")) {
            return new Shirt();
        }
        throw new IllegalArgumentException("Selection does not exist");
    }
}

The client refers to the Factory class and calls the createGarments(selection) method of the factory to create the product at runtime:

Factory factory = new GarmentFactory();
GarmentType objGarmentType = factory.createGarments(selection);
System.out.println(objGarmentType.print());

Benefits:

- Code is flexible, loosely coupled and reusable, because object creation moves from the client code to the Factory class and its subclasses.
- Such code is easier to maintain since object creation is centralized.
- The client code deals only with the Product interface, so new concrete products can be added without modifying the client code logic.
- A factory method can return the same instance multiple times, or can return a subclass rather than an object of the exact requested type.
- It encourages consistency in the code, as objects are created through a factory which enforces a definite set of rules that everybody must follow. This avoids having different clients call different constructors directly.

Example: JDBC is a good example of this pattern; application code doesn't need to know which database it will be used with, so it doesn't know which database-specific driver classes it should use. Instead, it uses factory methods to get Connections, Statements, and other objects to work with. This gives you the flexibility to change the back-end database without changing your DAO layer.

Some examples from the SDK:
- valueOf() methods, which return an object created by the factory, equivalent to the value of the parameter passed.
- getInstance() methods, which create the instance of a Singleton class.
- newInstance() methods, which create and return a new instance from the factory method every time they are called.

Download Sample Code

Reference: Design Best practices using Factory Method Pattern from our JCG partner Mainak Goswami at the Idiotechie blog.
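For readers who don't have the earlier article at hand, the product side referenced above might look like the following sketch (inferred from the factory code; the original Shirt.java and Trouser.java may differ in detail):

// The Product interface: everything the client needs to know about a garment.
public interface GarmentType {
    String print();
}

// Concrete Products: instantiated only by the factory, never by the client directly.
public class Shirt implements GarmentType {
    public String print() {
        return "Shirt created";
    }
}

public class Trouser implements GarmentType {
    public String print() {
        return "Trouser created";
    }
}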
Testing Quartz Cron expressions

Declaring complex cron expressions still gives me headaches, especially when more advanced constructs are used. After all, can you tell when the following trigger will fire: "0 0 17 L-3W 6-9 ? *"? Since triggers are often meant to run far in the future, it's desirable to test them beforehand and make sure they will actually fire when we think they will.

Quartz scheduler (I'm testing version 2.1.6) doesn't provide direct support for that, but it's easy to craft a simple function based on existing APIs, namely the CronExpression.getNextValidTimeAfter() method. Our goal is to define a method that returns the next N scheduled executions for a given cron expression. We cannot request all of them, since some triggers (including the one above) have no end date and repeat infinitely. We can only depend on the aforementioned getNextValidTimeAfter(), which takes a date as an argument and returns the nearest fire time T1 after that date. So if we want to find the second scheduled execution, we must ask for the next execution after the first one (T1). And so on. Let's put that into code:

def findTriggerTimesIterative(expr: CronExpression, from: Date = new Date, max: Int = 100): Seq[Date] = {
  val times = mutable.Buffer[Date]()
  var next = expr getNextValidTimeAfter from
  while (next != null && times.size < max) {
    times += next
    next = expr getNextValidTimeAfter next
  }
  times
}

If there is no next fire time (e.g. the trigger is only supposed to run in 2012 and we ask about fire times after the 1st of January 2013), null is returned. A little bit of crash testing:

findTriggerTimesIterative(new CronExpression("0 0 17 L-3W 6-9 ? *")) foreach println

yields:

Thu Jun 27 17:00:00 CEST 2013
Mon Jul 29 17:00:00 CEST 2013
Wed Aug 28 17:00:00 CEST 2013
Fri Sep 27 17:00:00 CEST 2013
Fri Jun 27 17:00:00 CEST 2014
Mon Jul 28 17:00:00 CEST 2014
Thu Aug 28 17:00:00 CEST 2014
Fri Sep 26 17:00:00 CEST 2014
Fri Jun 26 17:00:00 CEST 2015
Tue Jul 28 17:00:00 CEST 2015
Fri Aug 28 17:00:00 CEST 2015
Mon Sep 28 17:00:00 CEST 2015
Mon Jun 27 17:00:00 CEST 2016
...

Hopefully the meaning of our complex cron expression is now clearer: the closest weekday (W) three days before the end of the month (L-3), between June and September (6-9), at 17:00:00 (0 0 17). Then I started experimenting a little with different implementations to find the most elegant one for this quite simple problem. First I noticed that the problem is not iterative but recursive: finding the next 100 execution times is equivalent to finding the first execution and then finding the 99 remaining executions after the first one:

def findTriggerTimesRecursive(expr: CronExpression, from: Date = new Date, max: Int = 100): List[Date] =
  expr getNextValidTimeAfter from match {
    case null => Nil
    case next => if (max > 0) next :: findTriggerTimesRecursive(expr, next, max - 1) else Nil
  }

The implementation seems much simpler: no match found – return the empty list (Nil); match found – return it prepended to the next matches, unless we have already collected enough dates. There is one problem with this implementation though: it's not tail-recursive.
Very often this can be fixed by introducing a second function and accumulating the intermediate results in its arguments:

def findTriggerTimesTailRecursive(expr: CronExpression, from: Date = new Date, max: Int = 100) = {

  @tailrec
  def accum(curFrom: Date, curMax: Int, acc: List[Date]): List[Date] = {
    expr getNextValidTimeAfter curFrom match {
      case null => acc
      case next => if (curMax > 0) accum(next, curMax - 1, next :: acc) else acc
    }
  }

  accum(from, max, Nil)
}

A little bit more complex, but at least a StackOverflowError won't wake us up in the middle of the night. BTW, I just noticed that IntelliJ IDEA not only shows icons identifying recursion (next to the line number), but also uses different icons when tail-call optimization is employed (!).

So I thought that was the best I could get, when another idea came to me. First of all, the artificial max limit (defaulting to 100) seemed awkward. Also, why accumulate all the results if we can compute them on the fly, one after another? This is when I realized that I don't need a Seq or a List – I need an Iterator[Date]!

class TimeIterator(expr: CronExpression, from: Date = new Date) extends Iterator[Date] {
  private var cur = expr getNextValidTimeAfter from

  def hasNext = cur != null

  def next() = if (hasNext) {
    val toReturn = cur
    cur = expr getNextValidTimeAfter cur
    toReturn
  } else {
    throw new NoSuchElementException
  }
}

I spent some time trying to reduce the if true-branch to a one-liner and avoid the intermediate toReturn variable. It's possible, but for clarity (and to spare your eyes) I won't reveal it*. But why an iterator, known to be less flexible and pleasant to use? Well, first of all it allows us to generate next trigger times lazily, so we don't pay for what we don't use. Also, intermediate results aren't stored anywhere, so we save memory as well. And because everything that works for sequences works for iterators too, we can easily work with iterators in Scala, e.g. printing (taking) the first 10 dates:

new TimeIterator(expr) take 10 foreach println

It's tempting to do a little benchmark comparing the different implementations (here using caliper):

object FindTriggerTimesBenchmark extends App {
  Runner.main(classOf[FindTriggerTimesBenchmark], Array("--trials", "1"))
}

class FindTriggerTimesBenchmark extends SimpleBenchmark {

  val expr = new CronExpression("0 0 17 L-3W 6-9 ? *")

  def timeIterative(reps: Int) {
    for (i <- 1 to reps) {
      findTriggerTimesIterative(expr)
    }
  }

  def timeRecursive(reps: Int) {
    for (i <- 1 to reps) {
      findTriggerTimesRecursive(expr)
    }
  }

  def timeTailRecursive(reps: Int) {
    for (i <- 1 to reps) {
      findTriggerTimesTailRecursive(expr)
    }
  }

  def timeUsedIterator(reps: Int) {
    for (i <- 1 to reps) {
      (new TimeIterator(expr) take 100).toList
    }
  }

  def timeNotUsedIterator(reps: Int) {
    for (i <- 1 to reps) {
      new TimeIterator(expr)
    }
  }
}

It seems the implementation changes have negligible impact on running time, since most of the CPU is presumably burnt inside getNextValidTimeAfter().

What have we learnt today?

- Don't think too much about performance unless you really have a problem. Strive for the best design and the simplest implementation.
- Think a lot about the data structures you want to use to represent your problem and solution. In this (trivial at first sight) problem, an Iterator (a lazily evaluated, possibly infinite stream of items) turned out to be the best approach.

* OK, here's how.
Hint: assignment has Unit type and a (Date, Unit) tuple is involved here:

def next() = if (hasNext) (cur, cur = expr getNextValidTimeAfter cur)._1 else throw new NoSuchElementException

Reference: Testing Quartz Cron expressions from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog.
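For completeness, the iterative version ports directly to plain Java for anyone who wants the same pre-flight check outside Scala; a sketch (only the Quartz CronExpression API is used, the class name is made up):

import org.quartz.CronExpression;
import java.text.ParseException;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

public class FireTimes {

    // Returns up to max upcoming fire times of expr, starting after from.
    static List<Date> findTriggerTimes(CronExpression expr, Date from, int max) {
        List<Date> times = new ArrayList<Date>();
        Date next = expr.getNextValidTimeAfter(from);
        while (next != null && times.size() < max) {
            times.add(next);
            next = expr.getNextValidTimeAfter(next);
        }
        return times;
    }

    public static void main(String[] args) throws ParseException {
        CronExpression expr = new CronExpression("0 0 17 L-3W 6-9 ? *");
        for (Date date : findTriggerTimes(expr, new Date(), 10)) {
            System.out.println(date);
        }
    }
}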
Abstract Factory Design Pattern Explained

The Abstract Factory design pattern is another flavor of the Factory design pattern. It can be considered a "super factory", or a "factory of factories". The Abstract Factory design pattern (part of the Gang of Four catalog) falls under the creational design pattern category, and it provides a way to encapsulate a group of factories that have a common link without highlighting their concrete classes. It is all about a factory creating various objects at run time based on user demand. The client remains completely unaware of (decoupled from) which concrete products it gets from each of these individual factories, and the client only accesses a simplified interface.

Definition:

The Abstract Factory design pattern provides an interface for creating families of related or dependent objects without specifying their concrete classes.

Problem Statement:

We will consider our previous example of the Garment Factory and extend it to understand the problem statement for the Abstract Factory. Consider a garment factory specialized in creating trousers and shirts. The parent company, a famous retail brand, is now venturing into the gadget section. It is also planning to expand its factories, with one centre in the US and another in the UK. The client should be completely unaware of how the objects are created. Which design pattern can best meet this requirement?

Solution:

To solve the above design problem we will use the Abstract Factory pattern. As mentioned earlier, this is the super factory. The above problem cannot be resolved efficiently using the Factory Method pattern alone, as it involves multiple factories and products which are related to the parent company or dependent on each other. Note: in design pattern discussions, "abstract" and "interface" may be referred to by the same name.

Structure:

In the structure diagram, the additional items are the extra layer of abstraction through the AbstractFactory, which declares the createProductA() and createProductB() methods. There are multiple ConcreteFactories which implement the methods of the AbstractFactory. The client now accesses only the AbstractFactory interface. The other part is the product side: the client accesses the different abstract product interfaces, AbstractProductA and AbstractProductB. All the ConcreteProducts for the AbstractProducts are created by the ConcreteFactories (ConcreteFactory1 and ConcreteFactory2) according to their logic.

Now let's have a look at our real-life GarmentFactory example and how it differs from the Factory Method pattern. In this example, RetailFactory is the AbstractFactory, which now has multiple concrete factories (UKFactory and USFactory) in different locations, specialized in creating multiple products like Shirt/Laptop and Trouser/Mobile. We have also created an additional class called FactoryMaker, which takes the choice of factory from the client and then delegates the job to the appropriate factory class. The client is completely unaware of how this processing is done and holds references only to the RetailFactory interface and the GarmentType and GadgetType interfaces. This loose coupling also helps when adding more concrete products without much change in the client code; a sketch of this structure follows below.
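A minimal sketch of that arrangement (the class names come from the description above, but the method signatures and the product-to-location mapping are assumptions, since the article presents the design only as a diagram):

// Abstract products the client programs against.
interface GarmentType { String print(); }
interface GadgetType { String print(); }

// Concrete products.
class Shirt implements GarmentType { public String print() { return "Shirt created"; } }
class Trouser implements GarmentType { public String print() { return "Trouser created"; } }
class Laptop implements GadgetType { public String print() { return "Laptop created"; } }
class Mobile implements GadgetType { public String print() { return "Mobile created"; } }

// The Abstract Factory: one creation method per member of the product family.
interface RetailFactory {
    GarmentType createGarment();
    GadgetType createGadget();
}

// Concrete factories, one per location (the mapping below is assumed).
class UKFactory implements RetailFactory {
    public GarmentType createGarment() { return new Trouser(); }
    public GadgetType createGadget() { return new Mobile(); }
}

class USFactory implements RetailFactory {
    public GarmentType createGarment() { return new Shirt(); }
    public GadgetType createGadget() { return new Laptop(); }
}

// FactoryMaker hides even the choice of concrete factory from the client.
class FactoryMaker {
    static RetailFactory getFactory(String location) {
        if ("UK".equalsIgnoreCase(location)) return new UKFactory();
        if ("US".equalsIgnoreCase(location)) return new USFactory();
        throw new IllegalArgumentException("Unknown location: " + location);
    }
}

class Client {
    public static void main(String[] args) {
        // The client sees only the RetailFactory, GarmentType and GadgetType interfaces.
        RetailFactory factory = FactoryMaker.getFactory("UK");
        System.out.println(factory.createGarment().print());
        System.out.println(factory.createGadget().print());
    }
}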
Benefits: Use of this pattern makes it possible to interchange concrete classes without changing the client code, even at runtime.

Drawback: One of the main drawbacks is the extra complexity and code to write during the initial stages.

Do you know?

The Data Access Object pattern in JEE uses the (GoF) Abstract Factory pattern to create various product DAOs from RdbDAOFactory, XmlDAOFactory and OdbDAOFactory.

Interesting points:
- Abstract Factory, Builder, and Prototype can use Singleton in their implementations.
- Abstract Factory is often used along with Factory Method, but can also be implemented using Prototype to increase performance and simplify the code.
- Abstract Factory can be used as an alternative to Facade to hide platform-specific classes.
- The AbstractFactory class declares only an interface for creating the products. The actual creation is the task of the ConcreteFactory classes, where a good approach is to apply the Factory Method design pattern for each product of the family.

Difference between Abstract Factory and Factory Method pattern:
- The Factory Method pattern exposes a method to the client for creating an object, whereas an Abstract Factory exposes a family of related objects, which may be built from such factory methods.
- Designs start out using Factory Method (less complicated, more customizable, subclasses proliferate) and evolve toward Abstract Factory, Prototype, or Builder (more flexible, more complex) as the designer discovers where more flexibility is needed.
- The Factory Method pattern hides the construction of a single object, whereas the Abstract Factory pattern hides the construction of a family of related objects.
- Abstract factories are usually implemented using (a set of) factory methods.

Reference: Abstract Factory Design Pattern Explained from our JCG partner Mainak Goswami at the Idiotechie blog.
JavaOne 2012 Coverage

JavaOne, the annual Java extravaganza conference, took place from 30 September to 4 October in San Francisco. Numerous interesting presentations took place, proving once again a healthy Java ecosystem. Java Code Geeks could not make it to the conference, but our JCG partner Dustin Marx was there and was generous enough to provide full coverage of the event, writing several articles about it. Some of those articles were republished by Java Code Geeks. In this post I am making a list of all the relevant articles, published either on Java Code Geeks or on Dustin's blog. So, here is the list:

- Java Strategy Keynote and IBM Keynote
- JavaOne Technical Keynote
- How Do Non-Blocking Data Structures Work?
- The Road to Lambda
- A Walk Through of Groovy's AST Transformations
- Looking into the JVM Crystal Ball
- NetBeans – Project Easel
- NetBeans.Next – The Roadmap Ahead
- 101 Ways to Improve Java – Why Developer Participation Matters
- Early General Impressions
- From Instants to Eras, the Future of Java
- Modern Web Development with Play Framework 2.0
- Build Your Own Type System for Fun and Profit
- Scala Tricks
- JavaFX on Smart Embedded Devices
- Building Mobile Apps with HTML5 and Java [Tiggzi]
- Custom Static Code Analysis [NetBeans]
- Griffon, Up Close and Personal
- JSR 353: Java API for JSON Processing
- JavaFX Graphics Tips and Tricks
- What's New in Scala 2.10
- What's New in Groovy 2.0
- Diagnosing Your Application on the JVM
- Community Keynote
- Up, Up, and Out: Scaling Software with Akka
- Mastering Java Deployment
- Getting Started with the NetBeans Platform
- Introduction to Ceylon
- Observations and Impressions

I hope you enjoyed it! Don't forget to share!
Transition to Agile, Large Technical Debt, Small Project

Many months ago, Rebecca asked an interesting question about technical debt in projects: how do you start when there's a really big mess? In that case, small, "just being a professional" clean-up acts may not even make a dent. Of course, as with any good question, the answer is: it depends. And the biggest flavor of "depends" is whether the project is large or small, and whether the project is collocated or distributed. For the sake of argument, let's assume it's small and collocated.

When you transition to agile and you have a reasonably sized codebase, chances are quite good that you've been working on the product for a while. You have legacy code. You might have legacy tests. You certainly have legacy ways of thinking about the code and the tests. How do you work yourself out of the technical debt you have accumulated over time? Can you approach the work in the way I outlined in Thoughts on Infrastructure, Technical Debt, and Automated Test Framework? Yes and no.

Let's assume that for some small set of features you can eat some small pieces of debt. And let's assume you have so much debt that there are some areas of technical debt you just do not want to touch, or that if you touch them, you know you are going to wade into quicksand. You know you have to create more tests than you have "time" to create. That is, the size of the story is significantly smaller than the size of the debt, even with swarming. Now, what do you do?

You tell people. You tell the product owner. You tell your colleagues. Me, I would probably write some tests for the code anyway, because I would want to know the next time I wade into quicksand. But I have much more gray-hair experience now than I did when I was younger, so I make different choices about the product. If I were a project manager for this project, I would want to know, because I would want to manage the risk. In my experience, that much technical debt affects a team's ability to produce features. And if I were a product owner, I would want to know, because the technical debt would affect our ability to do anything with the product.

This is a case where you might want to consider having three backlogs as a funnel into one backlog for an iteration. Read Might Three Backlogs Be Better Than One?, and make sure to read all the comments; they are quite insightful. The idea is that you still have only one backlog for an iteration, but you have visibility into all the work you have to do that you want to rank for the product.

It's easy in hindsight to say, "Don't get into that situation." Well, duh. But organizations are in this situation, and they need help. I still think the best answer for paying off technical debt is to work in small features and pay off the debt as you find it. That way you never pay off more than you need to, you never do more architecture work than you need to, and you never have this strange backlog issue. On the other hand, some people don't realize how much debt they have, and anything that helps them see what they have is useful. But maybe there is a better way. Maybe you have a better way?

Let me summarize:
- Pay off technical debt when you implement a story, if you can.
- Swarm to start and finish a story. This will help you avoid and pay off debt.
- Write more tests to expose the debt, so no one is surprised in the future.
- Expose the debt by creating a debt backlog so the debt can be ranked in preparation for iteration planning.
- When planning an iteration, take the top item off the debt backlog.
- Do a pairwise comparison of that item with the top item on the feature backlog. Which item has more value? Put that item on the iteration backlog. Continue until the team says, "Stop, we cannot do more in this iteration."
- Your very last solution is rearchitecting. Why? Because it prevents you from making progress on the project. Read Startup Suicide – Rewriting the Code. It's not just suicide for startups.

Always make sure the technical debt is visible. That is key to managing it. Whether you like my solution(s) or not, make the debt visible. And if you don't like any of my ideas, please do comment. Heck, comment if you do like them. I would love to know what you think.

Reference: Transition to Agile, Large Technical Debt, Small Project from our JCG partner Johanna Rothman at the Managing Product Development blog.
You can’t Refactor your way out of every Problem

Refactoring is a disciplined way to clarify, retain or restore the design of a system as you make changes, and to help clean up and correct the mistakes and mess that we all make as we work: to clear away the evidence of false starts, changes in direction and backtracking, and to help fill in gaps and misunderstandings. As a colleague of mine has pointed out, you can get a lot out of even the most simple and obvious refactoring changes: eliminating duplication, changing variable and method names to be more meaningful, extracting methods, simplifying conditional logic, replacing a magic number with a named constant. These are easy things to do, and they give you a big return in understandability and maintainability. But refactoring has limitations – there are some problems that refactoring won't solve.

Refactoring can't help you if the design is fundamentally wrong

Some people naively believe that you can refactor your way out of any design mistake or misunderstanding – and that you can use refactoring as a substitute for upfront design. This assumes that you will be able to immediately recognize mistakes and gaps from customer feedback and correct the design as you are developing. But it can take a long time – usually only once the system is being used in the real world by real customers to do real things – before you learn how wrong you actually were and how much you missed and misunderstood; exceptions, edge cases and defects pile up before you finally understand (or accept) that no, the design doesn't hold up, you can't just keep on extending it and patching what you have – you need a different set of abstractions or a different architecture entirely.

Refactoring helps you make course corrections. But what if you find out that you've been driving the entire time in the wrong direction, or in circles? Barry Boehm, in Balancing Agility and Discipline, explains that starting simple and refactoring your way to the right answer sometimes falls down:

"Experience to date also indicates that low-cost refactoring cannot be depended upon as projects scale up. The most serious problems that arise with simple design are problems known as 'architecture breakers'. These highly expensive problems can occur when early, simple design decisions result in foreseeable changes that cause breakage in design beyond the ability of refactoring to handle."

This is another argument in the "Refactor or Design" holy war over how much design should be, or needs to be, done upfront, and how much can be filled in as you go through incremental change and refactoring.

Deep Decisions

Many design ideas can be refined, elaborated, iterated and improved over time, and refactoring will help you with this. But some early decisions on approach, packaging, architecture and technology platform are too fundamental and too deep to change or correct with refactoring. You can use refactoring to replace in-house code with standard library calls, or to swap one library for another – doing the same thing in a different way. Making small design changes and cleaning things up as you go with refactoring can be used to extend or fill in gaps in the design and to implement cross-cutting features like logging and auditing, even access control and internationalization – this is what the XP approach to incremental design is all about.
But making small-scale design changes and improvements to code structure, extracting and moving methods, simplifying conditional logic and getting rid of case statements isn't going to help you if your architecture won't scale, or if you chose the wrong approach (like SOA), or the wrong application framework (J2EE with Enterprise Java Beans, any multi-platform UI framework, or any of the early O/R mapping frameworks – remember the first release of TopLink? – or something that you rolled yourself before you understood how the language actually worked), or the wrong language (if you found out that Ruby or PHP won't scale), or a core platform middleware technology that proves to be unreliable, that doesn't hold up under load, or that has been abandoned – or if you designed the system for the wrong kind of customer and need to change pretty much everything.

Refactoring to Patterns and Large Refactorings

Joshua Kerievsky's work on Refactoring to Patterns provides higher-level composite refactorings to improve – or introduce – structure in a system, by properly implementing well-understood design patterns such as factories, composites and observers, replacing conditional logic with strategies, and so on. Refactoring to Patterns helps with cleaning up and correcting problems like "duplicated code, long methods, conditional complexity, primitive obsession, indecent exposure, solution sprawl, alternative classes with different interfaces, lazy classes, large classes, combinatorial explosions and oddball solutions".

Lippert and Roock's work on Large Refactorings explains how to take care of common architectural problems in and between classes, packages, subsystems and layers: doing makeovers of ugly inheritance hierarchies, reducing coupling between modules, cleaning up dependency tangles and correcting violations between architectural layers – the kind of things that tools like Structure101 help you to see and understand. They have identified a set of architectural smells and refactorings to correct them:

- Smells in dependency graphs: visible dependency graphs, tree-like dependency graphs, cycles between classes, unused classes
- Smells in inheritance hierarchies: parallel inheritance hierarchies, list-like inheritance hierarchies, inheritance hierarchies without polymorphic assignments, inheritance hierarchies too deep, subclasses without redefinitions
- Smells in packages: unused packages, cycles between packages, too small/large packages, packages unclearly named, packages too deep or nesting unbalanced
- Smells in subsystems: subsystem overgeneralized, subsystem API bypassed, subsystem too small/large, too many subsystems, no subsystems, subsystem API too large
- Smells in layers: too many layers, no layers, strict layers violated, references between vertically separate layers, upward references in layers, inheritance between protocol-oriented layers (coupling)

Composite refactorings and large refactorings raise refactoring to higher levels of abstraction and usefulness, and show you how to identify problems on your own and how to come up with your own refactoring patterns and strategies. But refactoring to patterns, or even large-scale refactoring, still isn't enough to unmake or remake deep decisions, or to change the assumptions underlying the design and architecture of the system. Or to salvage code that isn't safe to refactor, or worth refactoring.
Sometimes you need to rewrite, not refactor

There is no end of argument over how bad code has to be before you should give up and rewrite it rather than trying to refactor your way through it. The best answer seems to be that refactoring should always be your first choice, even for legacy code that you didn't write, don't understand and can't test (there is an entire book written on how and where to start refactoring legacy apps). But if the code isn't working, or is so unstable and so dangerous that trying to refactor it only introduces more problems; if you can't refactor or even patch it without creating new bugs; or if you need to refactor too much of the code to get it into acceptable shape (I've read somewhere that 20% is a good cut-off, but I can't find the reference), then it's time to declare technical bankruptcy and start again. Rewriting the code from scratch is sometimes your only choice. Some code shouldn't be – or can't be – saved.

"Sometimes code doesn't need small changes—it needs to be tossed out so that you can start over. If you find yourself in a major refactoring session, ask yourself whether instead you should be redesigning and reimplementing that section of code from the ground up." – Steve McConnell, Code Complete

You can use refactoring to restore, repair, clean up or adapt the design or even the architecture of a system. Refactoring can help you go back and make corrections, reduce complexity, and fill in gaps. It will pay dividends in reducing the cost and risk of ongoing development and support. But refactoring isn't enough if you have to reframe the system – if you need to do something fundamentally different, or in a fundamentally different way – or if the code isn't worth salvaging. Don't get stuck believing that refactoring is always the right thing to do, or that you can refactor yourself out of every problem.

Reference: You can't Refactor your way out of every Problem from our JCG partner Jim Bird at the Building Real Software blog.