
FactoryPal: New Scala framework for creating objects as test data

FactoryPal is a Scala framework that lets you create objects as test data. All you have to do is define a template for each of the classes that you want FactoryPal to create objects from. After that, FactoryPal takes care of the rest. Have you ever heard of factory_girl, a super cool Ruby framework? Well, FactoryPal is factory_girl for Scala, and it is pretty similar in its use. The difference is that FactoryPal is 100% type safe, which all of us Scala people love. Here is a link to GitHub for the anxious: https://github.com/mgonto/factory_pal

How do we use this? FactoryPal is a singleton object where you can register all of the templates. For example, you can define a template as follows:

```scala
FactoryPal.register[Person] { person =>
  person.name.mapsTo("gonto") and
  person.age.isRandom
}
```

In this example, we register a new template for the Person class. If we try to set a value for a property that Person doesn't have, the project won't compile. If we try to set a value whose type doesn't match the property's type, the project won't compile either. Pretty cool, huh? This is possible thanks to Scala macros and Dynamic, two features added in the latest Scala 2.10 RC release.

For the time being, there are three supported operations on a field template:

- mapsTo: sets a specific value on that property.
- isRandom: sets a random value based on the type of the field. I've created some implicit randomizers for common types (String, Long, Int, Double, etc.) but you can create your own. This is pretty similar to Ordering[T] as used in List.
- isAnotherFactoryModel: tells FactoryPal that this is an inner object that can be constructed with another FactoryPal template.

For the time being, there can only be one template for each class.
I'm going to change this very soon. After we have created the template, we can instantiate objects from it as follows:

```scala
val person = FactoryPal.create[Person]
```

The create method has another overload that lets you add some field overrides for a certain test. For example, you can do the following:

```scala
val person = FactoryPal.create[Person] { (person: ObjectBuilder[Person]) =>
  person.age.mapsTo(45) alone
}
```

And that's it. That's all you need to know to use this.

How can I add this to my project? This is an example Build.scala configuration for your SBT project. There are only snapshots for now, as Scala 2.10 is not yet final. Once it is, I'm going to make a release.

```scala
import sbt._
import sbt.Keys._

object ApplicationBuild extends Build {

  lazy val root = Project(
    id = "factory_pal_sample",
    base = file("."),
    settings = Project.defaultSettings ++ Seq(
      name := "factory_pal_sample",
      organization := "ar.com.gonto",
      version := "0.1",
      scalaVersion := "2.10.0-RC3",
      scalacOptions += "",
      licenses := ("Apache2",
        new java.net.URL("http://www.apache.org/licenses/LICENSE-2.0.txt")) :: Nil,
      libraryDependencies ++= Seq(
        "org.scala-lang" % "scala-compiler" % "2.10.0-RC3",
        "ar.com.gonto" % "factory_pal_2.10" % "0.1-SNAPSHOT",
        "org.scalatest" % "scalatest_2.10.0-RC3" % "1.8-B1" % "test"
      ),
      resolvers ++= Seq(
        "Typesafe repository" at "http://repo.typesafe.com/typesafe/releases/",
        Resolver.url("Factory Pal Repository",
          url("http://mgonto.github.com/snapshots/"))(Resolver.ivyStylePatterns)
      )
    )
  )
}
```

Take a look at the dependency and the repository!

What does it use internally? Internally, this framework uses Scala macros, Dynamic and the new reflection library provided by Scala 2.10.

Next steps. The next things I want to do are:

- Add the possibility to have multiple templates for one class
- Add template inheritance
- Add helpers to use this with ScalaTest and Specs2.
For the moment, you can create the templates in the before block. For more information, or to take a look at the code, go to GitHub.

Reference: FactoryPal: New Scala framework for creating objects as test data from our JCG partner Sebastian Scarano at the Having fun with Play framework! blog.
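The register/create idea is not Scala-specific, even if the compile-time safety is. For Java readers, here is a minimal sketch of the same pattern using suppliers; TinyFactory, Person and all other names below are invented for the sketch and are not part of FactoryPal, and unlike FactoryPal this gives no compile-time checking of properties:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class TinyFactory {
    // one template per class, looked up by its Class token
    private static final Map<Class<?>, Supplier<?>> templates = new HashMap<>();

    public static <T> void register(Class<T> type, Supplier<T> template) {
        templates.put(type, template);
    }

    @SuppressWarnings("unchecked")
    public static <T> T create(Class<T> type) {
        return ((Supplier<T>) templates.get(type)).get();
    }

    static class Person {
        final String name;
        final int age;
        Person(String name, int age) { this.name = name; this.age = age; }
    }

    public static void main(String[] args) {
        // register a template once...
        register(Person.class, () -> new Person("gonto", (int) (Math.random() * 100)));
        // ...then stamp out fresh instances wherever test data is needed
        Person p = create(Person.class);
        System.out.println(p.name + " created, age in range: " + (p.age >= 0 && p.age < 100));
    }
}
```

The point of the pattern, in either language, is that tests ask for "a Person" and never repeat construction boilerplate.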

A Selenium/WebDriver example in Java

A couple of years back, I was pitching for some work and the client wanted to see how I would tackle a real world problem. They asked me to automate some tasks on the woot.com web site. The task was to go to various woot web sites and read the product name and price of the offer of the day. I wrote a little bit of Selenium code and thought I'd post it here in case any of it is useful to anyone. I got the job, so it can't be too bad.

First up, I defined an interface to represent a woot page:

```java
package uk.co.doogle;

import com.thoughtworks.selenium.Selenium;

/**
 * This interface defines the methods we must implement for classes
 * of type Woot. Woot web sites have one item for sale every 24 hours.
 * @author Tony
 */
public interface Woot {

    /**
     * Gets the price of the item for sale on a Woot website.
     * @param selenium the selenium object we pass in, used to interact
     *                 with the browser/web page
     * @return String representation of the price of the item for sale
     */
    public String getPrice(Selenium selenium);

    /**
     * Gets the product name of the item for sale on a Woot website.
     * @param selenium the selenium object we pass in, used to interact
     *                 with the browser/web page
     * @return String representation of the product name of the item for sale
     */
    public String getProductName(Selenium selenium);
}
```

Then I implemented this interface a few times to represent the actual behaviour of the various woot pages; here, for example, is the wine woot page:

```java
public class WineWoot extends BaseWoot {

    /**
     * Constructor.
     * @param url the url of the web site
     */
    public WineWoot(String url) {
        super(url);
    }

    /**
     * Gets the price of the item for sale on the Woot web site.
     */
    public String getPrice(Selenium selenium) {
        // if you need to update the xpath to the piece of text of
        // interest, use the xPather firefox plugin
        String xPath = "//html/body/header/nav/ul/li[8]/section/div/a/div[3]/span";
        selenium.waitForCondition(
                "selenium.isElementPresent(\"xpath=" + xPath + "\");", "12000");
        return selenium.getText(xPath) + " ";
    }

    /**
     * Gets the product name of the item for sale on the Woot web site.
     */
    public String getProductName(Selenium selenium) {
        String xPath = "//html/body/header/nav/ul/li[8]/section/div/a/div[2]";
        selenium.waitForCondition(
                "selenium.isElementPresent(\"xpath=" + xPath + "\");", "12000");
        return selenium.getText(xPath) + " ";
    }
}
```

Note: back then I used the xPather plugin. It doesn't work with recent versions of Firefox, so now I use Firebug.

Then I wrote the actual 'test':

```java
package uk.co.doogle;

import com.thoughtworks.selenium.*;
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.util.ArrayList;
import java.util.List;

/**
 * This class is where we define tests of the Woot web sites.
 * @author Tony
 */
public class TestWoots extends SeleneseTestCase {

    /** Output stream for our results file. */
    private BufferedWriter out;

    /** Our list of Woot web sites we want to test. */
    private List<BaseWoot> sites = new ArrayList<BaseWoot>();

    /**
     * Set-up needed before our test(s) run. Here we add the list of Woot
     * web sites we want to test and create an output stream ready to
     * write results to file.
     */
    public void setUp() throws Exception {
        sites.add(new BaseWoot("http://www.woot.com/"));
        sites.add(new ShirtWoot("http://shirt.woot.com/"));
        sites.add(new WineWoot("http://wine.woot.com/"));
        try {
            // let's append to our file...
            FileWriter fstream = new FileWriter("out.csv", true);
            out = new BufferedWriter(fstream);
            out.write("Site, Product Name, Product Price");
            out.newLine();
        } catch (Exception e) {
            System.err.println("Error creating a file to write our results to: "
                    + e.getMessage());
        }
    }

    /**
     * Tests getting the item name and price for the item for sale on each
     * Woot web site we test. We see the results in std out in the form of
     * a table, and we also write the results to a csv file. If there are
     * any errors getting the information, this is displayed instead.
     *
     * How to run me: open a command prompt and, from the directory where
     * the selenium server is located, type:
     * java -jar selenium-server-standalone-2.0b3.jar (or equivalent)
     * and wait for the server to start up. Then just run this unit test.
     */
    public void testGetItemsAndPrices() throws Exception {
        // for each Woot site in our list of sites we want to test
        for (BaseWoot woot : sites) {
            // use a try/catch block as we want to try ALL the sites -
            // some may be down or slow...
            try {
                selenium = new DefaultSelenium("localhost", 4444, "*firefox",
                        woot.getUrl());
                selenium.start();
                selenium.open("/");
                selenium.waitForPageToLoad("50000");
                // add a new row for our table to std out
                System.out.println();
                // print the information we need - the site, the title of
                // the item for sale and the price
                String siteUrl = woot.getUrl();
                String productName = woot.getProductName(selenium);
                String productPrice = woot.getPrice(selenium);
                // sometimes there are commas which mess up our csv file -
                // so we substitute them with ;
                productName = productName.replace(",", ";");
                System.out.print("website: " + siteUrl + " ");
                System.out.print("product name: " + productName);
                System.out.print("price: " + productPrice);
                out.write(siteUrl + ", " + productName + ", " + productPrice);
                out.newLine();
            } catch (Exception ex) {
                // here we may see that the web site under test has changed,
                // and the xpath to the price or product name may need to be
                // changed in the Woot class
                System.out.print("problem getting the data for: " + woot.getUrl()
                        + " " + ex.getMessage() + " ");
            } finally {
                selenium.stop();
            }
        }
    }

    /**
     * Tear-down to clean up after our test(s). Here we just stop selenium
     * and close the output stream.
     */
    public void tearDown() throws Exception {
        selenium.stop();
        out.close();
    }
}
```

I know this code worked for a couple of years, and I have made some minor changes to get it to work with the current woot.com web sites: all I had to do was get the latest selenium-server-standalone.jar so it works with the latest Firefox, and update the xpaths to the price and product name information. A good improvement would be to make the code data driven, so that we could just update the xpaths in a config file rather than changing the hard-coded ones I have used here. That was the only feedback from the client, actually.

Reference: A Selenium/WebDriver example in Java from our JCG partner Tony Dugay at the Doogle Ltd blog.
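The data-driven improvement mentioned above can be as small as a properties file of locators that the page classes look up by key. A minimal sketch, assuming a standard java.util.Properties file (the file name and keys here are invented; in a real project the string would be loaded from a .properties file on the classpath):

```java
import java.io.StringReader;
import java.util.Properties;

public class XPathConfig {
    public static void main(String[] args) throws Exception {
        // In a real project these lines would live in a woot-xpaths.properties
        // file next to the tests; inlined here so the sketch is runnable.
        String config =
                "wine.price=//html/body/header/nav/ul/li[8]/section/div/a/div[3]/span\n" +
                "wine.productName=//html/body/header/nav/ul/li[8]/section/div/a/div[2]\n";

        Properties xpaths = new Properties();
        xpaths.load(new StringReader(config));

        // When a site changes, only the properties file needs editing -
        // the page classes just look their locators up by key.
        System.out.println("price xpath ends with: "
                + xpaths.getProperty("wine.price").endsWith("span"));
    }
}
```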

Local variables inside a loop and performance

Overview

Sometimes a question comes up about how much work allocating a new local variable takes. My feeling has always been that such code becomes optimised to the point where this cost is static, i.e. paid once, not each time the code is run. Recently Ishwor Gurung suggested considering moving some local variables outside a loop. I suspected it wouldn't make a difference, but I had never tested to see if this was the case.

The test

This is the test I ran:

```java
public static void main(String... args) {
    for (int i = 0; i < 10; i++) {
        testInsideLoop();
        testOutsideLoop();
    }
}

private static void testInsideLoop() {
    long start = System.nanoTime();
    int[] counters = new int[144];
    int runs = 200 * 1000;
    for (int i = 0; i < runs; i++) {
        int x = i % 12;
        int y = i / 12 % 12;
        int times = x * y;
        counters[times]++;
    }
    long time = System.nanoTime() - start;
    System.out.printf("Inside: Average loop time %.1f ns%n", (double) time / runs);
}

private static void testOutsideLoop() {
    long start = System.nanoTime();
    int[] counters = new int[144];
    int runs = 200 * 1000, x, y, times;
    for (int i = 0; i < runs; i++) {
        x = i % 12;
        y = i / 12 % 12;
        times = x * y;
        counters[times]++;
    }
    long time = System.nanoTime() - start;
    System.out.printf("Outside: Average loop time %.1f ns%n", (double) time / runs);
}
```

and the output ended with:

```
Inside: Average loop time 3.6 ns
Outside: Average loop time 3.6 ns
Inside: Average loop time 3.6 ns
Outside: Average loop time 3.6 ns
```

Increasing the test to 100 million iterations made little difference to the results.
```
Inside: Average loop time 3.8 ns
Outside: Average loop time 3.8 ns
Inside: Average loop time 3.8 ns
Outside: Average loop time 3.8 ns
```

Replacing the modulus and multiplication with >>, & and +:

```java
int x = i & 15;
int y = (i >> 4) & 15;
int times = x + y;
```

prints:

```
Inside: Average loop time 1.2 ns
Outside: Average loop time 1.2 ns
Inside: Average loop time 1.2 ns
Outside: Average loop time 1.2 ns
```

While modulus is relatively expensive, the resolution of the test is 0.1 ns, or less than 1/3 of a clock cycle, so any difference between the two variants would show up to that accuracy.

Using Caliper

As @maaartinus comments, Caliper is a micro-benchmarking library, so I was interested in how much slower it might be than timing the code by hand.

```java
public static void main(String... args) {
    Runner.main(LoopBenchmark.class, args);
}

public static class LoopBenchmark extends SimpleBenchmark {

    public void timeInsideLoop(int reps) {
        int[] counters = new int[144];
        for (int i = 0; i < reps; i++) {
            int x = i % 12;
            int y = i / 12 % 12;
            int times = x * y;
            counters[times]++;
        }
    }

    public void timeOutsideLoop(int reps) {
        int[] counters = new int[144];
        int x, y, times;
        for (int i = 0; i < reps; i++) {
            x = i % 12;
            y = i / 12 % 12;
            times = x * y;
            counters[times]++;
        }
    }
}
```

The first thing to note is that the code is shorter, as it doesn't include the timing and printing boilerplate. Running this I get, on the same machine as the first test:
```
 0% Scenario{vm=java, trial=0, benchmark=InsideLoop} 4.23 ns; σ=0.01 ns @ 3 trials
50% Scenario{vm=java, trial=0, benchmark=OutsideLoop} 4.23 ns; σ=0.01 ns @ 3 trials

 benchmark   ns linear runtime
InsideLoop 4.23 ==============================
OutsideLoop 4.23 =============================

vm: java
trial: 0
```

Replacing the modulus with shift and and:

```
 0% Scenario{vm=java, trial=0, benchmark=InsideLoop} 1.27 ns; σ=0.01 ns @ 3 trials
50% Scenario{vm=java, trial=0, benchmark=OutsideLoop} 1.27 ns; σ=0.00 ns @ 3 trials

 benchmark   ns linear runtime
InsideLoop 1.27 =============================
OutsideLoop 1.27 ==============================

vm: java
trial: 0
```

This is consistent with the first result: only about 0.4 - 0.6 ns slower (about two clock cycles) for the modulus test, and next to no difference for the shift, and, plus test. This may be due to the way Caliper samples the data, but it doesn't change the outcome.

It is worth noting that when running real programs you typically get longer times than in a micro-benchmark, as the program will be doing more things, so caching and branch prediction are not as ideal. A small over-estimate of the time taken may be closer to what you can expect to see in a real program.

Conclusion

This indicated to me that in this case it made no difference. I still suspect the cost of allocating local variables is paid once, when the code is compiled by the JIT, and that there is no per-iteration cost to consider.

Reference: Local variables inside a loop and performance from our JCG partner Peter Lawrey at the Vanilla Java blog.
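As an aside, the shift-and-mask substitution used in the faster benchmark relies on the divisor being a power of two: for non-negative i, i % 16 equals i & 15 and i / 16 equals i >> 4. A quick check of those identities:

```java
public class BitTrickCheck {
    public static void main(String[] args) {
        boolean ok = true;
        for (int i = 0; i < 100_000; i++) {
            // for non-negative i and a power-of-two divisor d,
            // i % d == (i & (d - 1)) and i / d == (i >> log2(d))
            ok &= (i % 16) == (i & 15);
            ok &= (i / 16) == (i >> 4);
        }
        System.out.println("equivalences hold: " + ok);
    }
}
```

Note the identities do not hold for negative values of i, which is one reason the JIT cannot always perform this strength reduction for you.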

Weak, Weaker, Weakest, Harnessing The Garbage Collector With Specialist References

When and when not to use specialist references in Java? Weak, soft and phantom references are dangerous and powerful. If they are used the wrong way they can destroy JVM performance; however, used the correct way they can substantially enhance performance and program clarity.

Weak and soft references are the more obvious of the three. They are pretty much the same thing, actually! The idea is simply that they can be used to access an object but will not prevent that object being reclaimed by the garbage collector:

```java
Object y = new Object(); // y is a hard reference to the object,
                         // so that object cannot be reclaimed.

WeakReference<Object> x = new WeakReference<Object>(y);
// now x is a weak reference to the object
// (not to y - as y is just a variable).
// The object still cannot be reclaimed,
// because y is still a hard reference to it.

y = null; // now there is only a weak reference to the object;
          // it is eligible for garbage collection.

if (x.get() == null) {
    System.out.println("The object has gone away");
} else {
    System.out.println("The object is " + x.get().toString());
}
```

Have you spotted the deliberate mistake? It is an easy one to miss, and it will probably not show up in unit testing. It is exactly the sort of issue which makes me say: only use weak/soft references if you absolutely have to, and probably not even then. When the JVM is under memory pressure it might reclaim the object between the first and second invocations of the get method on the weak reference. This would result in the program throwing a NullPointerException when toString is invoked on null. The correct form for the code is:

```java
Object z = x.get(); // Now we have either null or a hard reference
                    // to the object
if (z == null) {
    System.out.println("The object has gone away");
} else {
    System.out.println("The object is " + z.toString());
}
```

So they are mad, bad and dangerous to be with; why do we want them? We have not fully touched on why they are really, really dangerous yet.
To do that we need to see why we might want them and why we might need them. There are two common situations in which weak and soft references might seem like a good idea (we will look at the difference between soft and weak in a little bit). The first of these is some form of RAM cache. It works like this: we have some data, for example customer details, which is stored in a database. We keep looking it up, and that is slow. What we can do is cache that data in RAM. However, eventually the RAM will fill up with names and addresses and the JVM will throw an OutOfMemoryError. The solution is to store the names and addresses in objects which are only weakly reachable. Something like this:

```java
ConcurrentHashMap<String, WeakReference<CustomerInfo>> cache =
        new ConcurrentHashMap<>();
...
WeakReference<CustomerInfo> ref = cache.get(customerName);
CustomerInfo currentCustomer = (ref == null) ? null : ref.get();
if (currentCustomer == null) {
    currentCustomer = reloadCachesEntry(customerName);
}
```

This innocent little pattern is quite capable of bringing a monster sized JVM to its knees. The pattern uses the JVM's garbage collector to manage an in-memory cache, and the garbage collector was never designed to do that. The pattern abuses the garbage collector by filling up the memory with weakly reachable objects which run the JVM out of heap space. When the JVM gets low on memory, it has to traverse all the references (weak, soft and otherwise) in its heap and reclaim RAM. This is expensive and shows up as a processing cost. It is even worse on very big JVM instances with a lot of processor cores, because the garbage collector may well end up having to perform a 'stop the world' full cycle and hence reduce performance to single core levels! I am not saying in-memory cache technology is a bad idea. Oh no, it is a great idea. However, just throwing it against the garbage collector and not expecting trouble is a very poor design choice.

Weak vs soft: what is the difference? Well, there is not much, really.
On some JVMs (the client HotSpot JVM, for example, but that might change at any time) weak references are marked for preferential garbage collection. In other words, the garbage collector should make more effort to reclaim memory from the object graph to which they refer (and to which no soft or hard references refer) than from other memory. Soft references do not carry this idea. However, this is just an optional behaviour on some JVMs; it cannot be relied upon, and relying on it is a bad idea anyway. I would suggest using either soft or weak references all the time and sticking with it. Pick whichever you like the sound of; I prefer the name WeakReference, so I tend to use that.

There is one other difference: an object which is referred to by a soft reference and a weak reference, but not a hard reference, can be in a state where it can still be acquired from the .get() method of the weak reference but not from that of the soft reference. The reverse is not possible. Code that relies on this behaviour is probably wrong-headed.

Good uses for weak references do exist. What weak references are great for is keeping track of objects which are being used elsewhere. An example is from Sonic Field (an audio processing package). In this example, 'slots' in files contain audio data and are associated with objects in memory. The model does not use the weak references to refer to in-memory copies of the data; the in-memory objects use the slots, and weak references allow the file management system to reuse slots. The code using slots does not need (and should not need) to be concerned with the management of disk space; that is the concern of the file manager. The file manager holds weak references to the objects using the slots. When a new slot is requested, the file manager checks for any existing slots referred to via weak references which have been reclaimed (and hence return null from the get method). If it finds such a reference, it can reuse the slot.
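The slot-reuse idea, together with the "call get() exactly once" rule from earlier, can be sketched as follows. FileSlot is invented for the sketch, and the explicit clear() call stands in for the garbage collector (a real collection cannot be forced reliably in a demo):

```java
import java.lang.ref.WeakReference;

public class SlotReuseSketch {
    static class FileSlot {
        final int index;
        FileSlot(int index) { this.index = index; }
    }

    public static void main(String[] args) {
        FileSlot inUse = new FileSlot(0);
        WeakReference<FileSlot> tracker = new WeakReference<>(inUse);

        // Correct usage: call get() once, keep the hard reference, then
        // null-check. Calling get() twice risks an NPE if the collector
        // runs between the two calls.
        FileSlot slot = tracker.get();
        System.out.println(slot == null ? "slot 0 free" : "slot " + slot.index + " busy");

        // clear() simulates the collector reclaiming the referent once the
        // last hard reference is gone.
        inUse = null;
        tracker.clear();
        slot = tracker.get();
        System.out.println(slot == null ? "slot 0 free" : "slot " + slot.index + " busy");
    }
}
```

A file manager polling such trackers sees null and knows the slot can be handed out again.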
Automatic notification of reclamation

Sometimes we might want to be told when a weak or soft (or the other sort, phantom) reference has been reclaimed. This can be done via the enqueuing system, using a reference queue:

```java
WeakReference(T referent, ReferenceQueue<? super T> q)
```

We do something like this:

```java
ReferenceQueue<FileSlot> junkQ = new ReferenceQueue<>();
....
WeakReference<FileSlot> mySlot = new WeakReference<>(aSlot, junkQ);
....
// In a different thread - make sure it is a daemon!
WeakReference<FileSlot> isDead;
while (true) {
    isDead = (WeakReference<FileSlot>) junkQ.remove();
    // Take some action based on the fact it is dead.
    // But - it might not be dead - see the end of this post :(
    ...
}
```

But remember: by the time the weak reference ends up on the junkQ, calling .get() on it will return null. So you will have to store the information needed for whatever action you are interested in somewhere else (like a ConcurrentHashMap where the reference is the key).

So what is a phantom reference? Phantom references are the one sort which, when you need them, you really need them. But on the face of it, they seem utterly useless. You see, whenever you invoke .get() on a phantom reference, you always get null back. It is not possible to use a phantom reference to get to the object to which it refers. Ever. Well, that is not quite true: we can sometimes achieve this via JNI, but we never should.

Consider the situation where you allocate native memory in JNI associated with a Java object. This is the sort of model which the DirectBuffers in the nio package of the JDK use, and it is something I have used repeatedly in large commercial projects. So, how do we reclaim that native memory? In the case of file-like systems, it is possible to say that the memory is not reclaimed until the file is closed. This places the responsibility for resource management on the shoulders of the programmer, which is exactly what the programmer expects for things like files.
However, for lighter weight objects, we programmers do not like to have to think about resource management; the garbage collector is there to do it for us. We could place code in a finalizer which calls into the JNI code to reclaim the memory. This is bad (as in lethal), because JVMs make almost no guarantee that they will call finalizers. So, don't do that! But phantom references come to the rescue.

First we need to understand 'phantom reachable': a phantom reference will only become enqueued if the thing to which it refers cannot be reached via any other sort of reference (hard, weak or soft). At this point the phantom reference can be enqueued. If the object had a finalizer, it will either have been ignored or run, but it will not have 'brought the object back to life'. Phantom reachable objects are 'safe' for JNI native code (or any other code) to reclaim resources against. So our code with phantom references can look like this:

```java
ReferenceQueue<FileSlot> junkQ = new ReferenceQueue<>();
....
PhantomReference<FileSlot> mySlot = new PhantomReference<>(aSlot, junkQ);
....
// In a different thread - make sure it is a daemon!
PhantomReference<FileSlot> isDead;
while (true) {
    isDead = (PhantomReference<FileSlot>) junkQ.remove();
    long handle = lookUpHandle(isDead);
    cleanNativeMemory(handle);
}
```

In this pattern we keep a handle which the native code can use to find and reclaim resources, stored in a structure (another hash map, probably) in Java. When we are absolutely sure the Java object cannot be brought back to life (it is phantom reachable, i.e. a ghost) we can then safely reclaim the native resource. If your code does other 'naughty' things using sun.misc.Unsafe (for example), this trick might be of use as well. For a full example which uses this technique, check out this post.

One final thought about phantom references: it is technically possible to implement the same pattern as above using weak references. However, that is not the purpose of weak references, and such a pattern would be abusing them.
Phantom references make an absolute guarantee that an object really is dead, so resources can be reclaimed. For just one example, it is theoretically possible for a weak reference to be enqueued and then the object be brought back to life by its finalizer, because the finalization queue is running slower than the weak reference queue. This sort of edge-case horror story cannot happen with phantom references.

There is one little problem, which is a weakness of the JVM design: the JNI global weak reference type has an undefined relationship with phantom references. Some people suggest that you can use a global weak reference to get to an object even when it is enqueued as a phantom reference. This is a quirk of one particular implementation of the JVM and should never be relied upon.

Reference: Weak, Weaker, Weakest, Harnessing The Garbage Collector With Specialist References from our JCG partner Alexander Turner at the Java Advent Calendar blog.
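The "get() always returns null" behaviour described above can be observed even while the referent is still strongly reachable, which makes for a deterministic little demonstration of how phantom references differ from weak ones:

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public class PhantomVsWeak {
    public static void main(String[] args) {
        Object referent = new Object();
        ReferenceQueue<Object> junkQ = new ReferenceQueue<>();

        WeakReference<Object> weak = new WeakReference<>(referent, junkQ);
        PhantomReference<Object> phantom = new PhantomReference<>(referent, junkQ);

        // The referent is still strongly reachable here, so the weak
        // reference can still hand it back...
        System.out.println("weak sees referent: " + (weak.get() == referent));
        // ...but a phantom reference never does, by design.
        System.out.println("phantom get() is null: " + (phantom.get() == null));
    }
}
```

This is why a phantom-based cleanup thread must look up the handle to the native resource in a side structure keyed by the reference itself; the reference can never lead back to the object.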

Agile Software Developer Terminology for New Programmers

This is a post for new developers, young, inexperienced or old and retraining into information technology. Recently, I had a discussion with many engineers at one of those many London user group nights about how there is so much new stuff that we have to explain to people new to programming. One person had to coach a graduate developer on writing unit tests. Another person had to explain the reasons why dependency injection is better than dependency lookup. I can recall similarly stuff, being able to gently and concise explain why we should have unit tests in the code, and why we need them.       Here is my current matrix of terms:Term DescriptionYAGNI You Are Not Going To Need It – The issue hare is that far more code is written than necessary to solve or deliver application functionality.Classic symptom: Added unused finder methods to session beans in Java EEDRY Don’t Repeat Yourself – writing code that has lot of duplication across methods, classes, packages and package object.Classic symptoms: Copy & Paste coding in unit tests and repeated metadata in entity and the front endKISS Keep It Simple Silly [or Stupid] – a mantra to describe writing only code to solve the function problem instead of writing a less complicated codeAlso see Occam’s Razor. Class symptom: Too many abstract layers in a software applicationWET Write Every Time – the antithesis to DRY, where code is deliberately written that repeats lots and lots of time in different classes, packages, and functions.Symptoms: Deciding to do things your way and instead of collaborating with the other developers and finding some common ground. 
Classic antidote: DRY, Unit Tests and REFACTORINGWETTT Write Everything Ten Thousand Times – the hyperbole colloquial version of WETR3 Rules of Three – This is not the proprietary operating system of the same name, or the classic 1980?s arcade game, or either the description of a maxed out pimp-my-ride Volkswagen Golf; but the idea that when ever you have three duplicated parts of code in a method, function or classes then it is time to refactor the duplication in to single method.Related to DRY and WET Class Symptom: Ignoring the code repeats because of time pressures, or the SCRUM master says no, don’t do it in this sprint.DBC Design By Contract – the idea of building a service from a contract first. In Java you write the interface as simply as possible and then secondly worry about the implementation class.Interfaces are easier to refactor around and because you can plug different implementations into the interface you get higher cohesion and lower coupling.Classic Example: JDBC specification since version 1.0. There are tons of implementations for different relational databases including MySQL, Postgres, H2, Derby etc. Every Java programmer knows how to code against JDBC because that they don’t have to fight with an different implementation, which vary, because the DBC implied it will do the right thing most of the time. Also related to standardisation of application programming interfaces.BDUF or BUDF Big Up-Front Design – a problem with many large corporate institutions that sometimes require a 100 – 1000 page document full of business requirement before any software construction gets the green lightSome poor architect or business analyst will have spent weeks investigating and chatting to the business about the requirement, only for the development team to say the document is practically worthless. Antidote: Get the technical lead and some key developers talking with the business and the analysts and the customer. Ideation sessions everybody! 
Symptom: Waterfall methodology and aspects of the investment banking IT culture Antidote: Unknown, lots of people how tried to bring “Agile” with both a big A and small A to many of these institutions with some success and failurePink Book Describes the book with a Pink Cover called Extreme Programming Installed by Ron Hendries et AlI do not recommend this for new starters unless for reference, since the Pink Book is now rather old, 2000, there are other more recent Agile development books and courses that help new Java developers.YOLO You Only Load It Once [If Ever] – this is related to fact that you want to have a single source of truth in an application system. This is problem of mis-architecture, where someone has not thought the functional requirement through enoughClass Example: Shopping Cart Service EJB – you only ever want one implementation of the pay-point in an application, albeit you will have many pay-point providers (credit and debit card third parties and PayPal)The If-Ever part is YOLO and YAGNI added together. You are not sure of the another part of the system loads the data, so you decide to keep the component. (You probably want to put a logging client on the YOLOIF thing so that you can effectively decide to chuck the component if nobody has used the function in 12 or 18 months time.) Digression: If you have find data that is constantly being uploaded to serve a web request then perhaps it really needs caching and not YOLO.Spike A quick exploration of coding in Sprint in SCRUM methodology. In Spike you are probably are looking an new API, like a cloud service or user interface API like JavaFX or similar and basically you explore if the function can implemented relatively well in the new API. In short, you are trying to build some confidence in a new area before committing yourself and other resources to it.SSpikes are usually contained and protected from the flow of the critical path and restricted to a time length. See SMART goals. 
Classic Examples: Adopting a build system – moving from Apache Ant to Maven; moving from Subversion to Git; Adopting a new open source libraryTDD Test Driven Development – often conflicted as not being fully explained as a change of discipline and mind.”You are only ever doing one of these four things: writing unit tests, writing production code, refactoring unit tests, and refactoring production code; and never doing more than one at these previous at the same time.”TFD Test First Development – builds on the ideas of TDD and then extends the discipline to writing unit test code before any production code. Once you have a brand new unit test written completely, then you making sure that new unit test(s) actually fail in order to switch over to writing production code ensures the the new test passes. After you have done that you refactor the tests. Run all tests for all the green bars. Refactor the production code and runs the tests for all the green bar.Repeat: Go back to the start; write a new unit to that will check validate operation of the next function for the application. Repeat with the same formula as above.Velocity A very basic measurement on the Return-on-Investment for SCRUM software development and it has nothing at all to do with financial budgets and reporting.Velocity is the number of story points completed per team per iteration. To the SCRUM experienced: Velocity is equal to the aggregated units of work completed over aggregated time intervals, which implies you measure each progress of tasks in two or more sprints.Story Point For each user story in the sprint or task, predict how hard it is to implement by using unit of reference. Story points are usually written in Fibonacci numbers: 0, 1, 1, 3, 5, 8, 13, 21, 34, 55, 89, 144Every agile team in the world has a definition of a user story unit point. 
Teams decide on the backlog items in order to come up with predictions, and these joint predictions help every member of the team decide what should be applied to the next sprint.

DTSTTCPW Do The Simplest Thing That Could Possibly Work – related to Spike and KISS in many ways. If you are pressed for time, and some trading systems developers in investment banks are, then this is your working life. DTSTTCPW certainly invites team collaboration and effective sound-boarding from other developers and members of the team; otherwise you are asking for trouble.

VoC Voice of the Customer – this is a term from the SCRUM methodology, but I am about 20% unsure about this one; I believe 80% of the time it means a proxy, a placeholder, for the real customer: the person who understands the business requirement and the will of the customer, because the true user is unavailable for some reason due to authority, culture, organisation or even geo-location. Some people have amusingly called this abbreviation the Voice of Reason, especially when they do not enjoy working with the customer directly.

Unit Test In Java programming, a unit test is a JUnit framework test class or TestNG framework test class that specifically verifies and validates a single function of work in an application. A unit test requires a target, which can be a Java class, a service bean, a managed bean or something else that implements the said functionality. Unit tests are often seen as low-level, fast and efficient tests.

Functional Test A functional test is a larger test, which can also be a unit test, designed to test packages of classes or a sub-part of the overall application infrastructure. A functional test validates whether the application meets one of the customer's external requirements on performance, results and efficiency. Symptom: a functional test is not necessarily a unit test, and not all functional tests are acceptance tests.

Acceptance Test An acceptance test is the same as a functional test in name only. 
Acceptance tests are those where the customer wants to see the validation pass in order to sign off the implementation. Symptom: if the customer is dissatisfied with the application at demonstration time, then at least one of the acceptance tests is broken. Add one in the next sprint.

SOLID A set of five principles: Single Responsibility, Open-Closed, Liskov Substitution, Interface Segregation and Dependency Inversion.

Single Responsibility Principle An object class, a service bean, a web service, a function or a procedure should have only one single responsibility. Symptom: it is hard to write a unit test for a complex object, because it is doing WETTT.

Open Closed Principle Open for extension and closed for modification. It means you can subclass the object, but the object is encapsulated by not allowing an outside object to gainfully change the internals. Symptom: leakage in the object implementation, hard-coded dependencies, and not working with Java interfaces (or interface-like constructs, i.e. Scala traits and mix-ins).

Liskov Substitution Principle The idea of swap-ability, expressed as Design By Contract (DBC). I can swap in another object X, which is an implementation of T, if that object is a type of T and the overall application still works. This is the basis of mocking objects and mock implementation frameworks; testing in general; proxy remote objects; persistence-capable objects; application server and lifecycle monitoring situations; plug-and-play and restartable applications. I could go on, but I won't.

Interface Segregation Principle A service interface that does only a single specific functional thing is better than a service interface that does several different things. Symptom: failure to adhere to the KISS principle. 
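The swap-ability idea behind Liskov substitution can be shown in a few lines of Java. The PayPoint, CardPayPoint and MockPayPoint names below are hypothetical, chosen only to echo the shopping-cart example earlier; the point is that Checkout works unchanged whichever implementation of the contract is substituted in.

```java
// Hypothetical sketch of Liskov substitution: Checkout depends only on the
// PayPoint contract, so a mock can be swapped in for the real implementation.
interface PayPoint {
    boolean charge(int amountInCents);
}

class CardPayPoint implements PayPoint {
    // "Real" provider: rejects non-positive amounts.
    public boolean charge(int amountInCents) {
        return amountInCents > 0;
    }
}

class MockPayPoint implements PayPoint {
    // Test double: records nothing, always succeeds.
    public boolean charge(int amountInCents) {
        return true;
    }
}

class Checkout {
    private final PayPoint payPoint;

    Checkout(PayPoint payPoint) {
        this.payPoint = payPoint;
    }

    boolean pay(int amountInCents) {
        return payPoint.charge(amountInCents);
    }
}
```

Any type that honours the PayPoint contract can stand in for another, which is exactly what mocking frameworks rely on.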
In days gone by, the non-standard C++ string libraries were where everybody threw in the kitchen sink of methods for any operation one would want to write that manipulated a C/C++ string (char*).

Dependency Inversion Principle The idea of not hard-wiring a direct relationship to a dependency into an object. In the Java EE world, you would use a dependency injection container such as CDI to inject different managed beans into a service bean. Dependency inversion should also mean, in my humble opinion, giving up on managing the life-cycle of service components and beans. The lifecycle is managed by the application container, the cloud provider or whatever it is you are using. In another school of thought: every application is managed these days, whether by the operating system, a virtual machine, a web container or a mobile platform (iOS and Android). This is the way forward.

Design Patterns A classic book on Design Patterns by Erich Gamma et al. Ask your local technical leader to lend you his or her copy of the book; and if they don't have a copy then that really sucks. Tell them to give you a training budget and buy the book yourself!  Reference: Agile Software Developer Terminology for New Programmers from our JCG partner Peter Pilgrim at Peter Pilgrim's blog. ...

Using Apache Commons Functor functional interfaces with Java 8 lambdas

Apache Commons Functor (hereon [functor]) is an Apache Commons component that provides a functional programming API and several implemented patterns (visitor, generator, aggregator, etc.). Java 8 has several nice new features, including lambda expressions and functional interfaces. In Java 8, lambdas or lambda expressions are closures that can be evaluated and behave like anonymous methods. Functional interfaces are interfaces with only one method. These interfaces can be used in lambdas and save you a lot of time compared with writing anonymous classes or even implementing the interfaces. [functor] provides several functional interfaces (thanks to Matt Benson). It hasn't been released yet, but there are some new examples in the project site, in the trunk of the SVN. I will use one of these examples to show how [functor] functional interfaces can be used in conjunction with Java 8 lambdas. After the example with [functor] in Java 8, I will explain how I am running Java 8 in Eclipse (it's kind of a gambiarra, but works well).

[functor] example Here is a simple example with one Predicate.

List<Integer> numbers = Arrays.asList(1, 2, 3, 4);

UnaryPredicate<Integer> isEven = new UnaryPredicate<Integer>() {
    public boolean test(Integer obj) {
        return obj % 2 == 0;
    }
};

for (Integer number : numbers) {
    if (isEven.test(number)) {
        System.out.print(number + " ");
    }
}

It prints only the even numbers, those that pass the predicate test.

[functor] example with lambdas This modified version uses Java 8 lambdas:

List<Integer> numbers = Arrays.asList(1, 2, 3, 4);

UnaryPredicate<Integer> isEven = (Integer obj) -> { return obj % 2 == 0; };

for (Integer number : numbers) {
    if (isEven.test(number)) {
        System.out.print(number + " ");
    }
}

The behaviour is the same. UnaryPredicate is a functional interface. Its only method is boolean test(A obj);. And when used in a lambda expression you just have to provide the right number of arguments and implement the closure code. 
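For comparison, the same filtering can be written against the JDK's own java.util.function.Predicate, which ships with Java 8 itself. This is a sketch of my own, not from the article (which predates that API being final); the EvenFilter class name is invented.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

public class EvenFilter {

    // Builds the same "print only even numbers" result, but with the JDK's
    // built-in functional interface instead of [functor]'s UnaryPredicate.
    public static String evens(List<Integer> numbers) {
        Predicate<Integer> isEven = n -> n % 2 == 0;
        StringBuilder sb = new StringBuilder();
        for (Integer number : numbers) {
            if (isEven.test(number)) {
                sb.append(number).append(' ');
            }
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(evens(Arrays.asList(1, 2, 3, 4)));
    }
}
```

The lambda body is identical; only the single-method interface it targets differs, which is the whole idea of functional interfaces.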
The difference between the two code snippets is the way that the UnaryPredicate for even numbers is created. Below you can see the two ways of creating this predicate, with and without Java 8 lambdas.

// pre-Java-8
UnaryPredicate<Integer> isEven = new UnaryPredicate<Integer>() {
    public boolean test(Integer obj) {
        return obj % 2 == 0;
    }
};

// with a Java 8 lambda
UnaryPredicate<Integer> isEven = (Integer obj) -> { return obj % 2 == 0; };

Java 8 in Eclipse Eclipse doesn't support Java 8 yet, so you have to create a new builder in order to have Eclipse compile your project's sources. For a complete step-by-step guide on how to set up Eclipse Juno and Java 8, please refer to http://tuhrig.de/?p=921. I will summarize the steps here, and will show how to include the [functor] jar in the project classpath.

Download the JDK from http://jdk8.java.net/lambda and install it (I installed it in /opt/java/jdk1.8.0). Create a new Java project in Eclipse (try-lambdas in my case). Disable the default Java Builder for your Eclipse project, as it doesn't work with Java 8. Create a new builder. When prompted with a screen that lets you browse for a program, select the Java 8 javac (for me it was /opt/java/jdk1.8.0/bin/javac). Add the arguments below to your builder:

-classpath %CLASSPATH%;commons-functor-1.0-SNAPSHOT-jar-with-dependencies.jar;. -source 8 -d ${workspace_loc:/lambdas}/bin ${workspace_loc:/Java8}/src/lambdas/*.java

You have to include [functor]'s jar, as well as its dependencies. For the sake of convenience, I used the maven-assembly-plugin to generate a jar with dependencies for [functor]. The code and the jar are available from this GitHub repository. Or, if you prefer to generate your own [functor] jar with dependencies, check out the code from the repository as below.

svn checkout https://svn.apache.org/repos/asf/commons/sandbox/functor/trunk/ commons-functor

And finally include the following in the [functor] pom.xml before running mvn clean assembly:assembly. 
<plugin>
  <artifactId>maven-assembly-plugin</artifactId>
  <version>2.3</version>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
  </configuration>
</plugin>

  Reference: Using Apache Commons Functor functional interfaces with Java 8 lambdas from our JCG partner Bruno Kinoshita at the Kinoshita's blog. ...

Ensuring the order of execution for tasks

Sometimes it is necessary to impose a certain order on the tasks in a threadpool. Issue 206 of the JavaSpecialists newsletter presents one such case: we have multiple connections from which we read using NIO. We need to ensure that events from a given connection are executed in order, but events from different connections can be freely mixed. I would like to present a similar but slightly different situation: we have N clients. We would like to execute events from a given client in the order they were submitted, but events from different clients can be mixed freely. Also, from time to time, there are 'rollup' tasks which involve more than one client. Such tasks should block the tasks for all involved clients (but not more!). Let's see a diagram of the situation: As you can see, tasks from client A and client B are happily processed in parallel until a 'rollup' task comes along. At that point no more tasks of type A or B can be processed, but an unrelated task C can be executed (provided that there are enough threads). The skeleton of such an executor is available in my repository. The centerpiece is the following interface:

public interface OrderedTask extends Runnable {
    boolean isCompatible(OrderedTask that);
}

Using this interface the threadpool decides whether two tasks may be run in parallel or not (A and B can be run in parallel if A.isCompatible(B) && B.isCompatible(A)). These methods should be implemented in a fast, non-locking and time-invariant manner. The algorithm behind this threadpool is as follows: If the task to be added doesn't conflict with any existing tasks, add it to the thread with the fewest elements. 
If it conflicts with elements from exactly one thread, schedule it to be executed on that thread (and implicitly after the conflicting elements, which ensures that the order of submission is maintained). If it conflicts with multiple threads, add tasks (shown in red below) on all but the first of them, on which a task on the first thread will wait; after they complete, it will execute the original task.

More information about the implementation: The code is only a proof-of-concept; some more work would be needed to make it production quality (it needs code for exception handling in tasks, proper shutdown, etc.). For maximum performance it uses lock-free* structures where available: each worker thread has an associated ConcurrentLinkedQueue. To achieve the sleep-until-work-is-available semantics, an additional Semaphore is used**. To be able to compare a new OrderedTask with currently executing ones, a copy of their references is kept. This list of copies is updated whenever new elements are enqueued (this has the potential for memory leaks, and if tasks are infrequent enough, alternatives – like an additional timer for weak references – should be investigated). Compared to the solution in the JavaSpecialists newsletter, this is more similar to a fixed thread pool executor, while the solution from the newsletter is similar to a cached thread pool executor. This implementation is ideal if (a) the tasks are (mostly) short and (mostly) uniform and (b) there are few (one or two) threads submitting new tasks, since multiple submissions are mutually exclusive (but submission and execution aren't). If, immediately after a 'rollup' is submitted (and before it can be executed), tasks of the same kind are submitted, they will unnecessarily be forced onto one thread. We could add code to rearrange tasks after the rollup task finishes if this becomes an issue.

Have fun with the source code! (Maybe some day I'll find the time to remove all the rough edges.) 
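To make the contract concrete, here is a minimal sketch of what OrderedTask implementations might look like for the client/rollup scenario described at the start. ClientTask and RollupTask are hypothetical names of my own, not taken from the linked repository.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical OrderedTask implementations for the N-client scenario.
interface OrderedTask extends Runnable {
    boolean isCompatible(OrderedTask that);
}

class ClientTask implements OrderedTask {
    final String clientId;

    ClientTask(String clientId) { this.clientId = clientId; }

    public void run() { /* handle one client event */ }

    // Two client tasks conflict only when they belong to the same client;
    // rollups decide for themselves (delegation keeps the check symmetric).
    public boolean isCompatible(OrderedTask that) {
        if (that instanceof ClientTask)
            return !clientId.equals(((ClientTask) that).clientId);
        return that.isCompatible(this);
    }
}

class RollupTask implements OrderedTask {
    final Set<String> clientIds;

    RollupTask(String... clientIds) {
        this.clientIds = new HashSet<>(Arrays.asList(clientIds));
    }

    public void run() { /* handle all involved clients */ }

    // A rollup conflicts with any task touching one of its clients.
    public boolean isCompatible(OrderedTask that) {
        if (that instanceof ClientTask)
            return !clientIds.contains(((ClientTask) that).clientId);
        if (that instanceof RollupTask) {
            for (String id : ((RollupTask) that).clientIds)
                if (clientIds.contains(id)) return false;
        }
        return true;
    }
}
```

With these definitions, A.isCompatible(B) && B.isCompatible(A) holds for tasks of different clients, while a rollup mutually conflicts with every task for the clients it spans, and both checks are fast, non-locking and time-invariant as required.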
* somewhat of a misnomer, since there are still locks, only at a lower – CPU not OS – level, but this is the accepted terminology ** benchmarking indicated this to be the most performant solution; it was inspired by the implementation of ThreadPoolExecutor.   Reference: Ensuring the order of execution for tasks from our JCG partner Attila-Mihaly Balazs at the Java Advent Calendar blog. ...

Can synchronization be optimised away?

Overview There is a common misconception that, because the JIT is smart and synchronization can be eliminated for an object which is only local to a method, there is no performance impact.

A test comparing StringBuffer and StringBuilder These two classes do basically the same thing, except that one is synchronized (StringBuffer) and the other is not. Each is also a class which is often used within one method to build a String. The following test attempts to determine how much difference using one or the other can make.

static String dontOptimiseAway = null;
static String[] words = new String[100000];

public static void main(String... args) {
    for (int i = 0; i < words.length; i++)
        words[i] = Integer.toString(i);

    for (int i = 0; i < 10; i++) {
        dontOptimiseAway = testStringBuffer();
        dontOptimiseAway = testStringBuilder();
    }
}

private static String testStringBuffer() {
    long start = System.nanoTime();
    StringBuffer sb = new StringBuffer();
    for (String word : words) {
        sb.append(word).append(',');
    }
    String s = sb.substring(0, sb.length() - 1);
    long time = System.nanoTime() - start;
    System.out.printf("StringBuffer: took %d ns per word%n", time / words.length);
    return s;
}

private static String testStringBuilder() {
    long start = System.nanoTime();
    StringBuilder sb = new StringBuilder();
    for (String word : words) {
        sb.append(word).append(',');
    }
    String s = sb.substring(0, sb.length() - 1);
    long time = System.nanoTime() - start;
    System.out.printf("StringBuilder: took %d ns per word%n", time / words.length);
    return s;
}

At the end this prints, with -XX:+DoEscapeAnalysis using Java 7 update 10:

StringBuffer: took 69 ns per word
StringBuilder: took 32 ns per word
StringBuffer: took 88 ns per word
StringBuilder: took 26 ns per word
StringBuffer: took 62 ns per word
StringBuilder: took 25 ns per word

Testing with one million words doesn't change the results significantly. 
Conclusion While the cost of using synchronization is small, it is measurable; and if you can use StringBuilder, it is preferred, as stated in the Javadocs for that class. In theory synchronization can be optimised away, but this is yet to be the case even in simple situations.  Reference: Can synchronization be optimised away? from our JCG partner Peter Lawrey at the Vanilla Java blog. ...

JAXB – Representing Null and Empty Collections

Demo Code The following demo code will be used for all the different versions of the Java model. It simply sets one collection to null, the second to an empty list, and the third to a populated list.

package blog.xmlelementwrapper;

import java.util.ArrayList;
import javax.xml.bind.*;

public class Demo {

    public static void main(String[] args) throws Exception {
        JAXBContext jc = JAXBContext.newInstance(Root.class);

        Root root = new Root();
        root.nullCollection = null;
        root.emptyCollection = new ArrayList<String>();
        root.populatedCollection = new ArrayList<String>();
        root.populatedCollection.add("foo");
        root.populatedCollection.add("bar");

        Marshaller marshaller = jc.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
        marshaller.marshal(root, System.out);
    }

}

Mapping #1 – Default JAXB models do not require any annotations (see JAXB – No Annotations Required). First we will look at the default behaviour for collection properties.

package blog.xmlelementwrapper;

import java.util.List;
import javax.xml.bind.annotation.*;

@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class Root {

    List<String> nullCollection;

    List<String> emptyCollection;

    List<String> populatedCollection;

}

Examining the output we see that the output corresponding to the nullCollection and emptyCollection fields is the same. This means that with the default mapping we can't round-trip the instance. For the unmarshal use case, the value of the nullCollection and emptyCollection fields will be whatever the class initialized them to (null in this case).

<?xml version='1.0' encoding='UTF-8'?>
<root>
    <populatedCollection>foo</populatedCollection>
    <populatedCollection>bar</populatedCollection>
</root>

Mapping #2 – @XmlElementWrapper The @XmlElementWrapper annotation is used to add a grouping element around the contents of a collection. 
In addition to changing the appearance of the XML representation, it also allows us to distinguish between null and empty collections.

package blog.xmlelementwrapper;

import java.util.List;
import javax.xml.bind.annotation.*;

@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class Root {

    @XmlElementWrapper
    List<String> nullCollection;

    @XmlElementWrapper
    List<String> emptyCollection;

    @XmlElementWrapper
    List<String> populatedCollection;

}

The representation for the null collection remains the same: it is absent from the XML document. For an empty collection we see that only the grouping element is marshalled out. Since the representations for null and empty are different, we can round-trip this use case.

<?xml version='1.0' encoding='UTF-8'?>
<root>
    <emptyCollection/>
    <populatedCollection>
        <populatedCollection>foo</populatedCollection>
        <populatedCollection>bar</populatedCollection>
    </populatedCollection>
</root>

Mapping #3 – @XmlElementWrapper(nillable=true) The nillable property on the @XmlElementWrapper annotation can be used to change the XML representation of null collections.

package blog.xmlelementwrapper;

import java.util.List;
import javax.xml.bind.annotation.*;

@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class Root {

    @XmlElementWrapper(nillable=true)
    List<String> nullCollection;

    @XmlElementWrapper(nillable=true)
    List<String> emptyCollection;

    @XmlElementWrapper(nillable=true)
    List<String> populatedCollection;

}

Now the grouping element is present for all three fields. The xsi:nil attribute is used to indicate that the nullCollection field was null. Like the previous mapping, this one can be round-tripped. 
<?xml version='1.0' encoding='UTF-8'?>
<root>
    <nullCollection xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xsi:nil='true'/>
    <emptyCollection/>
    <populatedCollection>
        <populatedCollection>foo</populatedCollection>
        <populatedCollection>bar</populatedCollection>
    </populatedCollection>
</root>

  Reference: JAXB – Representing Null and Empty Collections from our JCG partner Blaise Doughan at the Java XML & JSON Binding blog. ...

A simple Groovy issue tracker using file system

It would be chaos not to track bugs and feature requests when you are developing software. Having a simple issue tracker makes managing the project much more successful. Now I like simple stuff, and I think that for a small project, having this tracker right inside the source control repository (especially with a DVCS like Mercurial/Git) is not only doable, but very convenient as well. You don't have to go crazy with all the fancy features; just enough to track issues is fine. I would like to propose this layout for you. Let's say you have a project that looks like this:

project
+- src/main/java/Hello.java
+- issues/issue-001.md
+- pom.xml

All I need is a simple directory, issues, to get going. Now I have a place to track my issues! The first issue, issue-001.md, should be what your project is about. For example:

/id=issue-001
/createdon=2012-12-16 18:07:08
/type=bug
/status=new
/resolution=
/from=Zemian
/to=
/found=
/fixed=
/subject=A simple Java Hello program

# Updated on 2012-12-16 18:07:08

We want to create a Maven based Hello world program. It should print 'Hello World.'

I chose .md as the file extension, intending to write comments in Markdown format. Since it's a text file, you do what you want. To be more structured, I have added some header metadata for issue tracking. Let's define some here. I would propose these headers and this formatting:

/id=issue-<NUM>
/createdon=<TIMESTAMP>
/type=feature|bug|question
/status=new|reviewing|working|onhold|testing|resolved
/resolution=fixed|rejected|duplicated
/from=<REPORTER_FROM_NAME>
/to=<ASSIGNEE_TO_NAME>
/found=<VERSION_FOUND>
/fixed=<VERSION_FIXED>

That should cover most bug and feature development issues. It's not cool to write software without a history of changes, including these issues. So let's use a source control system. I highly recommend Mercurial hg. You can create and initialize a new repository like this: 
bash> cd project
bash> hg init
bash> hg add
bash> hg commit -m 'My hello world project'

Now your project is created and we have a place to track your issues. It's simple text files, so use your favorite text editor and edit away. However, creating a new issue with those header tags is boring. It would be nice to have a script that manages it a little. I have a Groovy script, issue.groovy (see the end of this article), that lets you run reports and create new issues. You can add this script into your project/issues directory and instantly create new issues and query reports! Here is an example output on my PC:

bash> cd project
bash> groovy scripts/issue.groovy

Searching for issues with /status!=resolved
Issue: /id=issue-001 /status=new /subject=A simple Java Hello program
1 issues found.

bash> groovy scripts/issue.groovy --new /type=feature /subject='Add a unit test.'

project/issues/issue-002.md created.
/id=issue-002
/createdon=2012-12-16 19:10:00
/type=feature
/status=new
/resolution=
/from=Zemian
/to=
/found=
/fixed=
/subject=Add a unit test.

bash> groovy scripts/issue.groovy

Searching for issues with /status!=resolved
Issue: /id=issue-001 /status=new /subject=A simple Java Hello program
Issue: /id=issue-002 /status=new /subject=Add a unit test.
2 issues found.

bash> groovy scripts/issue.groovy --details /id=002

Searching for issues with /id=002
Issue: /id=issue-002
    /createdon=2012-12-16 19:10:00 /found= /from=Zemian /resolution=
    /status=new /type=feature
    /subject=Add a unit test.
1 issues found.

bash> groovy scripts/issue.groovy --update /id=001 /status=resolved /resolution=fixed 'I fixed this thang.'

Updating issue /id=issue-001
Updating /status=resolved
Updating /resolution=fixed

Update issue-001 completed.

The script gives you a quick and consistent way to create/update/search issues. But they are just plain text files! You can just as well fire up your favorite text editor and change anything you want. 
Save it and even commit it into your source repository. Nothing will be lost. Here is my issue.groovy script:

#!/usr/bin/env groovy
//
// A groovy script to manage issue files and their metadata/headers.
//
// Created by Zemian Deng <saltnlight5@gmail.com> 12/2012 v1.0.1
//
// Usage:
//   bash> groovy [java_opts] issue.groovy [option] [/header_name=value...] [arguments]
//
// Examples:
//   # Report all issues that match headers (we support RegEx!)
//   bash> groovy issue /resolution=fixed
//   bash> groovy issue /status!=onhold
//   bash> groovy issue '/subject=Improve UI|service'
//   bash> groovy issue --details /status=resolved
//
//   # Create a new bug issue file.
//   bash> groovy issue --new /type=bug /to=zemian /found=v1.0.1 /subject='I found some problem.' 'More details here.'
//
//   # Update an issue
//   bash> groovy issue --update /id=issue-001 /status=resolved /resolution=fixed 'I fixed this issue with Z algorithm.'
//
// Be careful with the following notes:
//   * Ensure your filename issue id matches the /id or your search may not work!
//   * You need to quote any header whose value contains spaces, such as 'key=space value'
//
class issue {
    def ISSUES_HEADERS = ['/id', '/createdon', '/type', '/status', '/resolution', '/from', '/to', '/found', '/fixed', '/subject']
    def ISSUES_HEADERS_VALS = [
        '/type' : ['feature', 'bug', 'question'] as Set,
        '/status' : ['new', 'reviewing', 'working', 'onhold', 'testing', 'resolved'] as Set,
        '/resolution' : ['fixed', 'rejected', 'duplicated'] as Set
    ]
    def issuesDir = new File(System.getProperty("issuesDir", getDefaultIssuesDir()))
    def issuePrefix = System.getProperty("issuePrefix", 'issue')
    def arguments = [] // script arguments after parsing
    def options = [:]  // script options after parsing
    def headers = [:]  // user input issue headers

    static void main(String[] args) {
        new issue().run(args)
    }

    // Method declarations
    def run(String[] args) {
        // Parse and save options, arguments and headers vars
        args.each { arg ->
            def append = true
            if (arg =~ /^--{0,1}\w+/) {
                options[arg] = true
                append = false
            } else {
                def pos = arg.indexOf('=')
                if (pos >= 1 && arg.length() > pos) {
                    def name = arg.substring(0, pos)
                    def value = arg.substring(pos + 1)
                    headers.put(name, value)
                    append = false
                }
            }
            if (append) {
                arguments << arg
            }
        }

        // support short option flag
        if (options['-d']) options['--details'] = true

        // Run script depending on options passed
        if (options['--help'] || options['-h']) {
            printHelp()
        } else if (options['--new'] || options['-n']) {
            createIssue()
        } else if (options['--update'] || options['-u']) {
            updateIssue()
        } else {
            reportIssues()
        }
    }

    def printHelp() {
        new File(getClass().protectionDomain.codeSource.location.path).withReader { reader ->
            def done = false
            def line = null
            while (!done && (line = reader.readLine()) != null) {
                line = line.trim()
                if (line.startsWith("#") || line.startsWith("//")) println(line)
                else done = true
            }
        }
    }

    def validateHeaders() {
        def headersSet = ISSUES_HEADERS.toSet()
        headers.each { name, value ->
            if (!headersSet.contains(name))
                throw new Exception("ERROR: Unknown header name $name.")
            if (ISSUES_HEADERS_VALS[name] != null && !(ISSUES_HEADERS_VALS[name].contains(value)))
                throw new Exception("ERROR: Unknown header $name=$value. Allowed: ${ISSUES_HEADERS_VALS[name].join(', ')}")
        }
    }

    def getDefaultIssuesDir() {
        return new File(getClass().protectionDomain.codeSource.location.path).parentFile.path
    }

    def getIssueIds() {
        def issueIds = []
        def files = issuesDir.listFiles()
        if (files == null) return issueIds
        files.each { f ->
            def m = f.name =~ /^(\w+-\d+)\.md$/
            if (m) issueIds << m[0][1]
        }
        return issueIds
    }

    def getIssueFile(String issueid) {
        return new File(issuesDir, "${issueid}.md")
    }

    def reportIssues() {
        def userHeaders = new HashMap(headers)
        if (userHeaders.size() == 0) userHeaders['/status!'] = 'resolved'
        def headersLine = userHeaders.sort { a, b -> a.key <=> b.key }.collect { k, v -> "$k=$v" }.join(', ')
        println "Searching for issues with $headersLine"
        def count = 0
        getIssueIds().each { issueid ->
            def file = getIssueFile(issueid)
            def issueHeaders = [:]
            file.withReader { reader ->
                def done = false
                def line = null
                while (!done && (line = reader.readLine()) != null) {
                    if (line =~ /^\/\w+=.*$/) {
                        def words = line.split('=')
                        if (words.length >= 2) {
                            issueHeaders.put(words[0], words[1..-1].join('='))
                        }
                    } else if (issueHeaders.size() > 0) {
                        done = true
                    }
                }
            }
            def match = userHeaders.findAll { k, v ->
                if (k.endsWith("!"))
                    (issueHeaders[k.substring(0, k.length() - 1)] =~ /${v}/) ? false : true
                else
                    (issueHeaders[k] =~ /${v}/) ? true : false
            }
            if (match.size() == userHeaders.size()) {
                def line = "Issue: /id=${issueHeaders['/id']}"
                if (options['--details']) {
                    def col = 4
                    def issueHeadersKeys = issueHeaders.keySet().sort() - ['/id', '/subject']
                    issueHeadersKeys.collate(col).each { set ->
                        line += "\n    " + set.collect { k -> "$k=${issueHeaders[k]}" }.join(" ")
                    }
                    line += "\n    /subject=${issueHeaders['/subject']}"
                } else {
                    line += " /status=${issueHeaders['/status']}" +
                            " /subject=${issueHeaders['/subject']}"
                }
                println line
                count += 1
            }
        }
        println "$count issues found."
    }

    def createIssue() {
        validateHeaders()
        if (headers['/status'] == 'resolved' && headers['/resolution'] == null)
            throw new Exception("You must provide /resolution when resolving an issue.")

        def ids = getIssueIds().collect { issueid -> issueid.split('-')[1].toInteger() }
        def nextid = ids.size() > 0 ? ids.max() + 1 : 1
        def issueid = String.format("${issuePrefix}-%03d", nextid)
        def file = getIssueFile(issueid)
        def createdon = new Date().format('yyyy-MM-dd HH:mm:ss')
        def newHeaders = [
            '/id' : issueid,
            '/createdon' : createdon,
            '/type' : 'bug',
            '/status' : 'new',
            '/resolution' : '',
            '/from' : System.properties['user.name'],
            '/to' : '',
            '/found' : '',
            '/fixed' : '',
            '/subject' : 'A bug report'
        ]
        // Override newHeaders from user inputs
        headers.each { k, v -> newHeaders.put(k, v) }

        // Output to file
        file.withWriter { writer ->
            ISSUES_HEADERS.each { k -> writer.println("$k=${newHeaders[k]}") }
            writer.println()
            writer.println("# Updated on ${createdon}")
            writer.println()
            arguments.each {
                writer.println(it)
                writer.println()
            }
            writer.println()
        }

        // Output issue headers to STDOUT
        println "$file created."
        ISSUES_HEADERS.each { k -> println("$k=${newHeaders[k]}") }
    }

    def updateIssue() {
        validateHeaders()
        if (headers['/status'] == 'resolved' && headers['/resolution'] == null)
            throw new Exception("You must provide /resolution when resolving an issue.")

        def userHeaders = new HashMap(headers)
        userHeaders.remove('/createdon') // we should not update this field

        def issueid = userHeaders.remove('/id') // We will not re-update /id
        if (issueid == null)
            throw new Exception("Failed to update issue: missing /id value.")
        if (!issueid.startsWith(issuePrefix)) issueid = "${issuePrefix}-${issueid}"
        println("Updating issue /id=${issueid}")

        def file = getIssueFile(issueid)
        def newFile = new File(file.parentFile, "${file.name}.update.tmp")
        def hasUpdate = false
        def issueHeaders = [:]

        if (!file.exists())
            throw new Exception("Failed to update issue: file not found for /id=${issueid}")

        // Read and update issue headers
        file.withReader { reader ->
            // Read all issue headers first
            def done = false
            def line = null
            while (!done && (line = reader.readLine()) != null) {
                if (line =~ /^\/\w+=.*$/) {
                    def words = line.split('=')
                    if (words.length >= 2) {
                        issueHeaders.put(words[0], words[1..-1].join('='))
                    }
                } else if (issueHeaders.size() > 0) {
                    done = true
                }
            }

            // Find issue headers differences
            userHeaders.each { k, v ->
                if (issueHeaders[k] != v) {
                    println("Updating $k=$v")
                    issueHeaders[k] = v
                    if (!hasUpdate) hasUpdate = true
                }
            }

            // Update issue file
            if (hasUpdate) {
                newFile.withWriter { writer ->
                    ISSUES_HEADERS.each { k -> writer.println("${k}=${issueHeaders[k] ?: ''}") }
                    writer.println()

                    // Write/copy the rest of the file.
                    done = false
                    while (!done && (line = reader.readLine()) != null) {
                        writer.println(line)
                    }
                    writer.println()
                }
            }
        } // reader

        if (hasUpdate) {
            // Rename the new file back to orig
            file.delete()
            newFile.renameTo(file)
        }

        // Append any arguments as user comments
        if (arguments.size() > 0) {
            file.withWriterAppend { writer ->
                writer.println()
                writer.println("# Updated on ${new Date().format('yyyy-MM-dd HH:mm:ss')}")
                writer.println()
                arguments.each { text ->
                    writer.println(text)
                    writer.println()
                }
            }
        }

        println("Update $issueid completed.")
    }
}

  Reference: A simple Groovy issue tracker using file system from our JCG partner Zemian Deng at the A Programmer's Journal blog. ...
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.