Iterator Pattern and Java

Hello all, in this post we’ll be looking at the Iterator Pattern, a design pattern that I know many of you have already used, but maybe without realizing it was a pattern or knowing its great value. According to the book Head First Design Patterns: “The Iterator Pattern provides a way to access the elements of an aggregate object sequentially without exposing its underlying representation.”

Whaaaaat? Well, it says that no matter what data structure (arrays, lists, hashtables, etc.) you are using, you can always traverse it in the same way if you implement this pattern. It gives you a uniform way of accessing the elements of your data structures (aggregates) without having to know what kind of data structure you are traversing… nice! Also, it places the responsibility for iteration on the Iterator object, not on your data structure, which simplifies the code in your data structure.

Let’s check the classic class diagram for the Iterator Pattern. The actual class diagram for the Iterator Pattern has a few changes, especially in the Iterator class (interface), where we now have different methods, as we’ll see in a minute. But first, let’s review each of the classes (or interfaces):

Aggregate: the base class (or interface) of our data structures. You can think of it as the java.util.Collection interface, which defines a lot of methods for the collection classes.

ConcreteAggregate: the concrete data structure that we’ll be iterating, for example java.util.ArrayList, java.util.Vector, etc.

Iterator: the base class (or interface) for iterators. You can find one in Java at java.util.Iterator. Notice that the Java version has different methods, which we’ll discuss later in this post. Here you define the methods you need in order to traverse the data structures.

ConcreteIterator: as you want to traverse different data structures, you need different iterators.
So a ConcreteIterator is an Iterator for the data structure you want to traverse.

Now, let’s take a look at the Java implementation of the Iterator Pattern. The following diagram was generated using Architexa’s Free Tool for Understanding Code, and it shows the relations between some classes of the Java Collections Framework, where we can see a structure similar to the classic class diagram. The diagram shows only one implementation of the pattern in Java; there are a lot more, but they always use the java.util.Iterator interface. This is the interface you should use in your implementations of the Iterator Pattern when coding in Java. Let’s compare both diagrams:

Classic diagram -> Java example
Aggregate -> java.util.AbstractList
ConcreteAggregate -> java.util.ArrayList, java.util.Vector
Iterator -> java.util.Iterator
ConcreteIterator -> a private internal class of java.util.AbstractList. You can’t see this class in the JavaDocs, but it is there in the source code: java.util.Itr

Notice that the methods of the Iterator object in the Java example are different from the methods in the classic class diagram:

There is no +First() method. If you need to go to the first element, you have to instantiate a new iterator.
The +IsDone() method has been renamed to +hasNext().
+Next() and +CurrentItem() have been merged into +next().
The +remove() method has been added.

So, if you ever have to work with different data structures and need a uniform way to traverse them and/or access their items, think about the Iterator Pattern:

//... in a class
/**
 * Traverse a list, hashtable, vector, etc., whatever implements
 * the Iterator Pattern.
 */
public void traverse(Iterator iter) {
    while (iter.hasNext()) {
        System.out.println(iter.next());
    }
}

Of course, you will always have to create the ConcreteIterator class for your data structure, but if you are using classes from the Java Collections Framework, it’s already done.
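The uniform traversal idea can be checked with a short, self-contained sketch; the traverse method mirrors the one above, while the class name and sample collections are illustrative:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Set;

public class IteratorDemo {

    // Same idea as the traverse method above: this code depends only on
    // java.util.Iterator, not on the concrete data structure behind it.
    static List<String> traverse(Iterator<String> iter) {
        List<String> seen = new ArrayList<>();
        while (iter.hasNext()) {
            seen.add(iter.next());
        }
        return seen;
    }

    public static void main(String[] args) {
        List<String> list = Arrays.asList("a", "b", "c");
        Set<String> set = new HashSet<>(list);
        // The exact same code walks an ArrayList and a HashSet.
        System.out.println(traverse(list.iterator()));
        System.out.println(traverse(set.iterator()));
    }
}
```

Swapping in a LinkedList, TreeSet or any other Iterable requires no change to the traversal code, which is exactly the decoupling the pattern promises.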
One last thing, remember the most important OO Principle of all: Always use the simplest solution that meets your needs, even if it doesn’t include a pattern. Resources: Freeman Eric and Freeman Elisabeth and Sierra Kathy and Bates Bert (2004). Head First Design Patterns. United States of America: O’Reilly Media, Inc.   Reference: Iterator Pattern and Java from our JCG partner Alexis Lopez at the Java and ME blog. ...

10 XML Interview questions and answers for Java Programmer

XML interview questions are very popular in various programming job interviews, including Java interviews for web developers. XML is a mature technology, often used as a standard for transporting data from one platform to another. XML interview questions cover various XML technologies like XSLT, which is used to transform XML files, XPath, XQuery, and fundamentals of XML, e.g. DTD or Schema. In this article we will see 10 frequently asked XML interview questions and answers from these topics. These questions are mostly asked in various Java interviews, but they are equally useful in other programming interviews, e.g. for C, C++, Scala or any other language. Since XML is not tied to any programming language and, like SQL, is one of the desired skills for a programmer, it makes sense to practice some XML questions before appearing at any technical job interview.

XML Interview Questions and Answers

Here is my list of some common and frequently asked interview questions on XML technologies. The questions on this list are not very tough but touch some important areas of XML technologies, e.g. DTD, XML Schema, XSLT transformations, XPath evaluation, XML binding, XML parsers, and fundamentals of XML such as namespaces, validation, attributes, elements, etc.

Question 1: What is XML?
Answer: XML stands for Extensible Markup Language, which means you can extend XML based upon your needs. You can define custom tags like <books> or <orders> in XML easily, as opposed to other markup languages like HTML, where you need to work with predefined tags, e.g. <p>, and you cannot use user-defined tags. The structure of XML can be standardized by making use of DTD and XML Schema. XML is mostly used to transfer data from one system to another, e.g. between client and server in enterprise applications.

Question 2: Difference between DTD and XML Schema?
Answer: There are a couple of differences between DTD and XML Schema, e.g.
DTD is not written in XML, while XML Schemas are XML documents themselves, which means existing XML tools like XML parsers can be used to work with them. Also, XML Schema was designed after DTD and offers more types to map the different kinds of data in XML documents. DTD, on the other hand, stands for Document Type Definition and is a legacy way to define the structure of XML documents.

Question 3: What is XPath?
Answer: XPath is an XML technology used to retrieve elements from XML documents. Since XML documents are structured, XPath expressions can be used to locate and retrieve elements, attributes or values from XML files. XPath is similar to SQL in terms of retrieving data from XML, but it has its own syntax and rules. See here to learn more about how to use XPath to retrieve data from XML documents.

Question 4: What is XSLT?
Answer: XSLT is another popular XML technology, used to transform one XML file into another XML, HTML or any other format. XSLT is a language in its own right, with its own syntax, functions and operators for transforming XML documents. Usually the transformation is done by an XSLT engine, which reads instructions written in XSLT syntax from XML style sheets or XSL files. XSLT also makes extensive use of recursion to perform transformations. One popular use of XSLT is displaying data present in XML files as HTML pages. XSLT is also very handy for transforming one XML file into another XML document.

Question 5: What is an element and an attribute in XML?
Answer: This is best explained by an example. Let’s see a simple XML snippet:

<Orders>
  <Order id="123">
    <Symbol>6758.T</Symbol>
    <Price>2300</Price>
  </Order>
</Orders>

In this sample XML, id is an attribute of the <Order> element. <Orders>, <Symbol> and <Price> are also elements, but they don’t have any attributes.

Question 6: What is the meaning of well-formed XML?
Answer: Another interesting XML interview question, which appears mostly in telephone interviews.
A well-formed XML document is one that is syntactically correct, e.g. it has a root element, all open tags are closed properly, attributes are in quotes, etc. If an XML document is not well formed, it may not be processed and parsed correctly by various XML parsers.

Question 7: What is an XML namespace? Why is it important?
Answer: XML namespaces are similar to packages in Java and provide a way to avoid conflicts between two XML tags with the same name but from different sources. An XML namespace is defined using the xmlns attribute at the top of the XML document, with the syntax xmlns:prefix='URI'. Later that prefix is used along with the actual tags in the XML document. Here is an example of using an XML namespace:

<root xmlns:inst="http://instruments.com/inst">
  <inst:phone>
    <inst:number>837363223</inst:number>
  </inst:phone>
</root>

Question 8: Difference between DOM and SAX parsers?
Answer: This is another very popular interview question, not just in the XML world but also in the Java world. The main difference between DOM and SAX parsers is the way they parse XML documents: DOM creates an in-memory tree representation of the XML document during parsing, while SAX is an event-driven parser. See Difference between DOM and SAX parser for a more detailed answer to this question.

Question 9: What is a CDATA section in XML?
Answer: I like this XML interview question for its simplicity and importance, yet many programmers don’t know much about it. CDATA stands for character data and carries a special instruction for XML parsers. An XML parser normally parses all the text in an XML document, e.g. in <name>This is the name of a person</name> the value of the <name> tag is parsed, because it may contain nested XML tags, e.g. <name><firstname>First Name</firstname></name>. A CDATA section is not parsed by the XML parser. It starts with “<![CDATA[” and finishes with “]]>”.

Question 10: What is XML data binding in Java?
Answer: XML binding in Java refers to creating Java classes and objects from XML documents and then modifying XML documents using the Java programming language. JAXB, the Java API for XML Binding, provides a convenient way to bind XML documents to Java objects. An alternative for XML binding is using an open source library, e.g. XMLBeans. One of the biggest advantages of XML binding in Java is that it leverages Java’s programming capabilities to create and modify XML documents.

This list of XML interview questions and answers was collected from programmers, but it is useful to anyone working with XML technologies. The importance of XML technologies like XPath, XSLT and XQuery is only going to increase because of the platform-independent nature of XML and its popularity for transmitting data across platforms. Though XML has disadvantages like verbosity and size, it is highly useful in web services and for transmitting data from one system to another where bandwidth and speed are of secondary concern.

Other interview questions articles from Javarevisited:
Top 30 UNIX and Linux command Interview questions – Answered
20 design pattern and software design Interview questions with answers
10 Oracle Interview questions with answers
15 Java multi-threading Interview questions with answers asked in Investment banks
Top 10 Java String Interview questions – Answered

Reference: 10 XML Interview questions and answers for Java Programmer from our JCG partner Javin Paul at the Javarevisited blog. ...
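As a companion to Question 3, here is a hedged sketch of XPath retrieval using the JDK’s built-in javax.xml.xpath API; the document reuses the Orders snippet from Question 5, and the helper method name is my own:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class XPathDemo {

    // Parses an XML string and evaluates an XPath expression against it,
    // returning the matched text content.
    static String evaluate(String xml, String expression) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        XPath xpath = XPathFactory.newInstance().newXPath();
        return xpath.evaluate(expression, doc);
    }

    public static void main(String[] args) throws Exception {
        String xml = "<Orders><Order id=\"123\"><Symbol>6758.T</Symbol>"
                + "<Price>2300</Price></Order></Orders>";
        // Select the Symbol of the Order whose id attribute is 123.
        System.out.println(evaluate(xml, "/Orders/Order[@id='123']/Symbol"));
        // prints 6758.T
    }
}
```

The predicate [@id='123'] shows the attribute-versus-element distinction from Question 5 in action: attributes are addressed with @, elements by name.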

When git ignores your… .gitignore?

I feel like I should start this post by saying that I absolutely love git. If you’ve never heard of it, it is a source control system like CVS or Subversion but, unlike those two, it is a distributed version control system. I’m not going to get into much detail about the history and capabilities of git, but if you’re curious about it you can go to http://git-scm.com/book, which is an amazing resource with everything from intro to advanced concepts.

I imagine that by now most professional software developers use some form of version control at their daily jobs, but you shouldn’t stop there. I use git for personal, one-man projects as well. While some people might think this is overkill, I completely disagree. There is nothing like the comfort of knowing that all the history of all your files is safe and ready to be brought back if you ever need it. We all make mistakes sometimes, after all. With git this is as easy as writing 3 simple commands:

mkdir myNewProject
cd myNewProject
git init

That’s it! Every file you create or modify in “myNewProject” will now be tracked by git.

A pretty useful feature that you get with every source control tool is the possibility of excluding certain files from the tool’s tracking. Generally speaking, you don’t want to get into your code repository any file that can be computed as a result of another file. In a typical Java Maven project this would be, for example, the “target” directory, or if you are using Eclipse the “.metadata”, “.project” or “.settings” files. In git the easiest way to do this is to have a special file named “.gitignore” at the root of your project with all the exclusion rules you want to set. The syntax of this file is fairly straightforward. You can also have a “.gitignore” file for each subdirectory in your project, but this is less common.
A tricky thing with git and ignore rules is that if the file you want to ignore is already being tracked by git, then adding it to “.gitignore” won’t make git automatically forget about the file. To illustrate this point, consider the following example. First we create a repository with two initial files and commit them:

mkdir gitExample
cd gitExample
touch file1 file2
git init
git add .
git commit -m 'Initial commit'

Let’s now create the .gitignore file to try to ignore “file2” and commit that:

echo file2 > .gitignore
git add .
git commit -m 'Added gitignore for file2'

Now, let’s modify “file2” and see what happens:

echo 'Hello World' >> file2
git status

We get:

# On branch master
# Changes not staged for commit:
#   (use 'git add ...' to update what will be committed)
#   (use 'git checkout -- ...' to discard changes in working directory)
#
#   modified: file2
#
no changes added to commit (use 'git add' and/or 'git commit -a')

Git is effectively still tracking file2 even though it is already in our .gitignore. Like I said before, this happens because git was already tracking the file when we added it to our ignores. So let’s see what happens when we add the “.gitignore” before adding the file to git:

mkdir gitExample
cd gitExample
touch file1 file2
git init
echo 'file2' > .gitignore
git status

And now we get:

# On branch master
#
# Initial commit
#
# Untracked files:
#   (use 'git add ...' to include in what will be committed)
#
#   .gitignore
#   file1
nothing added to commit but untracked files present (use 'git add' to track)

Cool! No mention of file2 anywhere! But what if, as in our first example, we forgot to add the files to our .gitignore initially? How do we stop git from tracking them? A nice command we can use for these cases is git rm --cached <file>.
In our first example:

git rm --cached file2
git commit -m 'Removed file2'

If we now modify the file again and do a git status we get:

echo 'Hello World Again' >> file2
git status

# On branch master
nothing to commit (working directory clean)

Exactly what we wanted! Note that this little command will remove the file from the git index, but it won’t do anything to your working copy. That means you will still have the file in your directory, but the file won’t be a part of the git repository anymore. This also implies that the next time you push to a remote repository the file won’t be pushed and, if it already existed on the remote repository, it will be deleted.

This is fine for the typical use case where you added a file that you never intended to have in the repository. But consider this other use case. In a lot of projects the developers upload into the remote central repository a set of config files for their IDE with default values for things like code formatting and style checking. But at the same time, when developers clone the repository they customize those config files to suit their personal preferences. However, those changes, which apply only to each particular developer, should not be committed and pushed back to the repository. The problem with using git rm --cached here is that, while each developer will still have their own copy, the next time they push to the server they’ll remove the default config from there. In cases like this, there is another pretty useful command that will do the trick: git update-index --assume-unchanged <file>. Let’s see that with an example:

mkdir gitExample
cd gitExample
touch file1 file2
git init
git add .
git commit -m 'Initial commit'

There we have our default “file2”. Now let’s use the git update-index command and make some changes to the file:

git update-index --assume-unchanged file2
echo 'Hello World' >> file2
git status

The result:

# On branch master
nothing to commit (working directory clean)

Magic!
Changes to our file are no longer seen by git and the original file is still on the repository to be cloned by other users with its default value.   Reference: When git ignores your… .gitignore? from our JCG partner Jose Luis at the Development the way it should be blog. ...

Java Memory Model and optimisation

Overview

Many developers of multi-threaded code are familiar with the idea that different threads can have a different view of a value they are holding, but this is not the only reason a thread might not see a change if the code is not made thread safe. The JIT itself can play a part.

Why do different threads see different values?

When you have multiple threads, they will attempt to minimise how much they interact, e.g. by trying not to access the same memory. To do this they each have a separate local copy, e.g. in the Level 1 cache. This cache is usually eventually consistent. I have seen short periods, of between one micro-second and up to 10 milli-seconds, where two threads see different values. Eventually the thread is context switched, or the cache cleared or updated. There is no guarantee as to when this will happen, but it is almost always much less than a second.

How can the JIT play a part?

The Java Memory Model says there is no guarantee that a field which is not thread safe will ever see an update. This allows the JIT to make an optimisation where a value which is only read and never written to is effectively inlined into the code. This means that even if the cache is updated, the change might not be reflected in the code.

An example

This code will run until a boolean is set to false.

static class MyTask implements Runnable {
    private final int loopTimes;
    private boolean running = true;
    boolean stopped = false;

    public MyTask(int loopTimes) {
        this.loopTimes = loopTimes;
    }

    @Override
    public void run() {
        try {
            while (running) {
                longCalculation();
            }
        } finally {
            stopped = true;
        }
    }

    private void longCalculation() {
        for (int i = 1; i < loopTimes; i++)
            if (Math.log10(i) < 0)
                throw new AssertionError();
    }
}

public static void main(String... args) throws InterruptedException {
    int loopTimes = Integer.parseInt(args[0]);
    MyTask task = new MyTask(loopTimes);
    Thread thread = new Thread(task);
    thread.setDaemon(true);
    thread.start();
    TimeUnit.MILLISECONDS.sleep(100);
    task.running = false;
    for (int i = 0; i < 200; i++) {
        TimeUnit.MILLISECONDS.sleep(500);
        System.out.println("stopped = " + task.stopped);
        if (task.stopped)
            break;
    }
}

This code repeatedly performs some work which has no impact on memory. The only difference it makes is how long it takes. By taking longer, it determines whether the code in run() will be optimised before or after running is set to false. If I run this with 10 or 100 and -XX:+PrintCompilation I see:

111 1 java.lang.String::hashCode (55 bytes)
112 2 java.lang.String::charAt (29 bytes)
135 3 vanilla.java.perfeg.threads.OptimisationMain$MyTask::longCalculation (35 bytes)
204 1 % ! vanilla.java.perfeg.threads.OptimisationMain$MyTask::run @ 0 (31 bytes)
stopped = false
stopped = false
stopped = false
... many deleted ...
stopped = false
stopped = false
stopped = false

If I run this with 1000 you can see that run() hasn’t been compiled and the thread stops:

112 1 java.lang.String::hashCode (55 bytes)
112 2 java.lang.String::charAt (29 bytes)
133 3 vanilla.java.perfeg.threads.OptimisationMain$MyTask::longCalculation (35 bytes)
135 1 % vanilla.java.perfeg.threads.OptimisationMain$MyTask::longCalculation @ 2 (35 bytes)
stopped = true

Once the run() method has been compiled, the change is never seen, even though the thread will have been context switched etc. many times.

How to fix this

The simple solution is to make the field volatile. This will guarantee the field’s value is consistent, not just eventually consistent, which is what the cache might do for you.

Conclusion

While there are many examples of questions like “Why doesn’t my thread stop?”
The answer has more to do with the Java Memory Model, which allows the JIT to “inline” the fields, than with the hardware having multiple copies of the data in different caches.   Reference: Java Memory Model and optimisation from our JCG partner Peter Lawrey at the Vanilla Java blog. ...
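The volatile fix can be sketched as a minimal, runnable variant of the example above; the class and method names are mine, and the busy-work calculation is omitted:

```java
import java.util.concurrent.TimeUnit;

public class VolatileStopDemo {

    static class MyTask implements Runnable {
        // volatile guarantees the writer thread's update to 'running'
        // becomes visible to the reader thread, so the JIT cannot
        // hoist the field read out of the loop.
        volatile boolean running = true;
        volatile boolean stopped = false;

        @Override
        public void run() {
            while (running) {
                Thread.yield(); // stand-in for longCalculation()
            }
            stopped = true;
        }
    }

    // Starts the worker, flips the flag, and reports whether it stopped.
    static boolean runAndStop() throws InterruptedException {
        MyTask task = new MyTask();
        Thread thread = new Thread(task);
        thread.start();
        TimeUnit.MILLISECONDS.sleep(50);
        task.running = false; // visible to the worker because the field is volatile
        thread.join(TimeUnit.SECONDS.toMillis(5));
        return task.stopped;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("stopped = " + runAndStop());
        // prints stopped = true, regardless of JIT compilation
    }
}
```

With the volatile modifier removed, this sketch reproduces the article's hang once run() is compiled; with it, the loop reliably observes the write and exits.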

Scala: Collections 1

This post contains some info on Scala’s collections.

Problem? We want a function that will take a List of rugby players as input and return the names of those players that play for Leinster and can run the 100 meters, ordered from the fastest to the slowest.

Step 1: Have a representation for a rugby player.

Ok, so it’s obvious we want something like a POJO to represent a rugby player. This representation should have a player’s name, their team and the time they can run the 100 meters in. Let’s use Scala’s case class construct, which removes the need for boilerplate code.

case class RugbyPlayerCaseClass(team: String, sprintTime100M: BigDecimal, name: String)

Step 2: Create some rugby players

val lukeFitzGerald = RugbyPlayerCaseClass("Leinster", 10.2, "Luke Fitzgerald")
val fergusMcFadden = RugbyPlayerCaseClass("Leinster", 10.1, "Fergus McFadden")
val rog = RugbyPlayerCaseClass("Munster", 12, "Ronan O'Gara")
val tommyBowe = RugbyPlayerCaseClass("Ulster", 10.3, "Tommy Bowe")
val leoCullen = RugbyPlayerCaseClass("Leinster", 15, "Leo Cullen")

The code above should be self-explanatory. The various rugby players are instantiated. Note the inferred typing: there is no need to declare any of the rugby players as the RugbyPlayerCaseClass type; instead, it is inferred. Another interesting thing is that the keyword val is used. This means the reference is immutable; it is the equivalent of final in Java.

Step 3: Write the function

def validByAge(in: List[RugbyPlayerCaseClass]) =
  in.filter(_.team == "Leinster").sortWith(_.sprintTime100M < _.sprintTime100M).map(_.name)

Key points regarding this function:

The function begins with the def keyword, signifying a function declaration.
A List of RugbyPlayerCaseClass instances is taken as input. The List type is a Scala type.
The return type is optional; in this case it is not explicitly specified, as it is inferred.
The part to the right of the = is what the function does.
In this case the function invokes three different collection operators:

.filter(_.team == "Leinster") – this iterates over every element in the List. In each iteration the _ is filled in with the current value in the List. If the team property of the current rugby player is Leinster, the element is included in the resulting collection.
.sortWith(_.sprintTime100M < _.sprintTime100M) – sortWith is a special method which we can use to sort collections. In this case, we are sorting the output from the previous collection operator, and we are sorting based on the sprint time for the 100M.
.map(_.name) – this maps every element from the output of the sort operator to just its name property.

The function body does not need to be surrounded by {} because it is only one line of code. There is no return statement needed: in Scala, whatever the last line evaluates to will be returned. In this example, since there is only one line, the last line is the first line.

Finally, let’s put it all together:

object RugbyPlayerCollectionDemos {
  def main(args: Array[String]) {
    println("Scala collections stuff!")
    showSomeFilterTricks()
  }

  // Case classes remove the need for boilerplate code.
  case class RugbyPlayerCaseClass(team: String, sprintTime100M: BigDecimal, name: String)

  def showSomeFilterTricks() {
    // team: String, sprintTime100M: BigDecimal, name: String
    val lukeFitzGerald = RugbyPlayerCaseClass("Leinster", 10.2, "Luke Fitzgerald")
    val fergusMcFadden = RugbyPlayerCaseClass("Leinster", 10.1, "Fergus McFadden")
    val rog = RugbyPlayerCaseClass("Munster", 12, "Ronan O'Gara")
    val tommyBowe = RugbyPlayerCaseClass("Ulster", 10.3, "Tommy Bowe")
    val leoCullen = RugbyPlayerCaseClass("Leinster", 15, "Leo Cullen")

    println(validByAge(List(lukeFitzGerald, fergusMcFadden, rog, tommyBowe, leoCullen)))
  }

  def validByAge(in: List[RugbyPlayerCaseClass]) =
    in.filter(_.team == "Leinster").sortWith(_.sprintTime100M < _.sprintTime100M).map(_.name)
}

The above program will output:

Scala collections stuff!
List(Fergus McFadden, Luke Fitzgerald, Leo Cullen)

Something similar in Java

Pre Java 8, implementing the same functionality in Java would take a lot more code:

public class RugbyPlayerCollectionDemos {
    public static void main(String args[]) {
        RugbyPlayerCollectionDemos collectionDemos = new RugbyPlayerCollectionDemos();
        collectionDemos.showSomeFilterTricks();
    }

    public void showSomeFilterTricks() {
        // team: String, sprintTime100M: BigDecimal, name: String
        final RugbyPlayerPOJO lukeFitzGerald = new RugbyPlayerPOJO("Leinster", new BigDecimal(10.2), "Luke Fitzgerald");
        final RugbyPlayerPOJO fergusMcFadden = new RugbyPlayerPOJO("Leinster", new BigDecimal(10.1), "Fergus McFadden");
        final RugbyPlayerPOJO rog = new RugbyPlayerPOJO("Munster", new BigDecimal(12), "Ronan O'Gara");
        final RugbyPlayerPOJO tommyBowe = new RugbyPlayerPOJO("Ulster", new BigDecimal(10.3), "Tommy Bowe");
        final RugbyPlayerPOJO leoCullen = new RugbyPlayerPOJO("Leinster", new BigDecimal(15), "Leo Cullen");

        List<RugbyPlayerPOJO> rugbyPlayers = Arrays.asList(lukeFitzGerald, fergusMcFadden, rog, tommyBowe, leoCullen);

        System.out.println(filterRugbyPlayers(rugbyPlayers));
    }

    /**
     * Return the names of Leinster rugby players in the order of their sprint times.
     */
    public List<String> filterRugbyPlayers(List<RugbyPlayerPOJO> pojos) {
        ArrayList<RugbyPlayerPOJO> leinsterRugbyPlayers = new ArrayList<RugbyPlayerPOJO>();

        for (RugbyPlayerPOJO pojo : pojos) {
            if (pojo.getTeam().equals("Leinster")) {
                leinsterRugbyPlayers.add(pojo);
            }
        }

        RugbyPlayerPOJO[] rugbyPlayersAsArray = leinsterRugbyPlayers.toArray(new RugbyPlayerPOJO[0]);

        Arrays.sort(rugbyPlayersAsArray, new Comparator<RugbyPlayerPOJO>() {
            public int compare(RugbyPlayerPOJO rugbyPlayer1, RugbyPlayerPOJO rugbyPlayer2) {
                return rugbyPlayer1.getSprintTime100M().compareTo(rugbyPlayer2.getSprintTime100M());
            }
        });

        List<String> rugbyPlayersNamesToReturn = new ArrayList<String>();

        for (RugbyPlayerPOJO rugbyPlayerPOJO : rugbyPlayersAsArray) {
            rugbyPlayersNamesToReturn.add(rugbyPlayerPOJO.getName());
        }

        return rugbyPlayersNamesToReturn;
    }

    class RugbyPlayerPOJO {
        private BigDecimal sprintTime100M;
        private String team;
        private String name;

        public RugbyPlayerPOJO(String team, BigDecimal sprintTime100M, String name) {
            this.name = name;
            this.sprintTime100M = sprintTime100M;
            this.team = team;
        }

        public BigDecimal getSprintTime100M() {
            return sprintTime100M;
        }

        public String getTeam() {
            return team;
        }

        public String getName() {
            return name;
        }
    }
}

Does Java 8 help out? Yes. According to the Project Lambda specs, Java 8 will have similar-looking filter, map and sort functions. The functionality in this post would, in the pre-release Project Lambda syntax, look something like:

List<RugbyPlayerPOJO> rugbyPlayers = Arrays.asList(lukeFitzGerald, fergusMcFadden, rog, tommyBowe, leoCullen);
//...
List<String> filteredPlayersNames = rugbyPlayers.filter(e -> e.getTeam().equals("Leinster")).
    sorted((a, b) -> a.getSprintTime100M().compareTo(b.getSprintTime100M())).mapped(e -> { return e.getName(); }).into(new List<>());

So Java 8 is definitely catching up a great deal in this regard. But will it be enough?   Reference: Scala: Collections 1 from our JCG partner Alex Staveley at the Dublin’s Tech Blog blog. ...
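The snippet above reflects pre-release Project Lambda syntax; for comparison, here is a sketch of the same pipeline written against the java.util.stream API as it eventually shipped in Java 8 (the class and method names here are illustrative, not from the article):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class RugbyStreamsDemo {

    static class Player {
        final String team;
        final double sprintTime100M;
        final String name;

        Player(String team, double sprintTime100M, String name) {
            this.team = team;
            this.sprintTime100M = sprintTime100M;
            this.name = name;
        }
    }

    static List<Player> samplePlayers() {
        return Arrays.asList(
                new Player("Leinster", 10.2, "Luke Fitzgerald"),
                new Player("Leinster", 10.1, "Fergus McFadden"),
                new Player("Munster", 12.0, "Ronan O'Gara"),
                new Player("Ulster", 10.3, "Tommy Bowe"),
                new Player("Leinster", 15.0, "Leo Cullen"));
    }

    // Leinster players' names, fastest 100m time first:
    // filter -> sorted -> map -> collect mirrors the Scala pipeline.
    static List<String> fastestLeinsterNames(List<Player> players) {
        return players.stream()
                .filter(p -> p.team.equals("Leinster"))
                .sorted(Comparator.comparingDouble((Player p) -> p.sprintTime100M))
                .map(p -> p.name)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(fastestLeinsterNames(samplePlayers()));
        // prints [Fergus McFadden, Luke Fitzgerald, Leo Cullen]
    }
}
```

The shipped API differs from the speculative snippet mainly in needing an explicit stream() call and a terminal collect(), and in naming (map rather than mapped, no into).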

Structural (or) Type Safe Duck Typing in Scala

Structural typing as defined by Wikipedia: “A structural type system (or property-based type system) is a major class of type systems, in which type compatibility and equivalence are determined by the type’s structure, and not by other characteristics such as its name or place of declaration.”

Structural types in Scala allow code modularity in some specific situations. For instance, a behaviour may be implemented across several classes and need to be invoked based on the structure of the type. This approach rules out the need for an abstract class or trait merely for the purpose of calling a single overridden method. Structural typing not only adds syntactic sugar but also makes the code much more modular.

Let’s consider a behaviour ‘walk’ in the classes Cat and Dog. The StrucType class’s whoIsWalking takes a type parameter which states: “accept any object which has a method walk that returns a String”. The type is aliased with a variable c, and within the method the aliased variable can invoke walk.

class StrucType {
  def whoIsWalking(c: {def walk(): String}) = println(c.walk)
}

Below are the classes which have the walk method in common:

class Cat {
  def walk(): String = "Cat walking"
}

class Dog {
  def walk(): String = "Dog walking"
}

object Main {
  def main(args: Array[String]) {
    println("Hello Scala")
    val walkerStruct = new StrucType()
    walkerStruct.whoIsWalking(new Cat())
    walkerStruct.whoIsWalking(new Dog())
  }
}

Structural typing can also be considered for refactoring your next strategy pattern implementation. I am planning to explain that in my next post, so stay tuned!   Reference: Structural (or) Type Safe Duck Typing in Scala from our JCG partner Prasanna Kumar at the Prassee on Scala blog. ...
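For contrast, Java has no structural types; the closest approximation is reflection, which defers the "has a walk method" check to runtime instead of compile time. A hedged sketch mirroring the Cat/Dog example (class and method names are mine):

```java
import java.lang.reflect.Method;

public class DuckTypingDemo {

    public static class Cat {
        public String walk() { return "Cat walking"; }
    }

    public static class Dog {
        public String walk() { return "Dog walking"; }
    }

    // Invokes walk() on any object that happens to declare it.
    // Unlike Scala's structural types, a missing method surfaces
    // only at runtime as a NoSuchMethodException.
    public static String whoIsWalking(Object c) throws Exception {
        Method walk = c.getClass().getMethod("walk");
        return (String) walk.invoke(c);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(whoIsWalking(new Cat()));
        System.out.println(whoIsWalking(new Dog()));
    }
}
```

This illustrates what the "type safe" in the article's title buys you: Scala's compiler rejects a caller lacking walk(): String, whereas the Java version fails only when invoked.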

Loan pattern in Java (a.k.a lender lendee pattern)

This post is about implementing the loan pattern in Java.

Use Case

Implement separation between the code that holds a resource and the code that accesses it, such that the accessing code doesn’t need to manage the resource. This use case holds true when we write code to read/write a file or query SQL/NoSQL DBs. There are certainly APIs that handle this with the help of AOP, but I wondered whether a pattern-based approach could help us deal with this kind of use case; that’s how I came to know about the Loan Pattern (a.k.a. lender-lendee pattern).

What it does

The loan pattern takes a “lending approach”, i.e. the code which keeps hold of the resource “lends” it to the calling code. The lender (a.k.a. the code which holds the resource) manages the resource once the lendee (the code accessing the resource) has used it (with no interest!). Let’s get into the lender code:

/**
 * This class is an illustration of using the loan pattern (a.k.a. lender-lendee pattern)
 * @author prassee
 */
public class IOResourceLender {

    /**
     * Interface to write data to the buffer. Clients using this
     * class should provide an impl of this interface
     */
    public interface WriteBlock {
        void call(BufferedWriter writer) throws IOException;
    }

    /**
     * Interface to read data from the buffer. Clients using this
     * class should provide an impl of this interface
     */
    public interface ReadBlock {
        void call(BufferedReader reader) throws IOException;
    }

    /**
     * Method which loans / lends the resource. Here {@link FileWriter} is the
     * resource lent.
     * The resource is managed for the given impl of {@link WriteBlock}
     *
     * @param fileName
     * @param block
     * @throws IOException
     */
    public static void writeUsing(String fileName, WriteBlock block) throws IOException {
        File csvFile = new File(fileName);
        if (!csvFile.exists()) {
            csvFile.createNewFile();
        }
        FileWriter fw = new FileWriter(csvFile.getAbsoluteFile(), true);
        BufferedWriter bufferedWriter = new BufferedWriter(fw);
        block.call(bufferedWriter);
        bufferedWriter.close();
    }

    /**
     * Method which loans / lends the resource. Here {@link FileReader} is the
     * resource lent. The resource is managed for
     * the given impl of {@link ReadBlock}
     *
     * @param fileName
     * @param block
     * @throws IOException
     */
    public static void readUsing(String fileName, ReadBlock block) throws IOException {
        File inputFile = new File(fileName);
        FileReader fileReader = new FileReader(inputFile.getAbsoluteFile());
        BufferedReader bufferedReader = new BufferedReader(fileReader);
        block.call(bufferedReader);
        bufferedReader.close();
    }
}

The lender code holds a FileWriter, the resource, and expects an implementation of WriteBlock, so that the writeUsing method just calls the method on the WriteBlock interface while managing the resource around it. On the client (lendee) side we provide an anonymous implementation of WriteBlock. Here is the lendee code; I’m just giving a method, and it’s up to you to use it in whichever class you like:

public void writeColumnNameToMetaFile(final String attrName, String fileName, final String[] colNames) throws IOException {
    IOResourceLender.writeUsing(fileName, new IOResourceLender.WriteBlock() {
        public void call(BufferedWriter out) throws IOException {
            StringBuilder buffer = new StringBuilder();
            for (String string : colNames) {
                buffer.append(string);
                buffer.append(',');
            }
            out.append(attrName + " = " + buffer.toString());
            out.newLine();
        }
    });
}

The example uses the loan pattern for a simple file IO operation.
However, this code could be further improved by providing abstract lenders and lendees. The code for this post is shared in the following gist: https://gist.github.com/4481190 I welcome your comments and suggestions!   Reference: Loan pattern in Java (a.k.a lender-lendee pattern) from our JCG partner Prasanna Kumar at the Prassee on Scala blog. ...
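To make the idea concrete outside of the article's file-IO classes, here is a minimal, self-contained sketch of the same lender/lendee split using only the JDK. The class and method names are mine, not the article's, and unlike the article's version it closes the writer in a finally block, so the resource is released even when the lendee's block throws:

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class LoanDemo {

    // The block the lendee supplies; mirrors the article's WriteBlock interface.
    public interface WriteBlock {
        void call(BufferedWriter writer) throws IOException;
    }

    // The lender: opens the writer, lends it to the block, and always closes it —
    // even if the block throws, which the article's version does not guarantee.
    public static void writeUsing(Path file, WriteBlock block) throws IOException {
        BufferedWriter writer = Files.newBufferedWriter(file);
        try {
            block.call(writer);
        } finally {
            writer.close();
        }
    }

    // Round-trip demo: lend a writer to an anonymous block, then read the file back.
    public static String demo() {
        try {
            Path tmp = Files.createTempFile("loan", ".txt");
            writeUsing(tmp, new WriteBlock() {
                public void call(BufferedWriter out) throws IOException {
                    out.write("hello loan pattern");
                }
            });
            String content = new String(Files.readAllBytes(tmp));
            Files.delete(tmp);
            return content;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // hello loan pattern
    }
}
```

On Java 8+ the WriteBlock interface is a functional interface, so the anonymous class can shrink to a lambda: `writeUsing(tmp, out -> out.write("hello loan pattern"));`.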

SpiderMonkey to V8 for MongoDB and mongometer

With 10gen switching the default JavaScript engine for MongoDB 2.3/2.4 from SpiderMonkey to V8 I thought I’d take the opportunity to compare the relative performances of the releases using mongometer. Being a Security bod, I really should have looked at the Additional Authentication Features first… Hey ho. I’ll document the steps taken during the comparison, including the set up, so this can be repeated and validated – just in case anyone is interested – but mainly so I can remind myself of what I did; memory, sieve.       The set up I’m going to install 2.2.2 and 2.3.2 side-by-side on a dedicated machine. I’ll then use the latest version of the Java driver with mongometer. $ wget http://fastdl.mongodb.org/linux/mongodb-linux-x86_64-2.3.2.tgz $ wget http://fastdl.mongodb.org/linux/mongodb-linux-x86_64-2.3.2.tgz.md5 I got a 403 response for this request… $ wget http://fastdl.mongodb.org/linux/mongodb-linux-x86_64-2.2.2.tgz $ wget http://fastdl.mongodb.org/linux/mongodb-linux-x86_64-2.2.2.tgz.md5$ md5sum -c mongodb-linux-x86_64-2.2.2.tgz.md5 md5sum: mongodb-linux-x86_64-2.2.2.tgz.md5: no properly formatted MD5 checksum lines foundGrrr. An md5 file is supposed to be the checksum (then x2 spaces) and then the filename of the file being checksummed. 
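As an aside, computing the digest yourself rather than relying on `md5sum -c` is straightforward in Java. This is a JDK-only sketch (the class name is mine, not part of mongometer or MongoDB); it hex-encodes the digest in lowercase to match md5sum's output:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5Check {

    // Hex-encode a digest in lowercase, like md5sum does.
    static String toHex(byte[] digest) {
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b & 0xff));
        }
        return hex.toString();
    }

    static MessageDigest md5() {
        try {
            return MessageDigest.getInstance("MD5");
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 is always present in the JDK
        }
    }

    // Digest of an in-memory byte array.
    public static String md5Of(byte[] data) {
        return toHex(md5().digest(data));
    }

    // Stream a (possibly large) file through the digest, as md5sum does.
    public static String md5Of(Path file) {
        MessageDigest md = md5();
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                md.update(buffer, 0, read);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return toHex(md.digest());
    }

    public static void main(String[] args) {
        System.out.println(md5Of(new byte[0])); // d41d8cd98f00b204e9800998ecf8427e
    }
}
```

Comparing `md5Of(Paths.get("mongodb-linux-x86_64-2.2.2.tgz"))` against the published value automates the eyeball check below.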
I’ll have to eyeball them instead, well, eyeball the one that I could actually download… $ md5sum mongodb-linux-x86_64-2.2.2.tgz be0f5969b0ca23a0a383e4ca2ce50a39 mongodb-linux-x86_64-2.2.2.tgz $ cat mongodb-linux-x86_64-2.2.2.tgz.md5 be0f5969b0ca23a0a383e4ca2ce50a39 Configure $ tar -zxvf ~/mongodb-linux-x86_64-2.2.2.tgz $ sudo mkdir -p /usr/lib/mongodb/2.2.2 $ sudo mv mongodb-linux-x86_64-2.2.2/* /usr/lib/mongodb/2.2.2/ $ rm -r mongodb-linux-x86_64-2.2.2 $ sudo mkdir -p /data/db/2.2.2 $ sudo chown `id -un` /data/db/2.2.2 $ /usr/lib/mongodb/2.2.2/bin/mongod --port 27000 --dbpath /data/db/2.2.2 --logpath /data/db/2.2.2/mongod.log$ tar -zxvf ~/mongodb-linux-x86_64-2.3.2.tgz $ sudo mkdir -p /usr/lib/mongodb/2.3.2 $ sudo mv mongodb-linux-x86_64-2.3.2/* /usr/lib/mongodb/2.3.2/ $ rm -r mongodb-linux-x86_64-2.3.2 $ sudo mkdir -p /data/db/2.3.2 $ sudo chown `id -un` /data/db/2.3.2 $ /usr/lib/mongodb/2.3.2/bin/mongod --port 27001 --dbpath /data/db/2.3.2 --logpath /data/db/2.3.2/mongod.log Let’s check they are running. $ ps -ef | grep mongod 1795 /usr/lib/mongodb/2.2.2/bin/mongod --port 27000 --dbpath /data/db/2.2.2 --logpath /data/db/2.2.2/mongod.log 2059 /usr/lib/mongodb/2.3.2/bin/mongod --port 27001 --dbpath /data/db/2.3.2 --logpath /data/db/2.3.2/mongod.logNow, let’s kill one (gracefully) and move on to the interesting stuff. $ sudo kill -15 2059 $ ps -ef | grep mongod 1795 /usr/lib/mongodb/2.2.2/bin/mongod --port 27000 --dbpath /data/db/2.2.2 --logpath /data/db/2.2.2/mongod.log Now I’m jumping on to another box. $ wget https://github.com/downloads/mongodb/mongo-java-driver/mongo-2.10.1.jar $ cp mongo-2.10.1.jar /usr/lib/jmeter/2.8/lib/ext $ cp ~/IdeaProjects/mongometer/out/artifacts/mongometer_jar/mongometer.jar /usr/lib/jmeter/2.8/lib/ext $ /usr/lib/jmeter/2.8/bin/jmeter.sh The tests The tests are really rather basic; I’ll perform an insert into two different databases, and perform finds against those databases. 
Version 2.2.2 show dbs local 0.078125GB> show dbs jmeter 0.203125GB jmeter2 0.203125GB local 0.078125GB> use jmeter > db.jmeter.find().count() 1000 > db.dropDatabase()> use jmeter2 > db.jmeter.find().count() 1000 > db.dropDatabase()$ ps -ef | grep mongo 2690 /usr/lib/mongodb/2.2.2/bin/mongod --port 27000 --dbpath /data/db/2.2.2 --logpath /data/db/2.2.2/mongod.log$ sudo kill -15 2690 $ ps -ef | grep mongo Nothing. Let’s get the 2.3.2 instance up and running. $ /usr/lib/mongodb/2.3.2/bin/mongod --port 27001 --dbpath /data/db/2.3.2 --logpath /data/db/2.3.2/mongod.log$ ps -ef | grep mongo 2947 /usr/lib/mongodb/2.3.2/bin/mongod --port 27001 --dbpath /data/db/2.3.2 --logpath /data/db/2.3.2/mongod.log Version 2.3.2 > show dbs local 0.078125GB> show dbs jmeter 0.203125GB jmeter2 0.203125GB local 0.078125GB> use jmeter > db.jmeter.find().count() 1000 > db.dropDatabase()> use jmeter2 > db.jmeter.find().count() 1000 > db.dropDatabase() Conclusions I guess you should draw your own. I ran this a couple of times and am considering scripting it so the environments are cleaned down prior to each run, I could probably add more complex queries too. Perhaps if I find some time next weekend then I will.   Reference: SpiderMonkey to V8 for MongoDB and mongometer from our JCG partner Jan Ettles at the Exceptionally exceptional exceptions blog. ...

Spring MVC 3: Upload multiple files

It was just another long day at the office with the database not available and one of the team members lagging by a week now. So, we had to work as a team to get it delivered. In Spring 3, it looked straightforward to upload a file. However, there was little help on offer for uploading multiple files from a JSP file.               There are four basic things which need to be done to upload multiple files: a) The JSP needs to have the input[file] elements passed as an array. <td><input name="fileData[0]" id="image0" type="file" /></td> <td><input name="fileData[1]" id="image1" type="file" /></td> b) The ModelAttribute/Model object in Spring MVC needs to have a list of MultipartFile. import java.util.List; import org.springframework.web.multipart.commons.CommonsMultipartFile; public class UploadItem { private String filename; private List<CommonsMultipartFile> fileData; c) Configure Multipart Resolver bean in dispatcher-servlet.xml[applicationContext-servlet.xml] <!-- Configure the multipart resolver --> <bean id="multipartResolver" class="org.springframework.web.multipart.commons.CommonsMultipartResolver"> </bean> d) Logic to read the files from the Model and store it in a file location in the Controller layer. @RequestMapping(method = RequestMethod.POST) public String create(UploadItem uploadItem, BindingResult result, HttpServletRequest request, HttpServletResponse response, HttpSession session) { if (result.hasErrors()) { for (ObjectError error : result.getAllErrors()) { System.err.println("Error: " + error.getCode() + " - " + error.getDefaultMessage()); } return "/uploadfile"; } // Some type of file processing...
System.err.println("-------------------------------------------"); try { for(MultipartFile file:uploadItem.getFileData()){ String fileName = null; InputStream inputStream = null; OutputStream outputStream = null; if (file.getSize() > 0) { inputStream = file.getInputStream(); if (file.getSize() > 20000) { System.out.println("File Size exceeded:::" + file.getSize()); return "/uploadfile"; } System.out.println("size::" + file.getSize()); fileName = request.getRealPath("") + "/images/" + file.getOriginalFilename(); outputStream = new FileOutputStream(fileName); System.out.println("fileName:" + file.getOriginalFilename()); int readBytes = 0; byte[] buffer = new byte[10000]; while ((readBytes = inputStream.read(buffer, 0, 10000)) != -1) { outputStream.write(buffer, 0, readBytes); } outputStream.close(); inputStream.close(); // .......................................... session.setAttribute("uploadFile", file.getOriginalFilename()); } //MultipartFile file = uploadItem.getFileData(); } } catch (Exception e) { e.printStackTrace(); } return "redirect:/forms/uploadfileindex"; } I have extended the example which is found @ RoseIndia to dynamically create the file nodes and post them to the Controller. Just download the source-code and replace the below jsp file and make other necessary changes: Upload.jsp <%@page contentType="text/html;charset=UTF-8"%> <%@page pageEncoding="UTF-8"%> <%@ page session="false"%> <%@ taglib prefix="form" uri="http://www.springframework.org/tags/form"%><html> <head> <META http-equiv="Content-Type" content="text/html;charset=UTF-8"> <title>Upload Example</title> <script language="JavaScript"> var count=0; function add(type) { //Create an input type dynamically. var table = document.getElementById("fileUploadTable"); var tr = document.createElement("tr"); var td = document.createElement("td"); var element = document.createElement("input");//Assign different attributes to the element. 
element.setAttribute("type", "file"); element.setAttribute("value", ""); element.setAttribute("name", "fileData["+type+"]"); //Append the element in page (in span). td.appendChild(element); tr.appendChild(td); table.appendChild(tr); } function Validate() { var image =document.getElementById("image").value; if(image!=''){ var checkimg = image.toLowerCase(); if (!checkimg.match(/(\.jpg|\.png|\.JPG|\.PNG|\.jpeg|\.JPEG)$/)){ alert("Please enter Image File Extensions .jpg,.png,.jpeg"); document.getElementById("image").focus(); return false; } } return true; }</script> </head> <body> <form:form modelAttribute="uploadItem" name="frm" method="post" enctype="multipart/form-data" onSubmit="return Validate();"> <fieldset><legend>Upload File</legend> <table > <tr> <input type="button" name="Add Image" onclick="add(count++)" value="Add Image"/> </tr> <tr> <table id="fileUploadTable"> <!--td><form:label for="fileData" path="fileData">File</form:label><br /> </td> <td><input name="fileData[0]" id="image0" type="file" /></td> <td><input name="fileData[1]" id="image1" type="file" /></td--> </table> </tr> <tr> <td><br /> </td> <td><input type="submit" value="Upload" /></td> </tr> </table> </fieldset> </form:form> </body> </html> In UploadItem.java, generate getter and setter methods for the private List<CommonsMultipartFile> fileData field; in UploadFileController.java just copy and paste the create(…) method mentioned above. Note: If you are still facing issues with file upload in Spring MVC, please add a MultipartFilter. Refer here.
<filter> <filter-name>multipartFilter</filter-name> <filter-class>org.springframework.web.multipart.support.MultipartFilter</filter-class> </filter> <filter-mapping> <filter-name>multipartFilter</filter-name> <url-pattern>/springrest/*</url-pattern> </filter-mapping> <bean id="filterMultipartResolver" class="org.springframework.web.multipart.commons.CommonsMultipartResolver"> <property name="maxUploadSize"> <value>10000000</value> </property> </bean>   Reference: Upload multiple files in Spring MVC 3 from our JCG partner Srinivas Ovn at the Bemused blog. ...
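As a footnote to the controller code above: the byte-copy loop inside create() is worth isolating into a small helper. This is a sketch of that loop using only the JDK (class and method names are mine); unlike the inline version, it closes both streams in a finally block so they are released even when a write fails:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.UncheckedIOException;

public class StreamCopy {

    // Copy an upload's InputStream to a destination 10 KB at a time, mirroring
    // the loop in create(), but with the streams closed in a finally block.
    public static long copy(InputStream in, OutputStream out) throws IOException {
        long total = 0;
        try {
            byte[] buffer = new byte[10000];
            int read;
            while ((read = in.read(buffer, 0, buffer.length)) != -1) {
                out.write(buffer, 0, read);
                total += read;
            }
        } finally {
            out.close();
            in.close();
        }
        return total;
    }

    // In-memory round trip used for the demo below.
    public static String demo() {
        try {
            ByteArrayInputStream in = new ByteArrayInputStream("hello upload".getBytes());
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            copy(in, out);
            return out.toString();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // hello upload
    }
}
```

In the controller, `copy(file.getInputStream(), new FileOutputStream(fileName))` would then replace the hand-written loop.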

Spring Data JDBC generic DAO implementation – most lightweight ORM ever

I am thrilled to announce the first version of my Spring Data JDBC repository project. The purpose of this open source library is to provide a generic, lightweight and easy to use DAO implementation for relational databases based on JdbcTemplate from the Spring framework, compatible with the Spring Data umbrella of projects.

Design objectives:
- Lightweight, fast and low-overhead. Only a handful of classes, no XML, annotations, reflection
- This is not a full-blown ORM. No relationship handling, lazy loading, dirty checking, caching
- CRUD implemented in seconds
- For small applications where JPA is an overkill
- Use when simplicity is needed or when future migration e.g. to JPA is considered
- Minimalistic support for database dialect differences (e.g. transparent paging of results)

Features. Each DAO provides built-in support for:
- Mapping to/from domain objects through the RowMapper abstraction
- Generated and user-defined primary keys
- Extracting the generated key
- Compound (multi-column) primary keys
- Immutable domain objects
- Paging (requesting a subset of results)
- Sorting over several columns (database agnostic)
- Optional support for many-to-one relationships

Supported databases (continuously tested): MySQL, PostgreSQL, H2, HSQLDB, Derby …and most likely most of the others. Easily extendable to other database dialects via the SqlGenerator class. Easy retrieval of records by ID.

API. Compatible with the Spring Data PagingAndSortingRepository abstraction, all these methods are implemented for you: public interface PagingAndSortingRepository<T, ID extends Serializable> extends CrudRepository<T, ID> { T save(T entity); Iterable<T> save(Iterable<? extends T> entities); T findOne(ID id); boolean exists(ID id); Iterable<T> findAll(); long count(); void delete(ID id); void delete(T entity); void delete(Iterable<?
extends T> entities); void deleteAll(); Iterable<T> findAll(Sort sort); Page<T> findAll(Pageable pageable); } Pageable and Sort parameters are also fully supported, which means you get paging and sorting by arbitrary properties for free. For example say you have userRepository extending the PagingAndSortingRepository<User, String> interface (implemented for you by the library) and you request the 5th page of the USERS table, 10 per page, after applying some sorting: Page<User> page = userRepository.findAll( new PageRequest( 5, 10, new Sort( new Order(DESC, "reputation"), new Order(ASC, "user_name") ) ) ); The Spring Data JDBC repository library will translate this call into (PostgreSQL syntax): SELECT * FROM USERS ORDER BY reputation DESC, user_name ASC LIMIT 10 OFFSET 50 …or even (Derby syntax): SELECT * FROM ( SELECT ROW_NUMBER() OVER () AS ROW_NUM, t.* FROM ( SELECT * FROM USERS ORDER BY reputation DESC, user_name ASC ) AS t ) AS a WHERE ROW_NUM BETWEEN 51 AND 60 No matter which database you use, you’ll get a Page<User> object in return (you still have to provide a RowMapper<User> yourself to translate from the ResultSet to the domain object). If you don’t know the Spring Data project yet, Page<T> is a wonderful abstraction, not only encapsulating List<User>, but also providing metadata such as the total number of records, which page we are currently on, etc. Reasons to use: You consider migration to JPA or even some NoSQL database in the future. Since your code will rely only on methods defined in PagingAndSortingRepository and CrudRepository from the Spring Data Commons umbrella project, you are free to switch from the JdbcRepository implementation (from this project) to: JpaRepository, MongoRepository, GemfireRepository or GraphRepository. They all implement the same common API. Of course don’t expect that switching from JDBC to JPA or MongoDB will be as simple as switching imported JAR dependencies – but at least you minimize the impact by using the same DAO API. You need a fast, simple JDBC wrapper library.
JPA or even MyBatis is an overkill.
- You want to have full control over the generated SQL if needed
- You want to work with objects, but don’t need lazy loading, relationship handling, multi-level caching, dirty checking…
- You need CRUD and not much more
- You want to be DRY
- You are already using Spring or maybe even JdbcTemplate, but still feel like there is too much manual work
- You have very few database tables

Getting started. For more examples and working code don’t forget to examine the project tests. Prerequisites. Maven coordinates: <dependency> <groupId>com.blogspot.nurkiewicz</groupId> <artifactId>jdbcrepository</artifactId> <version>0.1</version> </dependency> Unfortunately the project is not yet in the Maven central repository. For the time being you can install the library in your local repository by cloning it: $ git clone git://github.com/nurkiewicz/spring-data-jdbc-repository.git $ git checkout 0.1 $ mvn javadoc:jar source:jar install In order to start, your project must have a DataSource bean present and transaction management enabled.
Here is a minimal MySQL configuration: @EnableTransactionManagement @Configuration public class MinimalConfig { @Bean public PlatformTransactionManager transactionManager() { return new DataSourceTransactionManager(dataSource()); } @Bean public DataSource dataSource() { MysqlConnectionPoolDataSource ds = new MysqlConnectionPoolDataSource(); ds.setUser("user"); ds.setPassword("secret"); ds.setDatabaseName("db_name"); return ds; } } Entity with auto-generated key Say you have the following database table with an auto-generated key (MySQL syntax): CREATE TABLE COMMENTS ( id INT AUTO_INCREMENT, user_name varchar(256), contents varchar(1000), created_time TIMESTAMP NOT NULL, PRIMARY KEY (id) ); First you need to create a domain object Comment mapping to that table (just like in any other ORM): public class Comment implements Persistable<Integer> { private Integer id; private String userName; private String contents; private Date createdTime; @Override public Integer getId() { return id; } @Override public boolean isNew() { return id == null; } //getters/setters/constructors/... } Apart from standard Java boilerplate you should notice implementing Persistable<Integer>, where Integer is the type of the primary key. Persistable<T> is an interface coming from the Spring Data project and it’s the only requirement we place on your domain object. Finally we are ready to create our CommentRepository DAO: @Repository public class CommentRepository extends JdbcRepository<Comment, Integer> { public CommentRepository() { super(ROW_MAPPER, ROW_UNMAPPER, "COMMENTS"); } public static final RowMapper<Comment> ROW_MAPPER = //see below private static final RowUnmapper<Comment> ROW_UNMAPPER = //see below @Override protected Comment postCreate(Comment entity, Number generatedId) { entity.setId(generatedId.intValue()); return entity; } } First of all we use the @Repository annotation to mark the DAO bean. It enables persistence exception translation. Also such annotated beans are discovered by CLASSPATH scanning.
As you can see we extend JdbcRepository<Comment, Integer> which is the central class of this library, providing implementations of all PagingAndSortingRepository methods. Its constructor has three required dependencies: RowMapper, RowUnmapper and table name. You may also provide ID column name, otherwise default "id" is used. If you ever used JdbcTemplate from Spring, you should be familiar with RowMapper interface. We need to somehow extract columns from ResultSet into an object. After all we don’t want to work with raw JDBC results. It’s quite straightforward: public static final RowMapper<Comment> ROW_MAPPER = new RowMapper<Comment>() { @Override public Comment mapRow(ResultSet rs, int rowNum) throws SQLException { return new Comment( rs.getInt("id"), rs.getString("user_name"), rs.getString("contents"), rs.getTimestamp("created_time") ); } }; RowUnmapper comes from this library and it’s essentially the opposite of RowMapper: takes an object and turns it into a Map. This map is later used by the library to construct SQL CREATE/UPDATE queries: private static final RowUnmapper<Comment> ROW_UNMAPPER = new RowUnmapper<Comment>() { @Override public Map<String, Object> mapColumns(Comment comment) { Map<String, Object> mapping = new LinkedHashMap<String, Object>(); mapping.put("id", comment.getId()); mapping.put("user_name", comment.getUserName()); mapping.put("contents", comment.getContents()); mapping.put("created_time", new java.sql.Timestamp(comment.getCreatedTime().getTime())); return mapping; } }; If you never update your database table (just reading some reference data inserted elsewhere) you may skip RowUnmapper parameter or use MissingRowUnmapper. Last piece of the puzzle is the postCreate() callback method which is called after an object was inserted. You can use it to retrieve generated primary key and update your domain object (or return new one if your domain objects are immutable). If you don’t need it, just don’t override postCreate(). 
Check out JdbcRepositoryGeneratedKeyTest for working code based on this example. By now you might have a feeling that, compared to JPA or Hibernate, there is quite a lot of manual work. However, various JPA implementations and other ORM frameworks are notorious for introducing significant overhead and manifesting some learning curve. This tiny library intentionally leaves some responsibilities to the user in order to avoid complex mappings, reflection, annotations… all the implicitness that is not always desired. This project does not intend to replace mature and stable ORM frameworks. Instead it tries to fill in a niche between raw JDBC and ORM where simplicity and low overhead are key features. Entity with manually assigned key In this example we’ll see how entities with user-defined primary keys are handled. Let’s start from the database model: CREATE TABLE USERS ( user_name varchar(255), date_of_birth TIMESTAMP NOT NULL, enabled BIT(1) NOT NULL, PRIMARY KEY (user_name) ); …and the User domain model: public class User implements Persistable<String> { private transient boolean persisted; private String userName; private Date dateOfBirth; private boolean enabled; @Override public String getId() { return userName; } @Override public boolean isNew() { return !persisted; } public User withPersisted(boolean persisted) { this.persisted = persisted; return this; } //getters/setters/constructors/... } Notice that a special persisted transient flag was added. The contract of CrudRepository.save() from the Spring Data project requires that an entity knows whether it was already saved or not (via the isNew() method) – there are no separate create() and update() methods. Implementing isNew() is simple for auto-generated keys (see Comment above) but in this case we need an extra transient field. If you hate this workaround and you only insert data and never update, you’ll get away with returning true all the time from isNew().
And finally our DAO, the UserRepository bean: @Repository public class UserRepository extends JdbcRepository<User, String> { public UserRepository() { super(ROW_MAPPER, ROW_UNMAPPER, "USERS", "user_name"); } public static final RowMapper<User> ROW_MAPPER = //... public static final RowUnmapper<User> ROW_UNMAPPER = //... @Override protected User postUpdate(User entity) { return entity.withPersisted(true); } @Override protected User postCreate(User entity, Number generatedId) { return entity.withPersisted(true); } } The "USERS" and "user_name" parameters designate the table name and primary key column name. I’ll leave out the details of the mapper and unmapper (see the source code). But please notice the postUpdate() and postCreate() methods. They ensure that once an object is persisted, the persisted flag is set so that subsequent calls to save() will update the existing entity rather than trying to reinsert it. Check out JdbcRepositoryManualKeyTest for working code based on this example. Compound primary key We also support compound primary keys (primary keys consisting of several columns). Take this table as an example: CREATE TABLE BOARDING_PASS ( flight_no VARCHAR(8) NOT NULL, seq_no INT NOT NULL, passenger VARCHAR(1000), seat CHAR(3), PRIMARY KEY (flight_no, seq_no) ); I would like you to notice the type of the primary key in Persistable<T>: public class BoardingPass implements Persistable<Object[]> { private transient boolean persisted; private String flightNo; private int seqNo; private String passenger; private String seat; @Override public Object[] getId() { return pk(flightNo, seqNo); } @Override public boolean isNew() { return !persisted; } //getters/setters/constructors/... } Unfortunately we don’t support small value classes encapsulating all ID values in one object (like JPA does with @IdClass), so you have to live with an Object[] array.
Defining the DAO class is similar to what we’ve already seen: public class BoardingPassRepository extends JdbcRepository<BoardingPass, Object[]> { public BoardingPassRepository() { this("BOARDING_PASS"); } public BoardingPassRepository(String tableName) { super(MAPPER, UNMAPPER, new TableDescription(tableName, null, "flight_no", "seq_no") ); } public static final RowMapper<BoardingPass> ROW_MAPPER = //... public static final RowUnmapper<BoardingPass> UNMAPPER = //... } Two things to notice: we extend JdbcRepository<BoardingPass, Object[]> and we provide two ID column names just as expected: "flight_no", "seq_no". We query such a DAO by providing both flight_no and seq_no (necessarily in that order) values wrapped by Object[]: BoardingPass pass = repository.findOne(new Object[] {"FOO-1022", 42}); No doubt, this is cumbersome in practice, so we provide a tiny helper method which you can statically import: import static com.blogspot.nurkiewicz.jdbcrepository.JdbcRepository.pk; //... BoardingPass foundFlight = repository.findOne(pk("FOO-1022", 42)); Check out JdbcRepositoryCompoundPkTest for working code based on this example. Transactions This library is completely orthogonal to transaction management. Every method of each repository requires a running transaction and it’s up to you to set it up. Typically you would place @Transactional on the service layer (calling the DAO beans). I don’t recommend placing @Transactional over every DAO bean. Caching The Spring Data JDBC repository library does not provide any caching abstraction or support. However, adding a @Cacheable layer on top of your DAOs or services using the caching abstraction in Spring is quite straightforward. See also: @Cacheable overhead in Spring. Contributions …are always welcome. Don’t hesitate to submit bug reports and pull requests. The biggest missing feature now is support for MSSQL and Oracle databases. It would be terrific if someone could have a look at it. Testing This library is continuously tested using Travis.
The test suite consists of 265 tests (53 distinct tests, each run against 5 different databases: MySQL, PostgreSQL, H2, HSQLDB and Derby). When filing bug reports or submitting new features please try including supporting test cases. Each pull request is automatically tested on a separate branch. Building After forking the official repository, building is as simple as running: $ mvn install You’ll notice plenty of exceptions during JUnit test execution. This is normal. Some of the tests run against MySQL and PostgreSQL, available only on the Travis CI server. When these database servers are unavailable, the whole test is simply skipped: Results : Tests run: 265, Failures: 0, Errors: 0, Skipped: 106 Exception stack traces come from the root AbstractIntegrationTest. Design The library consists of only a handful of classes, highlighted in the diagram below: JdbcRepository is the most important class, implementing all PagingAndSortingRepository methods. Each user repository has to extend this class. Also each such repository must at least implement RowMapper and RowUnmapper (the latter only if you want to modify table data). SQL generation is delegated to SqlGenerator. PostgreSqlGenerator and DerbySqlGenerator are provided for databases that don’t work with the standard generator. License This project is released under version 2.0 of the Apache License (same as the Spring framework).   Reference: Spring Data JDBC generic DAO implementation – most lightweight ORM ever from our JCG partner Tomasz Nurkiewicz at the NoBlogDefFound blog. ...
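As a footnote to the paging example earlier in this post: the LIMIT/OFFSET arithmetic behind findAll(Pageable) is simple enough to sketch in plain Java. The class and method names below are mine, not the library's; the point is only the page-to-offset mapping that a SqlGenerator-style component performs:

```java
public class PagingSql {

    // Build a PostgreSQL-style paging clause from a 0-based page index and a
    // page size, the way a SqlGenerator-like component appends it to a query.
    public static String limitClause(int page, int size) {
        int offset = page * size; // PageRequest uses a 0-based page index
        return "LIMIT " + size + " OFFSET " + offset;
    }

    public static void main(String[] args) {
        // Page 5, 10 rows per page: skip the first 50 rows, return rows 51-60,
        // matching the Derby ROW_NUM BETWEEN 51 AND 60 query shown earlier.
        System.out.println(limitClause(5, 10)); // LIMIT 10 OFFSET 50
    }
}
```

Dialects that lack LIMIT/OFFSET (as the Derby example shows) need the same two numbers, just rendered as a ROW_NUMBER() window instead.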
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.