Complex Numbers in Scala

Overview

I recently delivered an introductory talk about Scala at an internal geeks' event at SAP. In this talk, I used an example complex numbers class to illustrate important language concepts and features. In many respects, this is a classical example that can be found in many other introductory materials about Scala, for example in the Scala Tutorial for Java Programmers. Nevertheless, I thought it was a wonderful example worthy of another try. During the talk, I started with a very simple one-liner and gradually added more capabilities to it, while at the same time introducing the language features that made them possible. I ended up with a more or less complete and usable complex numbers implementation in just a few lines of code, which nevertheless allowed things that would not be possible in other languages such as Java: operator arithmetic, seamless conversion between complex and real numbers, and "free" equality and comparison.

In this post, I would like to reproduce this part of my talk. If you are interested in Scala but haven't mastered the language yet, this can be a good introduction to the conciseness and power of this remarkable programming language.

Starting Point

Our starting point is quite simple:

```scala
class Complex(val re: Double, val im: Double)
```

The single line above is the entire class definition. It has two Double fields, which are public (the default in Scala) and immutable (due to the val keyword). The line also implicitly defines a primary two-argument constructor, so Complex instances can already be created and initialised. Let's do this in the Scala interpreter:

```scala
scala> val x = new Complex(1, 2)
x: Complex = Complex@3997ca20
```

If you compare this class definition to the code that would be needed to achieve the same in Java, it becomes evident that Scala is much more concise and elegant here, letting you express your intent clearly in the fewest possible lines of code.
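To make that comparison concrete, here is a rough sketch of what the same one-line Scala class could look like in Java: two final fields, a constructor, and accessors. The class name JavaComplex and the accessor names are just illustrative choices, not part of the original example.

```java
// A sketch of the Java equivalent of: class Complex(val re: Double, val im: Double)
public class JavaComplex {
    private final double re;
    private final double im;

    public JavaComplex(double re, double im) {
        this.re = re;
        this.im = im;
    }

    public double re() { return re; }
    public double im() { return im; }

    public static void main(String[] args) {
        JavaComplex x = new JavaComplex(1, 2);
        System.out.println(x.re() + ", " + x.im());
    }
}
```

Even without equals, hashCode, or toString, the Java version already needs a dozen lines for what Scala expresses in one.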
Overriding Methods

The default string representation of Complex above is rather unfriendly. It would be much better if it presented the class members in a format suitable for a complex number. To achieve this, we will of course override the toString method, which our class inherits from Any, the root of the Scala class hierarchy.

```scala
class Complex(val re: Double, val im: Double) {
  override def toString =
    re + (if (im < 0) "-" + -im else "+" + im) + "*i"
}
```

Note that the override keyword is mandatory in Scala: it has to be used whenever you override something, otherwise you get a compiler error. This is one of the many ways Scala helps you as a programmer avoid silly mistakes, in this case accidental overrides. Now, if you create a Complex instance in the interpreter, you will get:

```scala
scala> val x = new Complex(1, 2)
x: Complex = 1.0+2.0*i
```

Adding Methods and Operators

Since complex numbers are numbers, one thing we would like to do with them is arithmetic, such as addition. One way to achieve this would be to define a new add method:

```scala
class Complex(val re: Double, val im: Double) {
  def add(c: Complex) = new Complex(re + c.re, im + c.im)
  ...
}
```

With the above definition, we can add complex numbers by invoking our new method using the familiar notation:

```scala
scala> val x = new Complex(1, 2)
x: Complex = 1.0+2.0*i

scala> val y = new Complex(3, 4)
y: Complex = 3.0+4.0*i

scala> x.add(y)
res0: Complex = 4.0+6.0*i
```

In Scala, we could also invoke our method (in fact, any method) using operator notation, with the same result:

```scala
scala> x add y
res1: Complex = 4.0+6.0*i
```

And since we have operator notation, we could just as well call our method + instead of add. Yes, this is possible in Scala.

```scala
class Complex(val re: Double, val im: Double) {
  def +(c: Complex) = new Complex(re + c.re, im + c.im)
  ...
}
```

Now, adding x and y can be expressed simply as:

```scala
scala> x + y
res2: Complex = 4.0+6.0*i
```

If you are familiar with languages like C++, this may look a lot like operator overloading. In fact, it is not really correct to say that Scala has operator overloading; rather, Scala doesn't really have operators at all. Every operator-looking construct, including arithmetic operations on simple types, is in fact a method call. This is much more consistent and easier to use than traditional operator overloading, which treats operators as a special case. In the final version of our Complex class, we will add the operator methods -, *, and / for the other arithmetic operations.

Overloading Constructors and Methods

Complex numbers with a zero imaginary part are in fact real numbers, so real numbers can be seen simply as a special kind of complex numbers. Therefore, it should be possible to seamlessly convert between these two kinds of numbers and mix them in arithmetic expressions. To achieve this in our example class, we will overload the existing constructor and + method so that they accept Double instead of Complex:

```scala
class Complex(val re: Double, val im: Double) {
  def this(re: Double) = this(re, 0)
  ...
  def +(d: Double) = new Complex(re + d, im)
  ...
}
```

Now, we can create Complex instances by specifying just their real parts, and add real numbers to them:

```scala
scala> val y = new Complex(2)
y: Complex = 2.0+0.0*i

scala> y + 2
res3: Complex = 4.0+0.0*i
```

Constructor and method overloading in Scala is similar to what can be found in Java and other languages. Constructor overloading is somewhat more restrictive, however. To ensure consistency and help avoid common errors, every overloaded constructor has to call the primary constructor in its first statement, and only the primary constructor is allowed to call a superclass constructor.
Implicit Conversions

If instead of y + 2 above we execute 2 + y, we will get an error, since none of the Scala simple types has a + method accepting Complex as an argument. To improve the situation, we can define an implicit conversion from Double to Complex:

```scala
implicit def fromDouble(d: Double) = new Complex(d)
```

With this conversion in place, adding a Complex instance to a double becomes possible:

```scala
scala> 2 + y
res3: Complex = 4.0+0.0*i
```

Implicit conversions are a powerful mechanism for making incompatible types interoperate seamlessly with each other. They almost render similar features such as method overloading obsolete. In fact, with the above conversion, we don't need to overload the + method anymore. There are indeed strong reasons to prefer implicit conversions to method overloading, as explained in Why Method Overloading Sucks in Scala. In the final version of our Complex class, we will add implicit conversions from the other simple types as well.

Access Modifiers

As a true object-oriented language, Scala offers powerful access control features which can help you ensure proper encapsulation. Among them are the familiar private and protected access modifiers, which you can use on fields and methods to restrict their visibility. In our Complex class, we could use a private field to hold the absolute value, or modulus, of a complex number:

```scala
class Complex(val re: Double, val im: Double) {
  private val modulus = sqrt(pow(re, 2) + pow(im, 2))
  ...
}
```

Trying to access modulus from the outside will of course result in an error.

Unary Operators

To allow clients to get the modulus of a Complex instance, we will add a new method that returns it. Since modulus is a very common operation, it would be nice to be able to invoke it again as an operator. However, this has to be a unary operator this time.
Fortunately, Scala allows us to define this kind of operator as well:

```scala
class Complex(val re: Double, val im: Double) {
  private val modulus = sqrt(pow(re, 2) + pow(im, 2))
  ...
  def unary_! = modulus
  ...
}
```

Methods whose names start with unary_ can be invoked as unary operators:

```scala
scala> val y = new Complex(3, 4)
y: Complex = 3.0+4.0*i

scala> !y
res2: Double = 5.0
```

In the final version of our Complex class, we will add unary operators for the + and - signs and for the complex conjugate.

Companion Objects

Besides traditional classes, Scala also allows defining objects with the object keyword, which essentially defines a singleton class and its single instance at the same time. If an object has the same name as a class defined in the same source file, it becomes a companion object of that class. Companion objects have a special relationship to the classes they accompany; in particular, they can access private methods and fields of those classes. Scala has no static keyword, because the language creators felt that it contradicts true object orientation. Therefore, companion objects in Scala are the place to put members that you would define as static in other languages, for example constants, factory methods, and implicit conversions. Let's define the following companion object for our Complex class:

```scala
object Complex {
  val i = new Complex(0, 1)
  def apply(re: Double, im: Double) = new Complex(re, im)
  def apply(re: Double) = new Complex(re)
  implicit def fromDouble(d: Double) = new Complex(d)
}
```

Our companion object has the following members:

- i is a constant for the imaginary unit.
- The two apply methods are factory methods which allow creating Complex instances by invoking Complex(...) instead of the less convenient new Complex(...).
- The implicit conversion fromDouble is the one introduced above.

With the companion object in place, we can now write expressions such as:

```scala
scala> 2 + i + Complex(1, 2)
res3: Complex = 3.0+3.0*i
```

Traits

Strictly speaking, complex numbers are not comparable to each other. Nevertheless, for practical purposes it would be useful to introduce a natural ordering based on their modulus. We would of course like to compare complex numbers with the same operators <, <=, >, and >= that are used to compare other numeric types. One way to achieve this would be to define all four methods. However, this would introduce some boilerplate, as the methods <=, >, and >= would all simply call the < method. In Scala, this can be avoided by using the powerful feature known as traits.

Traits are similar to interfaces in Java, since they are used to define object types by specifying the signatures of the supported methods. Unlike in Java, Scala allows traits to be partially implemented, so it is possible to define default implementations for some methods, similarly to Java 8 default methods. In Scala, a class can extend, or mix in, multiple traits thanks to mixin class composition. For our example, we will mix the Ordered trait into our Complex class. This trait provides implementations of all four comparison operators, which all call the abstract method compare. Therefore, to get all comparison operations "for free", all we need to do is provide a concrete implementation of this method.

```scala
class Complex(val re: Double, val im: Double) extends Ordered[Complex] {
  ...
  def compare(that: Complex) = !this compare !that
  ...
}
```

Now, we can compare complex numbers as desired:

```scala
scala> Complex(1, 2) > Complex(3, 4)
res4: Boolean = false

scala> Complex(1, 2) < Complex(3, 4)
res5: Boolean = true
```

Case Classes and Pattern Matching

Interestingly, comparing Complex instances for equality still doesn't work as expected:

```scala
scala> Complex(1, 2) == Complex(1, 2)
res6: Boolean = false
```

This is because the == method invokes the equals method, which implements reference equality by default. One way to fix this would be to override the equals method appropriately for our class. Of course, overriding equals means overriding hashCode as well. Although that would be rather trivial, it would add an unwelcome bit of boilerplate. In Scala, we can skip all this if we define our class as a case class by adding the keyword case. This automatically adds several useful capabilities, among them the following:

- adequate equals and hashCode implementations
- a companion object with an apply factory method
- class parameters implicitly defined as val

```scala
case class Complex(re: Double, im: Double) {
  ...
}
```

Now, comparing for equality works as expected:

```scala
scala> i == Complex(0, 1)
res6: Boolean = true
```

But the most important capability of case classes is that they can be used in pattern matching, another unique and powerful Scala feature. To illustrate it, let's consider the following toString implementation:

```scala
override def toString() = this match {
  case Complex.i => "i"
  case Complex(re, 0) => re.toString
  case Complex(0, im) => im.toString + "*i"
  case _ => asString
}

private def asString =
  re + (if (im < 0) "-" + -im else "+" + im) + "*i"
```

The above code matches this against several patterns representing the constant i, a real number, a pure imaginary number, and everything else. Although it could be written without pattern matching as well, this way is shorter and easier to understand.
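To appreciate what the case keyword buys us, here is roughly the equals/hashCode boilerplate that a comparable Java class would need. This is a sketch; the JavaComplex name is invented for illustration and is not part of the original example.

```java
import java.util.Objects;

// The equals/hashCode pair that a Scala case class generates automatically,
// spelled out by hand for a hypothetical Java complex-number class.
public class JavaComplex {
    private final double re;
    private final double im;

    public JavaComplex(double re, double im) {
        this.re = re;
        this.im = im;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof JavaComplex)) return false;
        JavaComplex that = (JavaComplex) o;
        return Double.compare(re, that.re) == 0
            && Double.compare(im, that.im) == 0;
    }

    @Override
    public int hashCode() {
        // must be consistent with equals: equal objects get equal hash codes
        return Objects.hash(re, im);
    }
}
```

In Scala, all of this (plus toString and the apply factory) comes from the single keyword case.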
Pattern matching becomes really invaluable if you need to process complex object trees, as it provides a much more elegant and concise alternative to the Visitor design pattern typically used in such cases.

Wrap-up

The final version of our Complex class looks as follows:

```scala
import scala.math._

case class Complex(re: Double, im: Double) extends Ordered[Complex] {
  private val modulus = sqrt(pow(re, 2) + pow(im, 2))

  // Constructors
  def this(re: Double) = this(re, 0)

  // Unary operators
  def unary_+ = this
  def unary_- = new Complex(-re, -im)
  def unary_~ = new Complex(re, -im) // conjugate
  def unary_! = modulus

  // Comparison
  def compare(that: Complex) = !this compare !that

  // Arithmetic operations
  def +(c: Complex) = new Complex(re + c.re, im + c.im)
  def -(c: Complex) = this + -c
  def *(c: Complex) =
    new Complex(re * c.re - im * c.im, im * c.re + re * c.im)
  def /(c: Complex) = {
    require(c.re != 0 || c.im != 0)
    val d = pow(c.re, 2) + pow(c.im, 2)
    new Complex((re * c.re + im * c.im) / d, (im * c.re - re * c.im) / d)
  }

  // String representation
  override def toString() = this match {
    case Complex.i => "i"
    case Complex(re, 0) => re.toString
    case Complex(0, im) => im.toString + "*i"
    case _ => asString
  }

  private def asString =
    re + (if (im < 0) "-" + -im else "+" + im) + "*i"
}

object Complex {
  // Constants
  val i = new Complex(0, 1)

  // Factory methods
  def apply(re: Double) = new Complex(re)

  // Implicit conversions
  implicit def fromDouble(d: Double) = new Complex(d)
  implicit def fromFloat(f: Float) = new Complex(f)
  implicit def fromLong(l: Long) = new Complex(l)
  implicit def fromInt(i: Int) = new Complex(i)
  implicit def fromShort(s: Short) = new Complex(s)
}

import Complex._
```

With this remarkably short and elegant implementation we can do all the things described above, and a few more:

- create instances with Complex(...)
- get the modulus with !x and the conjugate with ~x
- perform arithmetic operations with the usual operators +, -, *, and /
- mix complex, real, and integer numbers freely in arithmetic expressions
- compare for equality with == and !=
- compare based on modulus with <, <=, >, and >=
- get the most natural string representation

If you are inclined to experiment, I would encourage you to paste the above code into the Scala interpreter (using :paste first) and play around with these capabilities to get a better feeling for them.

Conclusion

Scala is considered by many to be a rather complex language. Perhaps this is why it's so suitable for complex numbers... Puns aside, where some people see complexity, I see unmatched elegance and power. I hope this post illustrated that nicely. I am myself still learning Scala and far from being an expert. Are you aware of better ways to implement the above capabilities? I would love to hear about them.

Reference: Complex Numbers in Scala from our JCG partner Stoyan Rachev at the Stoyan Rachev's Blog blog.

java.util.concurrent.Future basics

Hereby I am starting a series of articles about the future concept in programming languages (also known as promises or delays), with the working title Back to the Future. Futures are a very important abstraction, even more so these days, due to the growing demand for asynchronous, event-driven, parallel and scalable systems. In the first article we'll discover the most basic java.util.concurrent.Future&lt;T&gt; interface. Later on we will jump into other frameworks, libraries or even languages. Future&lt;T&gt; is pretty limited, but essential to understand, ekhm, future parts.

In a single-threaded application, when you call a method it returns only when the computations are done (IOUtils.toString() comes from Apache Commons IO):

```java
public String downloadContents(URL url) throws IOException {
    try (InputStream input = url.openStream()) {
        return IOUtils.toString(input, StandardCharsets.UTF_8);
    }
}

//...
final String contents = downloadContents(new URL("http://www.example.com"));
```

downloadContents() looks harmless1, but it can take arbitrarily long to complete. Moreover, in order to reduce latency you might want to do other, independent processing in the meantime, while waiting for the results. In the old days you would start a new Thread and somehow wait for the results (shared memory, locks, the dreadful wait()/notify() pair, etc.). With Future&lt;T&gt; it's much more pleasant:

```java
public static Future<String> startDownloading(URL url) {
    //...
}

final Future<String> contentsFuture = startDownloading(new URL("http://www.example.com"));
//other computation
final String contents = contentsFuture.get();
```

We will implement startDownloading() soon. For now it's important that you understand the principles. startDownloading() does not block, waiting for the external website. Instead it returns immediately with a lightweight Future&lt;String&gt; object. This object is a promise that a String will be available in the future.
You don't know when, but keep this reference, and once the result is there, you'll be able to retrieve it using Future.get(). In other words, a Future is a proxy or a wrapper around an object that is not yet there. Once the asynchronous computation is done, you can extract it.

So what API does Future provide? Future.get() is the most important method. It blocks and waits until the promised result is available (resolved). So if we really need that String, just call get() and wait. There is also an overloaded version that accepts a timeout, so you won't wait forever if something goes wild; a TimeoutException is thrown if you wait for too long.

In some use cases you might want to peek at the Future and continue if the result is not yet available. This is possible with isDone(). Imagine a situation where your user waits for some asynchronous computation and you'd like to let him know that we are still waiting, and do some computation in the meantime:

```java
final Future<String> contentsFuture = startDownloading(new URL("http://www.example.com"));
while (!contentsFuture.isDone()) {
    askUserToWait();
    doSomeComputationInTheMeantime();
}
final String contents = contentsFuture.get();
```

The last call to contentsFuture.get() is guaranteed to return immediately and not block, because Future.isDone() returned true. If you follow the pattern above, make sure you are not busy waiting, calling isDone() millions of times per second.

Cancelling futures is the last aspect we have not covered yet. Imagine you started some asynchronous job and you can only wait for it a given amount of time. If the result isn't there after, say, 2 seconds, we give up and either propagate an error or work around it. However, if you are a good citizen, you should somehow tell this future object: I no longer need you, forget about it. You save processing resources by not running obsolete tasks. The syntax is simple:

```java
contentsFuture.cancel(true);    //meh...
```

We all love cryptic boolean parameters, don't we? Cancelling comes in two flavours.
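Before digging into those flavours, the timeout variant of get() described above can be made concrete with a small sketch, in which a sleeping task stands in for the slow download (the method name and the durations are made up for illustration):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutDemo {

    // Waits for the result at most timeoutMillis, then gives up.
    static String fetchWithTimeout(long taskMillis, long timeoutMillis) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> future = pool.submit(() -> {
            Thread.sleep(taskMillis);   // stands in for the slow download
            return "contents";
        });
        try {
            return future.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true);        // good citizen: stop the obsolete task
            return "timed out";
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchWithTimeout(5_000, 100)); // slow task: "timed out"
        System.out.println(fetchWithTimeout(10, 1_000));  // fast task: "contents"
    }
}
```

Note that when the timeout fires, the underlying task is still running; cancelling it afterwards is exactly the good-citizen behaviour discussed above.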
By passing false as the mayInterruptIfRunning parameter, we only cancel tasks that haven't yet started, i.e. when the Future represents the result of a computation that has not even begun; if our Callable.call() is already in the middle of its work, we let it finish. However, if we pass true, Future.cancel() will be more aggressive, trying to interrupt already running jobs as well. How? Think about all those methods that throw the infamous InterruptedException, namely Thread.sleep(), Object.wait(), Condition.await(), and many others (including Future.get()). If you are blocked on any such method and someone decides to cancel your Callable, it will actually throw an InterruptedException, signalling that someone is trying to interrupt the currently running task.

So we now understand what Future&lt;T&gt; is: a place-holder for something that you will get in the future. It's like keys to a car that has not yet been manufactured. But how do you actually obtain an instance of Future&lt;T&gt; in your application? The two most common sources are thread pools and asynchronous methods (backed by thread pools for you). Thus our startDownloading() method can be rewritten to:

```java
private final ExecutorService pool = Executors.newFixedThreadPool(10);

public Future<String> startDownloading(final URL url) throws IOException {
    return pool.submit(new Callable<String>() {
        @Override
        public String call() throws Exception {
            try (InputStream input = url.openStream()) {
                return IOUtils.toString(input, StandardCharsets.UTF_8);
            }
        }
    });
}
```

A lot of syntactic boilerplate, but the basic idea is simple: wrap the long-running computation in a Callable&lt;String&gt; and submit() it to a thread pool of 10 threads. Submitting returns some implementation of Future&lt;String&gt;, most likely somehow linked to your task and the thread pool. Obviously your task is not executed immediately. Instead it is placed in a queue which is later (maybe even much later) polled by a thread from the pool.
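The aggressive cancel(true) path can also be sketched end to end: a task that blocks in Thread.sleep() gets interrupted when the future is cancelled. The class name and the synchronization details below are illustrative scaffolding, not part of the original article.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class CancelDemo {

    // Starts a long-running task, cancels it aggressively, and reports
    // whether the running task was actually interrupted.
    static boolean cancelRunningTask() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        AtomicBoolean interrupted = new AtomicBoolean(false);
        CountDownLatch started = new CountDownLatch(1);
        Future<String> future = pool.submit(() -> {
            started.countDown();        // signal that call() is running
            try {
                Thread.sleep(10_000);   // blocking call, can be interrupted
                return "done";
            } catch (InterruptedException e) {
                interrupted.set(true);  // cancel(true) lands here
                throw e;
            }
        });
        started.await();                // make sure the task already began
        future.cancel(true);            // true = mayInterruptIfRunning
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return future.isCancelled() && interrupted.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(cancelRunningTask()); // true
    }
}
```

With cancel(false), by contrast, the same already-running task would be allowed to finish its sleep undisturbed.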
Now it should be clear what the two flavours of cancel() mean: you can always cancel a task that still resides in that queue, but cancelling an already running task is a bit more complex.

Another place where you can meet Future is Spring and EJB. For example, in the Spring framework you can simply annotate your method with @Async:

```java
@Async
public Future<String> startDownloading(final URL url) throws IOException {
    try (InputStream input = url.openStream()) {
        return new AsyncResult<>(
                IOUtils.toString(input, StandardCharsets.UTF_8)
        );
    }
}
```

Notice that we simply wrap our result in AsyncResult, which implements Future. The method itself does not deal with thread pools or asynchronous processing. Later on, Spring will proxy all calls to startDownloading() and run them in a thread pool. The exact same feature is available through the @Asynchronous annotation in EJB.

So we have learned a lot about java.util.concurrent.Future. Now it's time to admit: this interface is quite limited, especially when compared to other languages. More on that later.

1 – are you unfamiliar with the try-with-resources feature of Java 7? You'd better switch to Java 7 now; Java 6 will no longer be maintained in two weeks.

Reference: java.util.concurrent.Future basics from our JCG partner Tomasz Nurkiewicz at the NoBlogDefFound blog.

JAXB tutorial – Getting Started

Note: Check out our JAXB Tutorial for Java XML Binding – The ULTIMATE Guide

What is JAXB?

JAXB stands for Java Architecture for XML Binding. It is used to convert XML to Java objects and Java objects to XML. JAXB defines an API for reading and writing Java objects to and from XML documents. Unlike with SAX and DOM, we don't need to be aware of XML parsing techniques. There are two operations you can perform using JAXB:

- Marshalling: converting a Java object to XML
- Unmarshalling: converting XML to a Java object

JAXB Tutorial

We will create a Java program to marshal and unmarshal. With the help of the annotations and API provided by JAXB, converting a Java object to XML and vice versa becomes very easy.

1. Country.java

A Java object which will be used to convert to and from XML. Create Country.java in src->org.arpit.javapostsforlearning.jaxb.

```java
package org.arpit.javapostsforlearning.jaxb;

import java.util.ArrayList;

import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlElementWrapper;
import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.bind.annotation.XmlType;

// Below annotation defines root element of XML file
@XmlRootElement
// You can define order in which elements will be created in XML file
// Optional
@XmlType(propOrder = { "countryName", "countryPopulation", "listOfStates" })
public class Country {

    private String countryName;
    private double countryPopulation;

    private ArrayList<State> listOfStates;

    public Country() {
    }

    public String getCountryName() {
        return countryName;
    }

    @XmlElement
    public void setCountryName(String countryName) {
        this.countryName = countryName;
    }

    public double getCountryPopulation() {
        return countryPopulation;
    }

    @XmlElement
    public void setCountryPopulation(double countryPopulation) {
        this.countryPopulation = countryPopulation;
    }

    public ArrayList<State> getListOfStates() {
        return listOfStates;
    }

    // XmlElementWrapper generates a wrapper element around XML representation
    @XmlElementWrapper(name = "stateList")
    // XmlElement sets the name of the entities in collection
    @XmlElement(name = "state")
    public void setListOfStates(ArrayList<State> listOfStates) {
        this.listOfStates = listOfStates;
    }
}
```

- @XmlRootElement: This annotation defines the root element of the XML file.
- @XmlType(propOrder = { "list of attributes in order" }): This is used to define the order of elements in the XML file. It is optional.
- @XmlElement: This is used to define an element in the XML file. It sets the name of the entity.
- @XmlElementWrapper(name = "name to be given to that wrapper"): It generates a wrapper element around the XML representation. E.g. in the above example, it will generate a &lt;stateList&gt; wrapper around the &lt;state&gt; elements.

2. State.java

```java
package org.arpit.javapostsforlearning.jaxb;

import javax.xml.bind.annotation.XmlRootElement;

// Below statement means that class 'Country.java' is the root-element of our example
@XmlRootElement(namespace = "org.arpit.javapostsforlearning.jaxb.Country")
public class State {

    private String stateName;
    long statePopulation;

    public State() {
    }

    public State(String stateName, long statePopulation) {
        super();
        this.stateName = stateName;
        this.statePopulation = statePopulation;
    }

    public String getStateName() {
        return stateName;
    }

    public void setStateName(String stateName) {
        this.stateName = stateName;
    }

    public long getStatePopulation() {
        return statePopulation;
    }

    public void setStatePopulation(long statePopulation) {
        this.statePopulation = statePopulation;
    }
}
```

3. JAXBJavaToXml.java

```java
package org.arpit.javapostsforlearning.jaxb;

import java.io.File;
import java.util.ArrayList;

import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Marshaller;

public class JAXBJavaToXml {

    public static void main(String[] args) {

        // creating country object
        Country countryIndia = new Country();
        countryIndia.setCountryName("India");
        countryIndia.setCountryPopulation(5000000);

        // creating listOfStates
        ArrayList<State> stateList = new ArrayList<State>();
        State mpState = new State("Madhya Pradesh", 1000000);
        stateList.add(mpState);
        State maharastraState = new State("Maharastra", 2000000);
        stateList.add(maharastraState);

        countryIndia.setListOfStates(stateList);

        try {
            // create JAXB context and initializing Marshaller
            JAXBContext jaxbContext = JAXBContext.newInstance(Country.class);
            Marshaller jaxbMarshaller = jaxbContext.createMarshaller();

            // for getting nice formatted output
            jaxbMarshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);

            // specify the location and name of xml file to be created
            File XMLfile = new File("C:\\arpit\\CountryRecord.xml");

            // Writing to XML file
            jaxbMarshaller.marshal(countryIndia, XMLfile);
            // Writing to console
            jaxbMarshaller.marshal(countryIndia, System.out);

        } catch (JAXBException e) {
            // some exception occured
            e.printStackTrace();
        }
    }
}
```

After running the above program, you will get the following console output:

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<country xmlns:ns2="org.arpit.javapostsforlearning.jaxb.Country">
    <countryName>India</countryName>
    <countryPopulation>5000000.0</countryPopulation>
    <stateList>
        <state>
            <stateName>Madhya Pradesh</stateName>
            <statePopulation>1000000</statePopulation>
        </state>
        <state>
            <stateName>Maharastra</stateName>
            <statePopulation>2000000</statePopulation>
        </state>
    </stateList>
</country>
```

Now we will read the above generated XML and retrieve the country object from it.
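The marshaller is not tied to files: any java.io.Writer works as a target. Here is a hedged sketch of marshalling to a StringWriter instead, assuming the javax.xml.bind API is on the classpath (bundled up to Java 8; on newer JDKs it needs the jaxb-api dependency). The City class is invented for this example.

```java
import java.io.StringWriter;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlRootElement;

// A minimal annotated class, made up for this sketch.
@XmlRootElement
class City {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

public class MarshalToWriterDemo {

    // Marshals to a StringWriter instead of a File: any Writer works.
    static String toXml(City city) throws Exception {
        JAXBContext context = JAXBContext.newInstance(City.class);
        Marshaller marshaller = context.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        StringWriter writer = new StringWriter();
        marshaller.marshal(city, writer);
        return writer.toString();
    }

    public static void main(String[] args) throws Exception {
        City city = new City();
        city.setName("Bhopal");
        System.out.println(toXml(city));
    }
}
```

The same Marshaller instance could just as well write to the console (as above) or to a DOM node.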
4. JAXBXMLToJava.java

```java
package org.arpit.javapostsforlearning.jaxb;

import java.io.File;
import java.util.ArrayList;

import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Unmarshaller;

public class JAXBXMLToJava {

    public static void main(String[] args) {

        try {
            // create JAXB context and initializing Unmarshaller
            JAXBContext jaxbContext = JAXBContext.newInstance(Country.class);
            Unmarshaller jaxbUnmarshaller = jaxbContext.createUnmarshaller();

            // specify the location and name of xml file to be read
            File XMLfile = new File("C:\\arpit\\CountryRecord.xml");

            // this will create Java object - country from the XML file
            Country countryIndia = (Country) jaxbUnmarshaller.unmarshal(XMLfile);

            System.out.println("Country Name: " + countryIndia.getCountryName());
            System.out.println("Country Population: " + countryIndia.getCountryPopulation());

            ArrayList<State> listOfStates = countryIndia.getListOfStates();

            int i = 0;
            for (State state : listOfStates) {
                i++;
                System.out.println("State:" + i + " " + state.getStateName());
            }

        } catch (JAXBException e) {
            // some exception occured
            e.printStackTrace();
        }
    }
}
```

After running the above program, you will get the following console output:

```
Country Name: India
Country Population: 5000000.0
State:1 Madhya Pradesh
State:2 Maharastra
```

JAXB advantages:

- It is much simpler to use than a DOM or SAX parser.
- We can marshal to other data targets, such as an OutputStream, Writer, or DOM node.
- We can unmarshal from other data sources, such as an InputStream, URL, or DOM node.
- We don't need to be aware of XML parsing techniques.
- We don't need to always access the XML as a tree structure.

JAXB disadvantages:

- JAXB is a high-level API, so it gives less control over parsing than SAX or DOM.
- It has some overhead, so it is slower than SAX.

Source code: Download

Reference: JAXB tutorial – Getting Started from our JCG partner Arpit Mandliya at the Java frameworks and design patterns for beginners blog.

Hibernate inheritance: Table per class hierarchy

In this tutorial we will see how to implement inheritance in Hibernate. There are three ways in which you can implement inheritance in Hibernate; in this post we will see one of them: one table per class hierarchy.

Inheritance in Hibernate:

Java is an object-oriented language, and inheritance is one of its main features. The relational model can express 'is a' and 'has a' relationships, and Hibernate provides several different ways to map a class hierarchy onto tables.

One table per class hierarchy:

Let's say we have the following class hierarchy: a Shape base class, with Rectangle and Circle inheriting from it. With the one-table-per-class-hierarchy strategy, a single table will be created for the whole hierarchy, i.e. one SHAPE table holding the attributes of the subclasses as well.

As per our class hierarchy, we will create three classes: Shape.java, Rectangle.java and Circle.java.

1. Shape.java

This is the root class of our entity class hierarchy. Create Shape.java in src->org.arpit.javapostsforlearning.
```java
package org.arpit.javapostsforlearning;

import javax.persistence.Column;
import javax.persistence.DiscriminatorColumn;
import javax.persistence.DiscriminatorType;
import javax.persistence.DiscriminatorValue;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Inheritance;
import javax.persistence.InheritanceType;
import javax.persistence.Table;

@Entity
@Table(name = "SHAPE")
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(
    name = "Discriminator",
    discriminatorType = DiscriminatorType.STRING
)
@DiscriminatorValue(value = "S")
public class Shape {

    @Id
    @GeneratedValue
    @Column(name = "Shape_Id")
    int shapeId;

    @Column(name = "Shape_Name")
    String shapeName;

    public Shape() {
    }

    public Shape(String shapeName) {
        this.shapeName = shapeName;
    }

    // getters and setters
}
```

Shape is our root class, so some annotations need to be used on it to implement inheritance.

@Inheritance: For implementing inheritance in Hibernate, the @Inheritance annotation is used. It defines the inheritance strategy to be implemented for the entity class hierarchy. For one table per class hierarchy, we have used SINGLE_TABLE as the inheritance strategy. This annotation is defined at the root level, or at the sub-hierarchy level where a different strategy is to be applied.

@DiscriminatorColumn: This annotation is used to define the discriminator column for the SINGLE_TABLE and JOINED strategies. It is used to distinguish between different class instances. This annotation is defined at the root level, or at the sub-hierarchy level where a different strategy is to be applied. If the @DiscriminatorColumn annotation is not specified, Hibernate will create a column named 'DType', and the discriminator type will be string.
@DiscriminatorValue: This annotation defines the value stored in the discriminator column for that class. It can only be applied to a concrete entity class. For example, when a Shape instance is stored in the SHAPE table, 'S' will be the value in the discriminator column for that row. If this annotation is not specified and a discriminator column is used, provider-specific values will be used; if the discriminator type is STRING, the discriminator value defaults to the entity name. If you do not rely on the defaults, the discriminator value needs to be specified on each entity in the hierarchy.

2. Rectangle.java
This is our first child class. Create Rectangle.java in src->org.arpit.javapostsforlearning.

package org.arpit.javapostsforlearning;

import javax.persistence.Column;
import javax.persistence.DiscriminatorValue;
import javax.persistence.Entity;

@Entity
@DiscriminatorValue(value="R")
public class Rectangle extends Shape {

    @Column(name="Rectangle_Length")
    int length;

    @Column(name="Rectangle_Breadth")
    int breadth;

    public Rectangle() {
    }

    public Rectangle(String shapeName, int length, int breadth) {
        super(shapeName);
        this.length = length;
        this.breadth = breadth;
    }

    // getters and setters
}

3. Circle.java
This is our second child class. Create Circle.java in src->org.arpit.javapostsforlearning.

package org.arpit.javapostsforlearning;

import javax.persistence.Column;
import javax.persistence.DiscriminatorValue;
import javax.persistence.Entity;

@Entity
@DiscriminatorValue(value="C")
public class Circle extends Shape {

    @Column(name="Circle_Radius")
    int radius;

    public Circle() {
    }

    public Circle(String shapeName, int radius) {
        super(shapeName);
        this.radius = radius;
    }

    // getters and setters
}

4. hibernate.cfg.xml:
Create a file named 'hibernate.cfg.xml' in the src folder.
<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE hibernate-configuration PUBLIC
        '-//Hibernate/Hibernate Configuration DTD 3.0//EN'
        'http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd'>
<hibernate-configuration>
    <session-factory>
        <!-- Database connection settings -->
        <property name='connection.driver_class'>com.microsoft.sqlserver.jdbc.SQLServerDriver</property>
        <property name='connection.url'>jdbc:sqlserver://localhost:1433;database=UserInfo</property>
        <property name='connection.username'>sa</property>
        <property name='connection.password'></property>
        <!-- JDBC connection pool (use the built-in) -->
        <property name='connection.pool_size'>1</property>
        <!-- SQL dialect -->
        <property name='dialect'>org.hibernate.dialect.SQLServer2005Dialect</property>
        <!-- Enable Hibernate's automatic session context management -->
        <property name='current_session_context_class'>thread</property>
        <!-- Disable the second-level cache -->
        <property name='cache.provider_class'>org.hibernate.cache.NoCacheProvider</property>
        <!-- Echo all executed SQL to stdout -->
        <property name='show_sql'>true</property>
        <!-- Drop and re-create the database schema on startup -->
        <property name='hbm2ddl.auto'>create</property>
        <mapping class='org.arpit.javapostsforlearning.Shape'></mapping>
        <mapping class='org.arpit.javapostsforlearning.Rectangle'></mapping>
        <mapping class='org.arpit.javapostsforlearning.Circle'></mapping>
    </session-factory>
</hibernate-configuration>

5. Main class:

package org.arpit.javapostsforlearning;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;
import org.hibernate.service.ServiceRegistry;
import org.hibernate.service.ServiceRegistryBuilder;

public class HibernateMain {

    public static void main(String[] args) {
        Shape shape = new Shape("Square");
        Rectangle rectangle = new Rectangle("Rectangle", 10, 20);
        Circle circle = new Circle("Circle", 4);

        Configuration configuration = new Configuration();
        configuration.configure();
        ServiceRegistry sr = new ServiceRegistryBuilder()
                .applySettings(configuration.getProperties())
                .buildServiceRegistry();
        SessionFactory sf = configuration.buildSessionFactory(sr);
        Session ss = sf.openSession();

        ss.beginTransaction();
        ss.save(shape);
        ss.save(rectangle);
        ss.save(circle);
        ss.getTransaction().commit();
        ss.close();
    }
}

6. Run it:
When you run it, you will get the following output.

Hibernate: create table SHAPE (Discriminator varchar(31) not null, Shape_Id int identity not null, Shape_Name varchar(255), Rectangle_Breadth int, Rectangle_Length int, Circle_Radius int, primary key (Shape_Id))
Feb 04, 2013 11:01:36 PM org.hibernate.tool.hbm2ddl.SchemaExport execute
INFO: HHH000230: Schema export complete
Hibernate: insert into SHAPE (Shape_Name, Discriminator) values (?, 'S')
Hibernate: insert into SHAPE (Shape_Name, Rectangle_Breadth, Rectangle_Length, Discriminator) values (?, ?, ?, 'R')
Hibernate: insert into SHAPE (Shape_Name, Circle_Radius, Discriminator) values (?, ?, 'C')

7. SQL output:
SHAPE table in database.

Reference: Hibernate inheritance: Table per class hierarchy from our JCG partner Arpit Mandliya at the Java frameworks and design patterns for beginners blog. ...

Easy Mocking of Your Database

Test-driven development is something wonderful! Once you’ve established it in your organisation, you will start to:

- Greatly improve your quality (things break less often)
- Greatly improve your processes (things can be changed more easily)
- Greatly improve your developer atmosphere (things are more fun to do)

The important part of doing test-driven development right is finding a good ratio of what kind of code is to be covered…

- by automated unit tests
- by automated integration tests
- by manual “smoke tests”
- by manual “acceptance tests”
- not at all

Finding that ratio can be grounds for heated, religious discussions. I will soon blog about my own opinion on that subject. In this post, however, we will focus on the first kind of test: unit tests.

Unit testing your data access

When databases are involved, people will probably quickly jump to writing integration tests, because all they have to do is create a little Derby, H2 or HSQLDB (or other) test database, and run a couple of data-setup queries prior to the actual test. Their code module will then hopefully not notice the difference to a productive environment, and the whole system can be tested as a black box. The advantage of this is that your tests can be written in a way to verify your business requirements, your user stories, or whatever you call them. So far, the theory. When these database integration tests pile up, it starts to become increasingly difficult to shield them from one another. Avoiding inter-dependencies and, at the same time, avoiding costly database setups is hard. You won’t be able to run the whole test suite immediately after building / committing. You need nightly builds, weekly builds. But unit testing the data access layer isn’t that much easier! Because JDBC is an awful API to mock. There are so many different ways of configuring and executing queries through this highly stateful API that your unit tests quickly become unmanageable. There are a few libraries that help you with database testing.
Just to name a few:

- MockRunner: This one has some JDBC-specific extensions that allow for simulating JDBC ResultSets, as well as for checking whether actual queries are executed
- jMock: An “ordinary” Java mocking library
- mockito: An “ordinary” Java mocking library
- DBUnit: This one doesn’t mock your database; it’s good for testing your database. Another use-case, but still worth mentioning here

Some of the above libraries will not get you around the fact that JDBC is an awkward API to mock, specifically if you need to support several (incompatible!) versions of JDBC at the same time. Some examples can be seen here:

- http://stackoverflow.com/questions/10128185/using-jmock-to-write-unit-test-for-a-simple-spring-jdbc-dao
- http://www.thedwick.com/2010/01/resultset-mocking-with-jmock
- http://www.turnleafdesign.com/mocking-jdbc-connections-with-mockrunner

Mocking the database with jOOQ

When you’re using jOOQ in your application, mocking your database just became really easy in jOOQ 3.0. jOOQ now also ships with a mock JDBC Connection. Unlike with other frameworks, however, you only have to implement a single functional interface with jOOQ, and provide that implementation to your MockConnection: the MockDataProvider. Here’s a simple implementation example:

MockDataProvider provider = new MockDataProvider() {

    // Your contract is to return execution results, given a context
    // object, which contains SQL statement(s), bind values, and some
    // other context values
    @Override
    public MockResult[] execute(MockExecuteContext context)
    throws SQLException {

        // Use ordinary jOOQ API to create an org.jooq.Result object.
        // You can also use ordinary jOOQ API to load CSV files or
        // other formats, here!
        Result<MyTableRecord> result = executor.newResult(MY_TABLE);
        result.add(executor.newRecord(MY_TABLE));

        // Now, return 1-many results, depending on whether this is
        // a batch/multi-result context
        return new MockResult[] {
            new MockResult(1, result)
        };
    }
};

// Put your provider into a MockConnection and use that connection
// in your application. In this case, with a jOOQ Executor:
Connection connection = new MockConnection(provider);
Executor create = new Executor(connection, dialect);

// Done! Just use regular jOOQ API. It will return the values
// that you've specified in your MockDataProvider
assertEquals(1, create.selectOne().fetch().size());

The above implementation acts as a callback for JDBC’s various executeXXX() methods. Through a very simple MockExecuteContext API, you can thus:

- Get access to the executed SQL and bind values (use general jOOQ API to inline bind values into the SQL statement)
- Distinguish between regular SQL statements and both single-statement/multi-bind-value and multi-statement/no-bind-value batch executions
- Return one or several results using jOOQ’s org.jooq.Result objects (which you can easily import from CSV, XML, JSON, TEXT formats)
- Return “generated keys” results through the same API
- Let jOOQ’s MockStatement take care of the serialisation of your mock data through the JDBC API

There is also an experimental implementation of a MockFileDatabase, a text-based mock database that uses the following format:

# This is a sample test database for MockFileDatabase
# Its syntax is inspired by H2's test script files

# When this query is executed...
select 'A' from dual;
# ... then, return the following result
> A
> -
> A
@ rows: 1

# Just list all possible query / result combinations
select 'A', 'B' from dual;
> A B
> - -
> A B
@ rows: 1

select "TABLE1"."ID1", "TABLE1"."NAME1" from "TABLE1";
> ID1 NAME1
> --- -----
> 1   X
> 2   Y
@ rows: 2

MockFileDatabase implements MockDataProvider, so it’s dead-simple to provide your unit tests with sample data.
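To illustrate, a unit test might wire such a MockFileDatabase into a MockConnection roughly as follows. This is only a sketch: the file path is hypothetical, and the exact constructor signatures and dialect may differ depending on your jOOQ 3.x version.

```java
import java.io.File;
import java.sql.Connection;

import org.jooq.Result;
import org.jooq.SQLDialect;
import org.jooq.impl.Executor;
import org.jooq.tools.jdbc.MockConnection;
import org.jooq.tools.jdbc.MockDataProvider;
import org.jooq.tools.jdbc.MockFileDatabase;

public class MockFileDatabaseExample {

    public static void main(String[] args) throws Exception {
        // Load the text-based mock database from a file
        // (the path here is made up for the example)
        MockDataProvider provider =
            new MockFileDatabase(new File("src/test/resources/test-data.txt"));

        // MockFileDatabase implements MockDataProvider, so it plugs into
        // MockConnection just like a hand-written provider
        Connection connection = new MockConnection(provider);
        Executor create = new Executor(connection, SQLDialect.H2);

        // Queries that match an entry in test-data.txt return the
        // recorded result from the file
        Result<?> result = create.fetch("select 'A' from dual");
        System.out.println(result);
    }
}
```

With the sample file contents shown above, the `select 'A' from dual` query would come back with a single row containing 'A', without ever touching a real database.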
Future versions of jOOQ will allow for:

- Regex pattern-matching SQL statements to provide mock results
- Loading these results from other formats, such as jOOQ’s supported export formats
- Specifying the behaviour of batch statements, multi-result statements, etc.

Using jOOQ’s MockConnection in other contexts

Things don’t stop here. As jOOQ’s MockConnection is the entry point for this mocking sub-API of jOOQ, you can also use it in other environments, such as when running JPA queries, Hibernate queries, iBatis or just your plain old legacy JDBC queries. jOOQ has just become your preferred JDBC mock framework!

Reference: Easy Mocking of Your Database from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog. ...

The Stigma of Tech Certifications (and their real value)

Every so often I will receive a résumé from a software engineer that includes a list of technical certifications. These days most candidates tend to have none listed, but over the years I’ve seen some include anywhere from one or two certs up to ten or more, and it seems the number of companies willing to certify tech professionals has continued to grow. Vendors like IBM and Oracle each offer over 100 certifications, while Brainbench lists almost 30 tests on Java topics alone. At prices ranging from the $50 neighborhood up to $200 and more, the technology certification industry seems quite lucrative for the testing companies. But what is it all about for engineers? What (if any) value do certifications have for your marketability, and could having a certification potentially result in the opposite of the intended effect and actually hurt your chances of being hired?

When do certifications help?

There are some situations where certifications are absolutely helpful, as is the case for job seekers in certain industries that generally require a specific cert. A certification that was achieved through some relatively intense training (and not just a single online test) will also usually have value, much like a four-year degree tends to be valued above most training programs. If a technology is very new and skill with it is incredibly rare, a certification is one way to demonstrate at least some level of qualification that others probably will not have.

When and why can certifications actually hurt?

Professionals that have very little industry experience but possess multiple certifications usually will get a double take from hiring managers and recruiters. These junior candidates are perceived as trying to substitute certifications for the intimate knowledge that is gained through using a technology regularly, and more senior level talent will note that the ability to pass a test does not always indicate the ability to code.
Many of these job seekers would be much better off spending their time developing a portfolio of code to show prospective employers. Experienced candidates with multiple certifications may have some stigma attached to them due to their decision both to pursue them and then to subsequently list them. Some recruiters or managers may feel that these professionals are trying to compensate for having little depth in a technology or a lack of real-world accomplishments, and that the candidate wrongly assumes that a cert shows otherwise. Some that evaluate talent might get the impression that the candidate obtains certs in order to feel validated by (or even superior to) their peers, and that the cert is more driven by ego than by a desire to learn. Lastly, there will be some who feel that over-certified technologists are ‘suckers’ that should have spent their (or the company’s) money and time more wisely.

The greatest value of certifications

Having spoken to hundreds of programmers certified in any number of technologies, I found that the majority claimed to find more value in the process of studying and test preparation than in the accomplishment of passing the test and getting certified. Pursuing a certification is one way to learn a new skill or to get back to the basics of a skill you already have. Certification tests are a great form of motivation to those that take them, due to the fact that there is:

- a time deadline – If you decide you want to learn a technology in your spare time, you probably don’t have any particular date in mind for learning milestones. Certs are often scheduled for a specific date, which motivates the test taker to study right away.
- a time cost – Preparing for a test like this comes at the expense of other things in your life, so most that pursue certs understand the time investment required.
- a monetary cost – Shelling out $50 to $200 of your own money is an additional motivator. It’s not that much for most in the industry, but it is a lot to pay to fail a test.
- a risk of failure – If you are studying with others for a test, pride will also be motivating.

As the pursuit of certification seems to provide the greatest value, keep this simple fact in mind: just because you get a certification doesn’t mean you have to list it on your résumé.

Reference: The Stigma of Tech Certifications (and their real value) from our JCG partner Dave Fecak at the Job Tips For Geeks blog. ...

A Bug is a Terrible Thing to Waste

Some development teams, especially Agile teams, don’t bother tracking bugs. Instead of using a bug tracking system, when testers find a bug, they talk to the developer and get it fixed, or they write a failing test that needs to be fixed and add it to the Continuous Integration test suite, or if they have to, they write up a bug story on a card and post it on the wall so the team knows about it and somebody will commit to fixing it. Other teams live by their bug tracking systems, using tools like Jira or Bugzilla or FogBugz to record bugs as well as changes and other work. There are arguments to be made for both of these approaches.

Arguments for tracking bugs – and for not tracking bugs

In Agile Testing: A Practical Guide for Testers and Agile Teams, Lisa Crispin and Janet Gregory examine the pros and cons of using a defect tracking system. Using a system to track bugs can be the only effective way to manage problems for teams who can’t meet face-to-face – for example, distributed teams spread across different time zones. It can also be useful for teams who have inherited open bugs in a legacy system, and a necessary evil for teams who are forced to track bugs for compliance reasons. The information in a bug database is a potential knowledge base for testers and developers joining the team – they can review bugs that were found before in the area of the code that they are working on to understand what problems they should look out for. And bug data can be used to collect metrics and create trends on bugs – if you think that bug metrics are useful. But the Lean/Agile view is that using a defect tracking system mostly gets in the way and slows people down. The team should stay focused on finding bugs, fixing them, and then forgetting about them. Bugs are waste, and everything about them is waste – dead information, and dead time that is better spent delivering value.
Worse, using a defect tracking system prevents testers and developers from talking with each other, and encourages testers to take a “Quality Police” mindset. Without a tool, people have to talk to each other, and have to learn to play nicely together. This is a short-term, tactical point of view, focused on what is needed to get the software out the door and working. It’s project-thinking, not product-thinking.

Bugs over the Long Term

But if you’re working on a system over a long time like we are, if you’re managing a product or running a service, you know that it’s not that simple. You can’t just look at what’s in front of you, and where you want to be in a year or more. You also have to look back, at the work that was done before, at problems that happened before, at decisions that were made before, to understand why you are where you are today and where you may be heading in the future. Because some problems never go away. And other problems will come back unless you do something to stop them. And you’ll find out that other problems which you thought you had licked never really went away. The information from old bugs – what happened and what somebody did to fix them (or why they couldn’t fix them), which workarounds worked (and which didn’t) – can help you understand and deal with the problems that you are seeing today, and help you to keep improving the system and how you build it and keep it running. Because you should understand the history of changes and fixes to the code if you’re going to change it. If you like the way the code is today, you might want to know how and why it got this way. If you don’t like it, you’ll want to know how and why it got this way – it’s arrogant to assume that you won’t make the same mistakes or be forced into the same kinds of situations. Revision control will tell you what was changed and when and who did it; the bug tracking system will tell you why. Because you need to know where you have instability and risk in the system.
You need to identify defect-dense code or error-prone code – code that contains too many bugs, code that is costing you too much to maintain and causing too many problems, code that is too expensive to keep running the way it is today. Code that you should rewrite ASAP to improve stability and reduce your ongoing costs. But you can’t identify this code without knowing the history of problems in the system. Because you may need to prove to auditors or regulators and customers and investors that you are doing a responsible job of testing and finding bugs and fixing them and getting the fixes out. And because you want to know how effective the team is in finding, fixing and preventing bugs. Are you seeing fewer bugs today? Or more bugs? Are you seeing the same kinds of bugs – are you making the same mistakes? Or different mistakes?

Do you need to track every Bug?

As long as bugs are found early enough, there’s little value in tracking them. It’s when bugs escape that they need to be tracked: bugs that the developer didn’t find right away on their own, or in pairing, or through the standard automated checks and tests that are run in Continuous Integration. We don’t log:

- defects found in unit tests or other automated tests – unless for some reason the problem can’t or won’t be fixed right away;
- problems found in peer reviews – unless something in the review is considered significant and can’t get addressed immediately. Or a problem is found in a late review, after testing has already started, and the code will need to be retested. Or the reviewer finds something wrong in code that wasn’t changed, an old bug – it’s still a problem that needs to be looked at, but we may not be prepared to deal with it right now. All problems found in external reviews, like a security review or an audit, are logged;
- static analysis findings – most of the problems caught by these tools are simple coding mistakes that can be seen and fixed right away, and there’s also usually a fair amount of noise (false positives) that has to be filtered out. We run static analysis checks and review them daily, and only log findings if we agree that the finding is real but the developer isn’t prepared to fix it immediately (which almost never happens, unless we’re running a new tool against an existing code base for the first time). Many static analysis tools have their own systems for tracking static analysis findings anyway, so we can always go back and review outstanding issues later;
- bugs found when developers and testers decide to pair together to test changes early in development, when they are mostly exploring how something should work – we don’t usually log these bugs unless they can’t be / won’t be fixed (can’t be reproduced later, for example).

A Bug is a Terrible thing to Waste

We log all other bugs regardless of whether they are found in production, in internal testing, partner testing, User Acceptance Testing, or external testing (such as a pen test). Because most of the time, when software is handed to a tester, it’s supposed to be working. If the tester finds bugs, especially serious ones, then this is important information to the tester, to the developer, and to the rest of the team. It can highlight risks. It can show where more testing and reviews need to be done. It can highlight deeper problems in the design, a lack of understanding that could cause other problems. If you believe that testing provides important information not just about the state of your software, but also on how you are designing and building it – then everyone needs to be able to see this information, and understand it over time.
Some problems can’t be seen or fully understood right away, or in 1-week or 2-week sprint-sized chunks. It can take a while before you recognize that you have a serious weakness in the design or that something is broken in your approach to development or in your culture. You’ll need to experience a few problems before you start to find relationships between them and before you can look for their root cause. You’ll need data from the past in order to solve problems in the future. Tracking bugs isn’t a waste if you learn from bugs. Throwing the information on bugs away is the real waste.   Reference: A Bug is a Terrible Thing to Waste from our JCG partner Jim Bird at the Building Real Software blog. ...

Bumping Into Manager Rules

You might have met a manager on a bad manager day. Equally frustrating is when you work for a manager who has rules about problem solving. I once worked for a manager who proudly said to me, “Don’t bring me a problem without bringing me a solution.” I blinked once and said, “Why would I bring you a problem I could solve?” He stopped, and said, “Ooh.” Some of you will recognize that as the programmer’s refrain. “Ooh” is what you say when you realize the computer has done something you told it to do, but not what you meant it to do. “Don’t bring me a problem without bringing me a solution” is an example of management incongruence. Not because the manager means to be incongruent, but because the manager might not know better. My manager wanted to challenge me. Believe me, I was challenged! I wasn’t being lazy. I wasn’t being stupid. I was stuck. I needed help. I didn’t know where to go for help. Even in agile teams, the manager might be the right person to go to. The manager might not be. The manager might not have the answer. But the manager might be the right person to free the impediment, to know who has the answer, or to help with problem-solving. This is why, when managers have rules about problem solving, they make life difficult for everyone else. Managers don’t have to be perfect. They have to work hard at staying congruent, which is different than being perfect. Much different than being perfect.

This is a picture of what I mean by congruence. When the manager takes him or herself, the other person, and the context into account, the manager is congruent. When the manager stops taking the other person into account, the manager blames the other person. When you bump into manager rules such as “Don’t bring me a problem without a solution,” your manager is blaming you for not having a solution. When the manager stops taking him/herself into account, the manager placates.
Managers who say “Yes” to all work, never say “No,” and don’t manage the project portfolio placate the rest of the organization. Managers who ignore both themselves and the other person are super-reasonable. Remember Ever Have a Bad Manager Day? I was being super-reasonable, ignoring me, the other person, and the fact that we were human. Hah! That didn’t last long. There are other incongruent stances, but those are the big three. Does this mean managers can’t be human? Oh, no, they sure can be, and are! And they need to watch out for these rules that make them less effective. Incongruent stances do not help managers manage. Incongruent stances and rules make it more difficult for managers to do a great job. If you would like to read more about bumping into manager rules, take a look at my next myth, Management Myth 14: I Must Always Have a Solution to the Problem. Let me know if you like my suggestions.

Reference: Bumping Into Manager Rules from our JCG partner Johanna Rothman at the Managing Product Development blog. ...

How expensive is a method call in Java

We have all been there. Looking at the poorly designed code while listening to the author’s explanations about how one should never sacrifice performance over design. And you just cannot convince the author to get rid of his 500-line methods, because chaining method calls would destroy the performance. Well, it might have been true in 1996 or so. But since then, the JVM has evolved into an amazing piece of software. One way to find out about it is to start looking more deeply into the optimizations carried out by the virtual machine. The arsenal of techniques applied by the JVM is quite extensive, but let’s look into one of them in more detail: namely, method inlining. It is easiest to explain via the following sample:

int sum(int a, int b, int c, int d) {
    return sum(sum(a, b), sum(c, d));
}

int sum(int a, int b) {
    return a + b;
}

When this code is run, the JVM will figure out that it can replace it with more effective, so-called “inlined” code:

int sum(int a, int b, int c, int d) {
    return a + b + c + d;
}

You have to pay attention that this optimization is done by the virtual machine and not by the compiler. It is not obvious at first why this decision was made. After all – if you look at the sample code above – why postpone the optimization when compilation could produce more efficient bytecode right away? But considering other, not-so-obvious cases as well, the JVM is the best place to carry out the optimization:

- The JVM is equipped with runtime data besides static analysis. During runtime, the JVM can make better decisions based on which methods are executed most often, which loads are redundant, when it is safe to use copy propagation, etc.
- The JVM has information about the underlying architecture – number of cores, heap size and configuration – and can thus make the best selection based on this information.

But let us see those assumptions in practice.
I have created a small test application which uses several different ways to add together 1024 integers.

- A relatively reasonable one, where the implementation just iterates over the array containing 1024 integers and sums the result together. This implementation is available in InlineSummarizer.java.
- A recursion-based divide-and-conquer approach. I take the original 1024-element array and recursively divide it into halves – the first recursion depth thus gives me two 512-element arrays, the second depth gives four 256-element arrays, and so forth. In order to sum together all 1024 elements, I introduce 1023 additional method invocations. This implementation is attached as RecursiveSummarizer.java.
- A naive divide-and-conquer approach. This one also divides the original 1024-element array, but via calling additional instance methods on the separated halves – namely I nest sum512(), sum256(), sum128(), …, sum2() calls until I have summarized all the elements. As with recursion, I introduce 1023 additional method invocations in the source code.

And I have a test class to run all those samples. The first results are from unoptimized code. As seen from the above, the inlined code is the fastest, and the ones where we have introduced 1023 additional method invocations are slower by ~25,000 ns. But this image has to be interpreted with a caveat – it is a snapshot from the runs where JIT has not yet fully optimized the code. On my mid-2010 MB Pro, that took between 200 and 3,000 runs, depending on the implementation. The more realistic results are below: I have run all the summarizer implementations more than 1,000,000 times and discarded the runs where JIT had not yet managed to perform its magic. We can see that even though the inlined code still performed best, the iterative approach also flew at a decent speed. But recursion is notably different – while the iterative approach closes in with just 20% overhead, RecursiveSummarizer takes 340% of the time the inlined code needs to complete.
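The original summarizer classes aren’t reproduced in the post, but a minimal sketch of the iterative and recursive variants (class and method names here are illustrative, not the originals) could look like this:

```java
import java.util.Arrays;

public class SummarizerSketch {

    // The inlining-friendly iterative approach: one loop, no extra calls
    static int sumIterative(int[] data) {
        int sum = 0;
        for (int value : data) {
            sum += value;
        }
        return sum;
    }

    // The recursive divide-and-conquer approach: halves the range until
    // at most two elements remain. For a 1024-element array this makes
    // 1023 method invocations, which the JIT cannot inline away.
    static int sumRecursive(int[] data, int from, int to) {
        if (to - from <= 2) {
            int sum = 0;
            for (int i = from; i < to; i++) {
                sum += data[i];
            }
            return sum;
        }
        int mid = (from + to) / 2;
        return sumRecursive(data, from, mid) + sumRecursive(data, mid, to);
    }

    public static void main(String[] args) {
        int[] data = new int[1024];
        Arrays.fill(data, 1);
        // Both approaches must, of course, agree on the result
        System.out.println(sumIterative(data));                  // 1024
        System.out.println(sumRecursive(data, 0, data.length));  // 1024
    }
}
```

Both variants compute the same sum; the difference measured above lies purely in how many method invocations the JIT has to deal with.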
Apparently this is something one should be aware of – when you use recursion, the JVM is helpless and cannot inline the method calls. So be aware of this limitation when using recursion. Recursion aside, method overheads are close to being non-existent: just a 205 ns difference between having 1023 additional method invocations in your source code and not having them. Remember, those are nanoseconds (10^-9 s) that we used for measurement. So thanks to JIT, we can safely neglect most of the overhead introduced by method invocations. The next time your coworker is hiding his lousy design decisions behind the statement that popping through a call stack is not efficient, have him go through a small JIT crash course first. And if you wish to be well-equipped to block his future absurd statements, subscribe to either our RSS or Twitter feed and we will be glad to provide you with future case studies. Full disclosure: the inspiration for the test case used in this article came from Tomasz Nurkiewicz’ blog post.

Reference: How expensive is a method call in Java from our JCG partner Nikita Salnikov-Tarnovski at the Plumbr Blog. ...

How To Remove Friction From Your Version Control Experience

Last week, I spent several days fixing a bug that only surfaced in a distributed environment. I felt pressure to fix it quickly, because our continuous integration build was red, and we treat that as a "stop the line" event. Then I came across a post from Tomasz Nurkiewicz who claims that breaking the build is not a crime. Tomasz argues that a better way to organize software development is to make sure that breaking changes don't affect your team mates. I agree.

Broken Builds Create Friction

Breaking changes from your co-workers are a form of friction, since they take away time and focus from your job. Tomasz's setup has less friction than ours. But I feel we can do better still. In a perfect Frictionless Development Environment (FDE), all friction is removed. So what would that look like with regard to version control?

With current version control systems, there is lots of friction. I complained about Perforce before because of that. Git is much better, but even then there are steps that have to be performed that take away focus from the real goal you're trying to achieve: solving the customer's problem using software. For instance, you still have to create a new topic branch to work on. And you have to merge it with the main development line. In a perfect world, we wouldn't have to do that.

Frictionless Version Control

So how would a Frictionless Development Environment do version control for us? Knowing when to create a branch is easy: all work happens on a topic branch, so every time you start to work on something, the FDE could create a new branch. The problem is knowing when to merge. But even this is not as hard as it seems. You're done with your current work item (user story or whatever you want to call it) when it's coded, all the tests pass, and the code is clean. So how would the FDE know when you're done thinking of new tests for the story?
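The "FDE creates the branch for you" half of this could be sketched as follows. Everything here is a hypothetical illustration (the story id, the naming scheme, and the class itself are my assumptions); only the `git checkout -b` command is standard:

```java
import java.util.List;

public class AutoBrancher {

    // Derive a topic-branch name from the work item, e.g.
    // "JCG-42" + "Remove friction" -> "story/JCG-42-remove-friction".
    static String branchName(String storyId, String summary) {
        String slug = summary.trim().toLowerCase().replaceAll("[^a-z0-9]+", "-");
        return "story/" + storyId + "-" + slug;
    }

    // Build the git command an FDE could run (via ProcessBuilder) the
    // moment you pick up a story; this sketch only builds it.
    static List<String> createBranchCommand(String storyId, String summary) {
        return List.of("git", "checkout", "-b", branchName(storyId, summary));
    }

    public static void main(String[] args) {
        System.out.println(String.join(" ", createBranchCommand("JCG-42", "Remove friction")));
    }
}
```

The merge side is harder, which is exactly the question the next section answers.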
Well, if you practice Behavior-Driven Development (BDD), you start out by defining the behavior of the story in automated tests. So the story is functionally complete when there is a BDD test for it, and all scenarios in that test pass.

Now we're left with figuring out when the code is clean. Most teams have a process for deciding this too. For instance, code is clean when static code analysis tools like PMD, CheckStyle, and FindBugs give no warnings. Some people will argue that we need a minimum amount of code coverage from our tests as well. Or that the code needs to be reviewed by a co-worker. Or that Fortify must not find security vulnerabilities. That's fine. The basic point is that we can formally define a pipeline of processes that we want to run automatically. At each stage of the pipeline, the work can be rejected. Only when all stages complete successfully are we done. And then the FDE can simply merge the branch with the main line, and delete it. Zero friction from version control.

What do you think? Would you like to lubricate your version control experience? Do you think an automated branching strategy as outlined above would work?   Reference: How To Remove Friction From Your Version Control Experience from our JCG partner Remon Sinnema at the Secure Software Development blog.
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.