
What's New Here?


Writing Groovy’s groovy.util.Node (XmlParser) Content as XML

Groovy's XmlParser makes it easy to parse an XML file, XML input stream, or XML string using one of its overloaded parse methods (or parseText in the case of the String). The XML content parsed with any of these methods is made available as a groovy.util.Node instance. This blog post describes how to make the return trip and write the content of the Node back to XML in alternative formats such as a File or a String. Groovy's MarkupBuilder provides a convenient approach for generating XML from Groovy code. For example, I demonstrated writing XML based on SQL query results in the post GroovySql and MarkupBuilder: SQL-to-XML. However, when one wishes to write/serialize XML from a Groovy Node, an easy and appropriate approach is to use XmlNodePrinter, as demonstrated in Updating XML with XmlParser. The next code listing, parseAndPrintXml.groovy, demonstrates use of XmlParser to parse XML from a provided file and use of XmlNodePrinter to write the Node parsed from that file to standard output as XML.

parseAndPrintXml.groovy : Writing XML to Standard Output

#!/usr/bin/env groovy
// parseAndPrintXml.groovy
//
// Uses Groovy's XmlParser to parse provided XML file and uses Groovy's
// XmlNodePrinter to print the contents of the Node parsed from the XML with
// XmlParser to standard output.

if (args.length < 1)
{
   println "USAGE: groovy parseAndPrintXml.groovy <XMLFile>"
   System.exit(-1)
}

XmlParser xmlParser = new XmlParser()
Node xml = xmlParser.parse(new File(args[0]))
XmlNodePrinter nodePrinter = new XmlNodePrinter(preserveWhitespace:true)
nodePrinter.print(xml)

Putting aside the comments and the code for checking command-line arguments, there are really only four lines of significance to this discussion: the four statements at the end of the listing.
These four lines demonstrate instantiating an XmlParser, using that instance to parse a File constructed from the provided file name, instantiating an XmlNodePrinter, and using that XmlNodePrinter instance to print the parsed XML to standard output. Although writing XML to standard output can be useful for a user to review or to redirect output to another script or tool, there are times when it is more useful to have access to the parsed XML as a String. The next code listing is just a bit more involved than the last one and demonstrates use of XmlNodePrinter to write the parsed XML contained in a Node instance as a Java String.

parseXmlToString.groovy : Writing XML to Java String

#!/usr/bin/env groovy
// parseXmlToString.groovy
//
// Uses Groovy's XmlParser to parse provided XML file and uses Groovy's
// XmlNodePrinter to write the contents of the Node parsed from the XML with
// XmlParser into a Java String.

if (args.length < 1)
{
   println "USAGE: groovy parseXmlToString.groovy <XMLFile>"
   System.exit(-1)
}

XmlParser xmlParser = new XmlParser()
Node xml = xmlParser.parse(new File(args[0]))
StringWriter stringWriter = new StringWriter()
XmlNodePrinter nodePrinter = new XmlNodePrinter(new PrintWriter(stringWriter))
nodePrinter.setPreserveWhitespace(true)
nodePrinter.print(xml)
String xmlString = stringWriter.toString()
println "XML as String:\n${xmlString}"

As the just-shown code listing demonstrates, one can instantiate an XmlNodePrinter that writes to a PrintWriter that was itself instantiated with a StringWriter. This StringWriter ultimately makes the XML available as a Java String. Writing XML from a groovy.util.Node to a File is very similar to writing it to a String, with a FileWriter used instead of a StringWriter. This is demonstrated in the next code listing.
parseAndSaveXml.groovy : Write XML to File

#!/usr/bin/env groovy
// parseAndSaveXml.groovy
//
// Uses Groovy's XmlParser to parse provided XML file and uses Groovy's
// XmlNodePrinter to write the contents of the Node parsed from the XML with
// XmlParser to file with provided name.

if (args.length < 2)
{
   println "USAGE: groovy parseAndSaveXml.groovy <sourceXMLFile> <targetXMLFile>"
   System.exit(-1)
}

XmlParser xmlParser = new XmlParser()
Node xml = xmlParser.parse(new File(args[0]))
FileWriter fileWriter = new FileWriter(args[1])
XmlNodePrinter nodePrinter = new XmlNodePrinter(new PrintWriter(fileWriter))
nodePrinter.setPreserveWhitespace(true)
nodePrinter.print(xml)

I don't show it in this post, but the value of being able to write a Node back out as XML often comes after modifying that Node instance. Updating XML with XmlParser demonstrates the type of functionality that can be performed on a Node before serializing the modified instance back out.

Reference: Writing Groovy's groovy.util.Node (XmlParser) Content as XML from our JCG partner Dustin Marx at the Inspired by Actual Events blog.

Byteman – a swiss army knife for byte code manipulation

I am working with a bunch of communities in JBoss and there is so much interesting stuff to talk about that I can't wrap my head around every little bit myself. This is the main reason why I am very thankful to have the opportunity to welcome guest bloggers here from time to time. Today it is Jochen Mader, who is part of the nerd herd at codecentric. He currently spends his professional time coding Vert.x-based middleware solutions, writing for different publications and talking at conferences. His free time belongs to his family, mtb and tabletop gaming. You can follow him on Twitter @codepitbull.

There are tools you normally don't want to use but are happy enough to know about when the need arises. At least to me, Byteman falls into this category. It's my personal Swiss Army knife for dealing with a Big Ball of Mud or one of those dreaded Heisenbugs. So grab a current Byteman distribution, unzip it to somewhere on your machine and we are off to some dirty work.

What is it?

Byteman is a byte code manipulation and injection tool kit. It allows us to intercept and replace arbitrary parts of Java code to make it behave differently or break it (on purpose):

- get all threads stuck in a certain place and let them continue at the same time (hello, race condition)
- throw Exceptions at unexpected locations
- trace through your code during execution
- change return values

and a lot more.

An example

Let's get right into some code to illustrate what Byteman can do for you. Here we have a wonderful Singleton and a (sadly) good example of code you might find in many places.

public class BrokenSingleton {

   private static volatile BrokenSingleton instance;

   private BrokenSingleton() {
   }

   public static BrokenSingleton get() {
      if (instance == null) {
         instance = new BrokenSingleton();
      }
      return instance;
   }
}

Let's pretend we are the poor souls tasked with debugging some legacy code showing weird behaviour in production.
After a while we discover this gem and our gut indicates something is wrong here. At first we might try something like this:

public class BrokenSingletonMain {

   public static void main(String[] args) throws Exception {
      Thread thread1 = new Thread(new SingletonAccessRunnable());
      Thread thread2 = new Thread(new SingletonAccessRunnable());
      thread1.start();
      thread2.start();
      thread1.join();
      thread2.join();
   }

   public static class SingletonAccessRunnable implements Runnable {
      @Override
      public void run() {
         System.out.println(BrokenSingleton.get());
      }
   }
}

Running this, there is a very small chance to see the actual problem happen. But most likely we won't see anything unusual. The Singleton is initialized once and the application performs as expected. A lot of times people start brute forcing by increasing the number of threads, hoping to make the problem show itself. But I prefer a more structured approach. Enter Byteman.

The DSL

Byteman provides a convenient DSL to modify and trace application behaviour. We'll start with tracing calls in my little example. Take a look at this piece of code.

RULE trace entering
CLASS de.codepitbull.byteman.BrokenSingleton
METHOD get
AT ENTRY
IF true
DO traceln("entered get-Method")
ENDRULE

RULE trace read stacks
CLASS de.codepitbull.byteman.BrokenSingleton
METHOD get
AFTER READ BrokenSingleton.instance
IF true
DO traceln("READ:\n" + formatStack())
ENDRULE

The core building block of Byteman scripts is the RULE. It consists of several components (example shamelessly ripped from the Byteman docs):

# rule skeleton
RULE <rule name>
CLASS <class name>
METHOD <method name>
BIND <bindings>
IF <condition>
DO <actions>
ENDRULE

Each RULE needs to have a unique rule name. The combination of CLASS and METHOD defines where we want our modifications to apply. BIND allows us to bind variables to names we can use inside IF and DO.
Using IF we can add conditions under which the rule fires. In DO the actual magic happens. ENDRULE ends the rule.

Knowing this, my first rule is easily translated to: when somebody calls de.codepitbull.byteman.BrokenSingleton.get(), I want to print the String "entered get-Method" right before the method body is executed (that's what AT ENTRY translates to). My second rule can be translated to: after reading (AFTER READ) the instance member of BrokenSingleton, I want to see the current call stack.

Grab the code and put it into a file called check.btm. Byteman provides a nice tool to verify your scripts. Use <BYTEMAN_HOME>/bin/bmcheck.sh -cp folder/containing/compiled/classes/to/test check.btm to see if your script compiles. Do this EVERY time you change it; it's very easy to get a detail wrong and spend a long time figuring it out. Now that the script is saved and tested, it's time to use it with our application.

The Agent

Scripts are applied to running code through an agent. Open the run configuration for the BrokenSingletonMain class and add -javaagent:<BYTEMAN_HOME>/lib/byteman.jar=script:check.btm to your JVM parameters. This will register the agent and tell it to run check.btm. And while we are at it, here are a few more options: if you ever need to manipulate some core Java stuff, use -javaagent:<BYTEMAN_HOME>/lib/byteman.jar=script:appmain.btm,boot:<BYTEMAN_HOME>/lib/byteman.jar. This will add Byteman to the boot classpath and allow us to manipulate classes like Thread, String ... I mean, if you ever wanted to do such nasty things ... It's also possible to attach the agent to a running process. Use jps to find the process id you want to attach to and run <BYTEMAN_HOME>/bin/bminstall.sh <pid> to install the agent. Afterwards run <BYTEMAN_HOME>/bin/bmsubmit.sh check.btm. Back to our problem at hand.
Running our application with the modified run configuration should result in output like this:

entered get-Method
entered get-Method
READ:
Stack trace for thread Thread-0
de.codepitbull.byteman.BrokenSingleton.get(BrokenSingleton.java:14)
de.codepitbull.byteman.BrokenSingletonMain$SingletonAccessRunnable.run(BrokenSingletonMain.java:20)
java.lang.Thread.run(Thread.java:745)
READ:
Stack trace for thread Thread-1
de.codepitbull.byteman.BrokenSingleton.get(BrokenSingleton.java:14)
de.codepitbull.byteman.BrokenSingletonMain$SingletonAccessRunnable.run(BrokenSingletonMain.java:20)
java.lang.Thread.run(Thread.java:745)

Congratulations, you just manipulated byte code. The output isn't very helpful yet, but that's something we are going to change.

Messing with threads

With our infrastructure now set up we can start digging deeper. We are quite sure our problem is related to some multithreading issue. To test our hypothesis we have to get multiple threads into our critical section at the same time. This is close to impossible using pure Java, at least without applying extensive modifications to the code we want to debug. Using Byteman, this is easily achieved.

RULE define rendezvous
CLASS de.codepitbull.byteman.BrokenSingleton
METHOD get
AT ENTRY
IF NOT isRendezvous("rendezvous", 2)
DO createRendezvous("rendezvous", 2, true);
   traceln("rendezvous created");
ENDRULE

This rule defines a so-called rendezvous. It allows us to specify a place where multiple threads have to arrive before they are allowed to proceed (also known as a barrier). And here is the translation of the rule: when calling BrokenSingleton.get(), create a new rendezvous that will allow progress when 2 threads arrive. Make the rendezvous reusable and create it only if it doesn't exist (the IF NOT part is critical, as otherwise we would create a barrier on each call to BrokenSingleton.get()). After defining this barrier we still need to explicitly use it.
RULE catch threads
CLASS de.codepitbull.byteman.BrokenSingleton
METHOD get
AFTER READ BrokenSingleton.instance
IF isRendezvous("rendezvous", 2)
DO rendezvous("rendezvous");
ENDRULE

Translation: after reading the instance member inside BrokenSingleton.get(), wait at the rendezvous until a second thread arrives, then continue together. We now stop both threads from BrokenSingletonMain in the same place, right after the instance null check. That's how to make a race condition reproducible. Both threads will continue thinking instance is null, causing the constructor to fire twice. I leave the solution to this problem to you ...

Unit tests

Something I discovered while writing this blog post is the possibility to run Byteman scripts as part of my unit tests. The JUnit and TestNG integration is easy to set up. Add the following dependency to your pom.xml (note that the BMUnitRunner class ships in the byteman-bmunit artifact, not byteman-submit):

<dependency>
    <groupId>org.jboss.byteman</groupId>
    <artifactId>byteman-bmunit</artifactId>
    <scope>test</scope>
    <version>${byteman.version}</version>
</dependency>

Now Byteman scripts can be executed inside your unit tests like this:

@RunWith(BMUnitRunner.class)
public class BrokenSingletonTest {

  @Test
  @BMScript("check.btm")
  public void testForRaceCondition() {
    ...
  }
}

Adding such tests to your suites increases the usefulness of Byteman quite a bit. There's no better way of preventing others from repeating your mistakes than making these scripts part of the build process.

Closing words

There is only so much room in a blog post and I also don't want to start rewriting their documentation. It was a fun thing writing this post, as I hadn't used Byteman for quite a while. I don't know how I managed to overlook the unit test integration. That will make me use it a lot more in the future.
And now I suggest browsing their documentation and starting to inject; there's a lot to play around with.

Reference: Byteman – a swiss army knife for byte code manipulation from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog.

Do It Either Way, We’ll Refactor It Later

It often happens that a new piece of functionality is discussed within a team and different developers have different preferences over how it should be implemented. "But what if in the future ..." is a typical argument, as well as "that way it's going to be more extensible". Well, usually it doesn't matter. One should rather focus on writing it well (readable, documented and tested) than on trying to predict future use cases. In most cases you can't anyway, so spending time and effort on discussions about which is the perfect way to implement something is fruitless. I'm not advocating a "quick and dirty" approach, but don't try to predict the future.

Sometimes there is more than one approach to an implementation and you just can't decide which one is better. Again, it doesn't matter. Just build whichever you think would take less time, but don't waste too much time in arguments. Why? Because of refactoring (if you are using a statically typed language, that is). Whenever a new use case arises, or you spot a problem with the initial design, you can simply change it. It's quite a regular scenario: something turns out to be imperfect, so you improve it. Of course, in order to feel free in doing refactoring, one should have a lot of tests. So having tests is more important than implementing it right the first time.

There is one exception, however: public APIs. You cannot change those, at least not easily. If you have a public API, make sure you design it in the best way, and especially consider whether any given method should be exposed or not. You cannot remove it afterwards. In other words, whenever a third party depends on your decisions, make sure you get it right the first time. It's still not completely possible, but it's worth the long discussions.

I generally refrain from long discussions about exact implementations and accept the opinions of others. Not because mine is wrong, but because it doesn't matter.
It's better to save the overhead of guessing what will be needed in one year and just implement it now. That, of course, has to be balanced against the overhead of refactoring; it's not exactly free.

Reference: Do It Either Way, We'll Refactor It Later from our JCG partner Bozhidar Bozhanov at Bozho's tech blog.

Java Concurrency Tutorial – Locking: Explicit locks

1. Introduction

In many cases, using implicit locking is enough. Other times, we will need more complex functionality. In such cases, the java.util.concurrent.locks package provides us with lock objects. When it comes to memory synchronization, the internal mechanism of these locks is the same as with implicit locks. The difference is that explicit locks offer additional features. The main advantages or improvements over implicit synchronization are:

- Separation of locks by read or write. Some locks allow concurrent access to a shared resource (ReadWriteLock).
- Different ways of acquiring a lock:
  - Blocking: lock()
  - Non-blocking: tryLock()
  - Interruptible: lockInterruptibly()

2. Classification of lock objects

Lock objects implement one of the following two interfaces:

- Lock: Defines the basic functionality that a lock object must implement. Basically, this means acquiring and releasing the lock. In contrast to implicit locks, this one allows the acquisition of a lock in a non-blocking or interruptible way (in addition to the blocking way). Main implementations:
  - ReentrantLock
  - ReadLock (used by ReentrantReadWriteLock)
  - WriteLock (used by ReentrantReadWriteLock)
- ReadWriteLock: It keeps a pair of locks, one for read-only operations and another one for writing. The read lock can be acquired simultaneously by different reader threads (as long as the resource isn't already acquired by a write lock), while the write lock is exclusive. In this way, we can have several threads reading the resource concurrently as long as there is no writing operation. Main implementation:
  - ReentrantReadWriteLock

The following class diagram shows the relation among the different lock classes:

3. ReentrantLock

This lock works the same way as the synchronized block: one thread acquires the lock as long as it is not already acquired by another thread, and it does not release it until unlock is invoked.
If the lock is already acquired by another thread, then the thread trying to acquire it becomes blocked until the other thread releases it. We are going to start with a simple example without locking, and then we will add a reentrant lock to see how it works.

public class NoLocking {
    public static void main(String[] args) {
        Worker worker = new Worker();
        Thread t1 = new Thread(worker, "Thread-1");
        Thread t2 = new Thread(worker, "Thread-2");
        t1.start();
        t2.start();
    }

    private static class Worker implements Runnable {
        @Override
        public void run() {
            System.out.println(Thread.currentThread().getName() + " - 1");
            System.out.println(Thread.currentThread().getName() + " - 2");
            System.out.println(Thread.currentThread().getName() + " - 3");
        }
    }
}

Since the code above is not synchronized, threads will be interleaved. Let's see the output:

Thread-2 - 1
Thread-1 - 1
Thread-1 - 2
Thread-1 - 3
Thread-2 - 2
Thread-2 - 3

Now, we will add a reentrant lock in order to serialize the access to the run method:

public class ReentrantLockExample {
    public static void main(String[] args) {
        Worker worker = new Worker();
        Thread t1 = new Thread(worker, "Thread-1");
        Thread t2 = new Thread(worker, "Thread-2");
        t1.start();
        t2.start();
    }

    private static class Worker implements Runnable {
        private final ReentrantLock lock = new ReentrantLock();

        @Override
        public void run() {
            lock.lock();
            try {
                System.out.println(Thread.currentThread().getName() + " - 1");
                System.out.println(Thread.currentThread().getName() + " - 2");
                System.out.println(Thread.currentThread().getName() + " - 3");
            } finally {
                lock.unlock();
            }
        }
    }
}

The above code will safely be executed without threads being interleaved. You may realize that we could have used a synchronized block and the effect would be the same. The question that arises now is: what advantages does the reentrant lock provide us?
The main advantages of using this type of lock are described below:

- Additional ways of acquiring the lock, provided by implementing the Lock interface:
  - lockInterruptibly: The current thread will try to acquire the lock and become blocked if another thread owns the lock, as with the lock() method. However, if another thread interrupts the current thread, the acquisition will be cancelled.
  - tryLock: It will try to acquire the lock and return immediately, regardless of the lock status. This will prevent the current thread from being blocked if the lock is already acquired by another thread. You can also set the time the current thread will wait before returning (we will see an example of this).
  - newCondition: Allows the thread which owns the lock to wait for a specified condition.
- Additional methods provided by the ReentrantLock class, primarily for monitoring or testing, for example the getHoldCount or isHeldByCurrentThread methods.

Let's look at an example using tryLock before moving on to the next lock class.

3.1 Trying lock acquisition

In the following example, we have two threads trying to acquire the same two locks. One thread acquires lock2 and then blocks trying to acquire lock1:

public void lockBlocking() {
    LOGGER.info("{}|Trying to acquire lock2...", Thread.currentThread().getName());
    lock2.lock();
    try {
        LOGGER.info("{}|Lock2 acquired. Trying to acquire lock1...", Thread.currentThread().getName());
        lock1.lock();
        LOGGER.info("{}|Both locks acquired", Thread.currentThread().getName());
    } finally {
        lock1.unlock();
        lock2.unlock();
    }
}

The other thread acquires lock1 and then tries to acquire lock2:

public void lockWithTry() {
    LOGGER.info("{}|Trying to acquire lock1...", Thread.currentThread().getName());
    lock1.lock();
    try {
        LOGGER.info("{}|Lock1 acquired. Trying to acquire lock2...", Thread.currentThread().getName());
        boolean acquired = lock2.tryLock(4, TimeUnit.SECONDS);
        if (acquired) {
            try {
                LOGGER.info("{}|Both locks acquired", Thread.currentThread().getName());
            } finally {
                lock2.unlock();
            }
        } else {
            LOGGER.info("{}|Failed acquiring lock2. Releasing lock1", Thread.currentThread().getName());
        }
    } catch (InterruptedException e) {
        // handle interrupted exception
    } finally {
        lock1.unlock();
    }
}

Using the standard lock method, this would cause a deadlock, since each thread would be waiting forever for the other to release the lock. However, this time we are trying to acquire it with tryLock, specifying a timeout. If it doesn't succeed after four seconds, it will cancel the action and release the first lock. This will allow the other thread to unblock and acquire both locks. Let's see the full example:

public class TryLock {
    private static final Logger LOGGER = LoggerFactory.getLogger(TryLock.class);
    private final ReentrantLock lock1 = new ReentrantLock();
    private final ReentrantLock lock2 = new ReentrantLock();

    public static void main(String[] args) {
        TryLock app = new TryLock();
        Thread t1 = new Thread(new Worker1(app), "Thread-1");
        Thread t2 = new Thread(new Worker2(app), "Thread-2");
        t1.start();
        t2.start();
    }

    public void lockWithTry() {
        LOGGER.info("{}|Trying to acquire lock1...", Thread.currentThread().getName());
        lock1.lock();
        try {
            LOGGER.info("{}|Lock1 acquired. Trying to acquire lock2...", Thread.currentThread().getName());
            boolean acquired = lock2.tryLock(4, TimeUnit.SECONDS);
            if (acquired) {
                try {
                    LOGGER.info("{}|Both locks acquired", Thread.currentThread().getName());
                } finally {
                    lock2.unlock();
                }
            } else {
                LOGGER.info("{}|Failed acquiring lock2. Releasing lock1", Thread.currentThread().getName());
            }
        } catch (InterruptedException e) {
            // handle interrupted exception
        } finally {
            lock1.unlock();
        }
    }

    public void lockBlocking() {
        LOGGER.info("{}|Trying to acquire lock2...", Thread.currentThread().getName());
        lock2.lock();
        try {
            LOGGER.info("{}|Lock2 acquired. Trying to acquire lock1...", Thread.currentThread().getName());
            lock1.lock();
            LOGGER.info("{}|Both locks acquired", Thread.currentThread().getName());
        } finally {
            lock1.unlock();
            lock2.unlock();
        }
    }

    private static class Worker1 implements Runnable {
        private final TryLock app;

        public Worker1(TryLock app) {
            this.app = app;
        }

        @Override
        public void run() {
            app.lockWithTry();
        }
    }

    private static class Worker2 implements Runnable {
        private final TryLock app;

        public Worker2(TryLock app) {
            this.app = app;
        }

        @Override
        public void run() {
            app.lockBlocking();
        }
    }
}

If we execute the code it will result in the following output:

13:06:38,654|Thread-2|Trying to acquire lock2...
13:06:38,654|Thread-1|Trying to acquire lock1...
13:06:38,655|Thread-2|Lock2 acquired. Trying to acquire lock1...
13:06:38,655|Thread-1|Lock1 acquired. Trying to acquire lock2...
13:06:42,658|Thread-1|Failed acquiring lock2. Releasing lock1
13:06:42,658|Thread-2|Both locks acquired

After the fourth line, each thread has acquired one lock and is blocked trying to acquire the other lock. At the next line, you can notice the four-second lapse. Since we reached the timeout, the first thread fails to acquire the lock and releases the one it had already acquired, allowing the second thread to continue.

4. ReentrantReadWriteLock

This type of lock keeps a pair of internal locks (a ReadLock and a WriteLock). As explained with the interface, this lock allows several threads to read from the resource concurrently. This is especially convenient when you have a resource with frequent reads but few writes. As long as there isn't a thread that needs to write, the resource will be concurrently accessed.
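Before the full demo, the acquire/release discipline for the two internal locks can be sketched in isolation. This is a minimal illustration of the pattern, not code from the tutorial; the Counter class and its fields are made up for the example:

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class Counter {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private int value;

    public int get() {
        lock.readLock().lock();   // shared: many readers may hold this at once
        try {
            return value;
        } finally {
            lock.readLock().unlock();
        }
    }

    public void increment() {
        lock.writeLock().lock();  // exclusive: blocks readers and other writers
        try {
            value++;
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        Counter c = new Counter();
        c.increment();
        c.increment();
        System.out.println(c.get()); // prints 2
    }
}
```

As with plain ReentrantLock, the unlock calls live in finally blocks so the lock is released even if the guarded code throws.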
The following example shows three threads concurrently reading from a shared resource. When a fourth thread needs to write, it will exclusively lock the resource, preventing the reading threads from accessing it while it is writing. Once the write finishes and the lock is released, all reader threads will continue to access the resource concurrently:

public class ReadWriteLockExample {
    private static final Logger LOGGER = LoggerFactory.getLogger(ReadWriteLockExample.class);
    final ReadWriteLock readWriteLock = new ReentrantReadWriteLock();
    private Data data = new Data("default value");

    public static void main(String[] args) {
        ReadWriteLockExample example = new ReadWriteLockExample();
        example.start();
    }

    private void start() {
        ExecutorService service = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 3; i++) service.execute(new ReadWorker());
        service.execute(new WriteWorker());
        service.shutdown();
    }

    class ReadWorker implements Runnable {
        @Override
        public void run() {
            for (int i = 0; i < 2; i++) {
                readWriteLock.readLock().lock();
                try {
                    LOGGER.info("{}|Read lock acquired", Thread.currentThread().getName());
                    Thread.sleep(3000);
                    LOGGER.info("{}|Reading data: {}", Thread.currentThread().getName(), data.getValue());
                } catch (InterruptedException e) {
                    // handle interrupted
                } finally {
                    readWriteLock.readLock().unlock();
                }
            }
        }
    }

    class WriteWorker implements Runnable {
        @Override
        public void run() {
            readWriteLock.writeLock().lock();
            try {
                LOGGER.info("{}|Write lock acquired", Thread.currentThread().getName());
                Thread.sleep(3000);
                data.setValue("changed value");
                LOGGER.info("{}|Writing data: changed value", Thread.currentThread().getName());
            } catch (InterruptedException e) {
                // handle interrupted
            } finally {
                readWriteLock.writeLock().unlock();
            }
        }
    }
}

The console output shows the result:

11:55:01,632|pool-1-thread-1|Read lock acquired
11:55:01,632|pool-1-thread-2|Read lock acquired
11:55:01,632|pool-1-thread-3|Read lock acquired
11:55:04,633|pool-1-thread-3|Reading data: default value
11:55:04,633|pool-1-thread-1|Reading data: default value
11:55:04,633|pool-1-thread-2|Reading data: default value
11:55:04,634|pool-1-thread-4|Write lock acquired
11:55:07,634|pool-1-thread-4|Writing data: changed value
11:55:07,634|pool-1-thread-3|Read lock acquired
11:55:07,635|pool-1-thread-1|Read lock acquired
11:55:07,635|pool-1-thread-2|Read lock acquired
11:55:10,636|pool-1-thread-3|Reading data: changed value
11:55:10,636|pool-1-thread-1|Reading data: changed value
11:55:10,636|pool-1-thread-2|Reading data: changed value

As you can see, while the writer thread (thread-4) holds the write lock, no other threads can access the resource.

5. Conclusion

This post showed the main implementations of explicit locks and explained some of their improved features with respect to implicit locking. This post is part of the Java Concurrency Tutorial series. Check here to read the rest of the tutorial. You can find the source code at GitHub.

Reference: Java Concurrency Tutorial – Locking: Explicit locks from our JCG partner Xavier Padro at the Xavier Padró's Blog.

Value-Based Classes

In Java 8 some classes got a small note in Javadoc stating they are value-based classes. This includes a link to a short explanation and some limitations about what not to do with them. This is easily overlooked, and if you do that, it will likely break your code in subtle ways in future Java releases. To prevent that I wanted to cover value-based classes in their own post, even though I already mentioned the most important bits in other articles.

Overview

This post will first look at why value-based classes exist and why their use is limited before detailing those limitations (if you're impatient, jump here). It will close with a note on FindBugs, which will soon be able to help you out.

Background

Let's have a quick look at why value-based classes were introduced and which exist in the JDK.

Why Do They Exist?

A future version of Java will most likely contain value types. I will write about them in the coming weeks (so stay tuned) and will present them in some detail. And while they definitely have benefits, these are not covered in the present post, which might make the limitations seem pointless. Believe me, they aren't! Or don't believe me and see for yourself. For now let's see what little I already wrote about value types:

The gross simplification of that idea is that the user can define a new kind of type, different from classes and interfaces. Their central characteristic is that they will not be handled by reference (like classes) but by value (like primitives). Or, as Brian Goetz puts it in his introductory article State of the Values: "Codes like a class, works like an int!"

It is important to add that value types will be immutable, as primitive types are today. In Java 8 value types are preceded by value-based classes. Their precise relation in the future is unclear, but it could be similar to that of boxed and unboxed primitives (e.g. Integer and int). The relationship of existing types with future value types became apparent when Optional was designed.
This was also when the limitations of value-based classes were specified and documented.

What Value-Based Classes Exist?

These are all the classes I found in the JDK to be marked as value-based:

- java.util: Optional, OptionalDouble, OptionalLong, OptionalInt
- java.time: Duration, Instant, LocalDate, LocalDateTime, LocalTime, MonthDay, OffsetDateTime, OffsetTime, Period, Year, YearMonth, ZonedDateTime, ZoneId, ZoneOffset
- java.time.chrono: HijrahDate, JapaneseDate, MinguoDate, ThaiBuddhistDate

I cannot guarantee that this list is complete, as I found no official source listing them all. In addition, there are non-JDK classes which should be considered value-based but do not say so. An example is Guava's Optional. It is also safe to assume that most code bases will contain classes which are meant to be value-based.

It is interesting to note that the existing boxing classes like Integer, Double and the like are not marked as being value-based. While it sounds desirable to do so (after all, they are the prototypes for this kind of class), this would break backwards compatibility, because it would retroactively invalidate all uses which contravene the new limitations.

"Optional is new, and the disclaimers arrived on day 1. Integer, on the other hand, is probably hopelessly polluted, and I am sure that it would break gobs of important code if Integer ceased to be lockable (despite what we may think of such a practice.)" Brian Goetz – Jan 6 2015 (formatting mine)

Still, they are very similar, so let's call them "value-ish".

Characteristics

At this point, it is unclear how value types will be implemented, what their exact properties will be and how they will interact with value-based classes. Hence the limitations imposed on the latter are not based on existing requirements but derived from some desired characteristics of value types. It is by no means clear whether these limitations suffice to establish a relationship with value types in the future.
That being said, let’s continue with the quote from above: In Java 8 value types are preceded by value-based classes. Their precise relation in the future is unclear, but it could be similar to that of boxed and unboxed primitives (e.g. Integer and int). Additionally, the compiler will likely be free to silently switch between the two to improve performance. Exactly that switching back and forth, i.e. removing and later recreating a reference, also rules out applying identity-based mechanisms to value-based classes. Implemented like this, the JVM is freed from tracking the identity of value-based instances, which can lead to substantial performance improvements and other benefits. Identity The term identity is important in this context, so let’s have a closer look. Consider a mutable object which constantly changes its state (like a list being modified). Even though the object always “looks” different, we would still say it’s the same object. So we distinguish between an object’s state and its identity. In Java, state equality is determined with equals (if appropriately implemented) and identity equality by comparing references. In other words, an object’s identity is defined by its reference. Now assume the JVM will treat value types and value-based classes as described above. In that case, neither will have a meaningful identity. Value types won’t have one to begin with, just like an int doesn’t. And the corresponding value-based classes are merely boxes for value types, which the JVM is free to destroy and recreate at will. So while there are of course references to individual boxes, there is no guarantee at all about how the boxes will exist. This means that even though a programmer might look at the code and follow an instance of a value-based class being passed here and there, the JVM might behave differently. It might remove the reference (thus destroying the object’s identity) and pass it on as a value type.
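The distinction between state and identity can be made concrete in a few lines of plain Java; the list contents below are arbitrary and only serve to illustrate the two kinds of equality:

```java
import java.util.ArrayList;
import java.util.List;

public class IdentityVsState {
    public static void main(String[] args) {
        List<String> a = new ArrayList<>(List.of("entry"));
        List<String> b = new ArrayList<>(List.of("entry"));

        System.out.println(a.equals(b)); // true  - equal state
        System.out.println(a == b);      // false - two distinct identities

        a.add("more"); // the state changes, but a's identity stays the same
    }
}
```

For value-based classes, it is exactly the second comparison (the reference check) that stops being meaningful.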
In the case of an identity-sensitive operation, it might then create a new reference. With regard to identity, it is best to think of value-based classes like integers: talking about different instances of “3” (the int) makes no sense, and neither does talking about different instances of “11:42 pm” (the LocalTime). State If instances of value-based classes have no identity, their equality can only be determined by comparing their state (which is done by implementing equals). This has the important implication that two instances with equal state must be fully interchangeable, meaning replacing one such instance with another must not have any discernible effect. This indirectly determines what should be considered part of a value-based instance’s state. All fields whose type is a primitive or another value-based class can be part of it because they are also fully interchangeable (all “3”s and “11:42 pm”s behave the same). Regular classes are trickier. As operations might depend on their identity, a value-based instance cannot generally be exchanged for another if they both refer to equal but non-identical instances. As an example, consider locking on a String which is then wrapped in an Optional. At some other point, another String is created with the same character sequence and also wrapped. Then these two Optionals are not interchangeable, because even though both wrap equal character sequences, those String instances are not identical, and one functions as a lock while the other one doesn’t. Strictly interpreted, this means that instead of including the state of a reference field in its own state, a value-based class must only consider the reference itself. In the example above, the Optionals should only be considered equal if they actually point to the same string. This may be overly strict, though, as the given example, like other problematic ones, is admittedly somewhat contrived.
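The Optional scenario described above can be reproduced in a few lines. The string value is arbitrary, and new String(...) is used only to force two equal but non-identical instances:

```java
import java.util.Optional;

public class OptionalInterchangeability {
    public static void main(String[] args) {
        // Two equal but non-identical strings.
        String s1 = new String("shared-resource");
        String s2 = new String("shared-resource");

        Optional<String> o1 = Optional.of(s1);
        Optional<String> o2 = Optional.of(s2);

        System.out.println(o1.equals(o2));        // true  - Optional compares the wrapped state
        System.out.println(o1.get() == o2.get()); // false - the wrapped identities differ

        // If some code synchronizes on s1, swapping o1 for o2 hands out a
        // different monitor - the two "equal" Optionals are not interchangeable.
        synchronized (s1) {
            // critical section guarded by s1, not by s2
        }
    }
}
```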
And it is very counterintuitive to force value-based classes to ignore the state of “value-ish” classes like String and Integer. Value Type Boxes Being planned as boxes for value types adds some more requirements. These are difficult to explain without going deeper into value types, so I’m not going to do that now. Limitations First, it is important to note that in Java 8 all the limitations are purely artificial. The JVM does not know the first thing about these classes, and you can ignore all of the rules without anything going wrong – for now. But this might change dramatically when value types are introduced. As we have seen above, instances of value-based classes have no guaranteed identity, less leniency in defining equality, and should fit the expected requirements of boxes for value types. This has two implications: the class must be built accordingly, and instances of the class must not be used for identity-based operations. This is the basis for the limitations stated in the Javadoc, and they can hence be separated into limitations for the declaration of the class and the use of its instances.
Declaration Site Straight from the documentation (numbering and formatting mine): Instances of a value-based class:

1. are final and immutable (though may contain references to mutable objects);
2. have implementations of equals, hashCode, and toString which are computed solely from the instance’s state and not from its identity or the state of any other object or variable;
3. make no use of identity-sensitive operations such as reference equality (==) between instances, identity hash code of instances, or synchronization on an instance’s intrinsic lock;
4. are considered equal solely based on equals(), not based on reference equality (==);
5. do not have accessible constructors, but are instead instantiated through factory methods which make no commitment as to the identity of returned instances;
6. are freely substitutable when equal, meaning that interchanging any two instances x and y that are equal according to equals() in any computation or method invocation should produce no visible change in behavior.

With what was discussed above, most of these rules are obvious. Rule 1 is motivated by value-based classes being boxes for value types. For technical and design reasons those must be final and immutable, and these requirements are transferred to their boxes. Rule 2 murkily addresses the concerns about how to define the state of a value-based class. The rule’s precise effect depends on the interpretation of “the instance’s state” and “any other variable”. One way to read it is to include “value-ish” classes in the state and regard typical reference types as other variables. Rules 3 through 6 concern the missing identity. It is interesting to note that Optional breaks rule 2 because it calls equals on the wrapped value. Similarly, all value-based classes from java.time and java.time.chrono break rule 3 by being serializable (which is an identity-based operation – see below).
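As an illustration only – Fraction is a made-up class, not one of the JDK’s value-based types – here is a minimal sketch of what a class written to these declaration-site rules might look like:

```java
// Rule 1: final and immutable. Rule 5: private constructor plus a factory
// method that makes no commitment as to the identity of returned instances.
public final class Fraction {
    private final int numerator;
    private final int denominator;

    private Fraction(int numerator, int denominator) {
        this.numerator = numerator;
        this.denominator = denominator;
    }

    public static Fraction of(int numerator, int denominator) {
        return new Fraction(numerator, denominator);
    }

    // Rule 2: equals, hashCode, and toString are computed solely from state.
    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Fraction)) return false;
        Fraction other = (Fraction) o;
        return numerator == other.numerator && denominator == other.denominator;
    }

    @Override
    public int hashCode() {
        return 31 * numerator + denominator;
    }

    @Override
    public String toString() {
        return numerator + "/" + denominator;
    }
}
```

Callers would then treat any two equal Fractions as freely substitutable (rule 6) and never lock on one or compare them with == (rules 3 and 4).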
Use Site Again from the documentation: A program may produce unpredictable results if it attempts to distinguish two references to equal values of a value-based class, whether directly via reference equality or indirectly via an appeal to synchronization, identity hashing, serialization, or any other identity-sensitive mechanism. Considering the missing identity, it is straightforward that references should not be distinguished. There is no explanation, though, of why the listed examples violate that rule, so let’s have a closer look. I made a list of all violations I could come up with and included a short explanation and concrete cases for each (vbi stands for an instance of a value-based class):

- Reference comparison: This obviously distinguishes instances based on their identity.
- Serialization of a vbi: It is desirable to make value types serializable, and a meaningful definition for that seems straightforward. But as it is today, serialization makes promises about object identity which conflict with the notion of identity-less value-based classes. In its current implementation, serialization also uses object identity when traversing the object graph. So for now, it must be regarded as an identity-based operation which should be avoided. Cases: a non-transient field in a serializable class; direct serialization via ObjectOutputStream.writeObject.
- Locking on a vbi: Uses the object header to access the instance’s monitor – headers of value-based classes are free to be removed and recreated, and primitive/value types have no headers. Cases: use in a synchronized block; calls to Object.wait, Object.notify or Object.notifyAll.
- Identity hash code: This hash code is required to be constant over an instance’s lifetime. With instances of value-based classes being free to be removed and recreated, constancy cannot be guaranteed in a sense which is meaningful to developers. Cases: argument to System.identityHashCode; key in an IdentityHashMap.

Comments highlighting other violations or improving upon the explanations are greatly appreciated! FindBugs Of course it is good to know all this, but a tool which keeps you from overstepping the rules would still be really helpful. Being a heavy user of FindBugs, I decided to ask the project to implement this and created a feature request. This ticket covers the use-site limitations and will help you uphold them for the JDK’s as well as your own value-based classes (marked with an annotation). Being curious about FindBugs and wanting to contribute, I decided to set out and try to implement it myself. So if you’re asking why it takes so long to get that feature ready, now you know: it’s my fault. But talk is cheap, so why don’t you join me and help out? I put a FindBugs clone up on GitHub and you can see the progress in this pull request. As soon as that is done, I plan to implement the declaration-site rules as well, so you can be sure your value-based classes are properly written and ready when value types finally roll around. Reflection We have seen that value-based classes are the precursor of value types. With the changes coming to Java, these instances will have no meaningful identity and limited possibilities to define their state, which creates limitations both for their declaration and their use. These limitations were discussed in detail. Reference: Value-Based Classes from our JCG partner Nicolai Parlog at the CodeFx blog....

Hibernate locking patterns – How does OPTIMISTIC_FORCE_INCREMENT Lock Mode work

Introduction In my previous post, I explained how the OPTIMISTIC Lock Mode works and how it can help us synchronize external entity state changes. In this post, we are going to unravel the OPTIMISTIC_FORCE_INCREMENT Lock Mode usage patterns. With LockModeType.OPTIMISTIC, the locked entity version is checked towards the end of the currently running transaction, to make sure we don’t use a stale entity state. Because of its application-level validation nature, this strategy is susceptible to race conditions, therefore requiring an additional pessimistic lock. LockModeType.OPTIMISTIC_FORCE_INCREMENT not only checks the expected locked entity version, but also increments it. Both the check and the update happen in the same UPDATE statement, therefore making use of the current database transaction isolation level and the associated physical locking guarantees. It is worth noting that the locked entity version is bumped up even if the entity state hasn’t been changed by the current running transaction. A Centralized Version Control Use Case As an exercise, we are going to emulate a centralized Version Control System, modeled as follows: The Repository is our system root entity and each state change is represented by a Commit child entity. Each Commit may contain one or more Change components, which are propagated as a single atomic unit of work. The Repository version is incremented with each new Commit. For simplicity’s sake, we only verify the Repository entity version, although a more realistic approach would surely check each individual file version instead (to allow non-conflicting commits to proceed concurrently).
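The check-and-bump semantics can be sketched in plain Java. This is not Hibernate API – the in-memory version map and method names are stand-ins for the guarded UPDATE statement ("update repository set version=? where id=? and version=?") that Hibernate issues:

```java
import java.util.HashMap;
import java.util.Map;

public class ForceIncrementSketch {

    // Stand-in for the repository table's version column.
    static final Map<Long, Integer> versions = new HashMap<>();

    // Returns true iff the expected version still matches, bumping it atomically;
    // Hibernate turns a zero-row update into a StaleObjectStateException.
    static synchronized boolean checkAndIncrement(long id, int expectedVersion) {
        Integer current = versions.get(id);
        if (current == null || current != expectedVersion) {
            return false; // conflict: another transaction committed first
        }
        versions.put(id, expectedVersion + 1);
        return true;
    }

    public static void main(String[] args) {
        versions.put(1L, 0);
        System.out.println(checkAndIncrement(1L, 0)); // true  - first commit bumps version to 1
        System.out.println(checkAndIncrement(1L, 0)); // false - the second expected version is stale
    }
}
```

The key property is that the version comparison and the increment happen in one atomic step, so two concurrent committers can never both succeed against the same expected version.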
Testing time First, we should check if the OPTIMISTIC_FORCE_INCREMENT Lock Mode suits our use case requirements:

doInTransaction(new TransactionCallable<Void>() {
    @Override
    public Void execute(Session session) {
        Repository repository = (Repository) session.get(Repository.class, 1L);
        session.buildLockRequest(new LockOptions(LockMode.OPTIMISTIC_FORCE_INCREMENT)).lock(repository);
        Commit commit = new Commit(repository);
        commit.getChanges().add(new Change("README.txt", "0a1,5..."));
        commit.getChanges().add(new Change("web.xml", "17c17..."));
        session.persist(commit);
        return null;
    }
});

This code generates the following output:

#Alice selects the Repository and locks it using an OPTIMISTIC_FORCE_INCREMENT Lock Mode
Query:{[select lockmodeop0_.id as id1_2_0_, lockmodeop0_.name as name2_2_0_, lockmodeop0_.version as version3_2_0_ from repository lockmodeop0_ where lockmodeop0_.id=?][1]}

#Alice makes two changes and inserts a new Commit
Query:{[select lockmodeop0_.id as id1_2_0_, lockmodeop0_.name as name2_2_0_, lockmodeop0_.version as version3_2_0_ from repository lockmodeop0_ where lockmodeop0_.id=?][1]}
Query:{[insert into commit (id, repository_id) values (default, ?)][1]}
Query:{[insert into commit_change (commit_id, diff, path) values (?, ?, ?)][1,0a1,5...,README.txt]}
Query:{[insert into commit_change (commit_id, diff, path) values (?, ?, ?)][1,17c17...,web.xml]}

#The Repository version is bumped up
Query:{[update repository set version=? where id=? and version=?][1,1,0]}

Our user has selected a Repository and issued a new Commit. At the end of her transaction, the Repository version is incremented as well (therefore recording the new Repository state change). Conflict detection In our next example, we are going to have two users (Alice and Bob) concurrently committing changes. To avoid losing updates, both users acquire an explicit OPTIMISTIC_FORCE_INCREMENT Lock Mode.
Before Alice gets the chance to commit, Bob has just finished his transaction and incremented the Repository version. Alice’s transaction will be rolled back, throwing an unrecoverable StaleObjectStateException. To emulate the conflict detection mechanism, we are going to use the following test scenario:

doInTransaction(new TransactionCallable<Void>() {
    @Override
    public Void execute(Session session) {
        Repository repository = (Repository) session.get(Repository.class, 1L);
        session.buildLockRequest(new LockOptions(LockMode.OPTIMISTIC_FORCE_INCREMENT)).lock(repository);

        executeAndWait(new Callable<Void>() {
            @Override
            public Void call() throws Exception {
                return doInTransaction(new TransactionCallable<Void>() {
                    @Override
                    public Void execute(Session _session) {
                        Repository _repository = (Repository) _session.get(Repository.class, 1L);
                        _session.buildLockRequest(new LockOptions(LockMode.OPTIMISTIC_FORCE_INCREMENT)).lock(_repository);
                        Commit _commit = new Commit(_repository);
                        _commit.getChanges().add(new Change("index.html", "0a1,2..."));
                        _session.persist(_commit);
                        return null;
                    }
                });
            }
        });

        Commit commit = new Commit(repository);
        commit.getChanges().add(new Change("README.txt", "0a1,5..."));
        commit.getChanges().add(new Change("web.xml", "17c17..."));
        session.persist(commit);
        return null;
    }
});

The following output is generated:

#Alice selects the Repository and locks it using an OPTIMISTIC_FORCE_INCREMENT Lock Mode
Query:{[select lockmodeop0_.id as id1_2_0_, lockmodeop0_.name as name2_2_0_, lockmodeop0_.version as version3_2_0_ from repository lockmodeop0_ where lockmodeop0_.id=?][1]}

#Bob selects the Repository and locks it using an OPTIMISTIC_FORCE_INCREMENT Lock Mode
Query:{[select lockmodeop0_.id as id1_2_0_, lockmodeop0_.name as name2_2_0_, lockmodeop0_.version as version3_2_0_ from repository lockmodeop0_ where lockmodeop0_.id=?][1]}

#Bob makes a change and inserts a new Commit
Query:{[insert into commit (id, repository_id) values (default, ?)][1]}
Query:{[insert into commit_change (commit_id, diff, path) values (?, ?, ?)][1,0a1,2...,index.html]}

#The Repository version is bumped up to version 1
Query:{[update repository set version=? where id=? and version=?][1,1,0]}

#Alice makes two changes and inserts a new Commit
Query:{[insert into commit (id, repository_id) values (default, ?)][1]}
Query:{[insert into commit_change (commit_id, diff, path) values (?, ?, ?)][2,0a1,5...,README.txt]}
Query:{[insert into commit_change (commit_id, diff, path) values (?, ?, ?)][2,17c17...,web.xml]}

#The Repository version is bumped up to version 1 and a conflict is raised
Query:{[update repository set version=? where id=? and version=?][1,1,0]}
INFO [main]: c.v.h.m.l.c.LockModeOptimisticForceIncrementTest - Failure: org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect) : [com.vladmihalcea.hibernate.masterclass.laboratory.concurrency.LockModeOptimisticForceIncrementTest$Repository#1]

This example exhibits the same behavior as the typical implicit optimistic locking mechanism. The only difference lies in the version change originator. While implicit locking only works for modified entities, explicit locking can be applied to any managed entity (disregarding the entity state change requirement). Conclusion OPTIMISTIC_FORCE_INCREMENT is therefore useful for propagating a child entity state change to an unmodified parent entity. This pattern can help us synchronize various entity types, by simply locking a common parent of theirs. When a child entity state change has to trigger a parent entity version increment, the explicit OPTIMISTIC_FORCE_INCREMENT lock mode is probably what you are after. Code available on GitHub. Reference: Hibernate locking patterns – How does OPTIMISTIC_FORCE_INCREMENT Lock Mode work from our JCG partner Vlad Mihalcea at the Vlad Mihalcea’s Blog blog....

Unit Test, System Test, Red Test, Green Test

We tend to categorize different types of tests according to what they cover. Unit tests cover small portions of code, usually a method or a class, while we mock the rest of their interactions. Integration tests cover several components in concert, and mock the remaining boundaries. System tests and their bigger brothers, end-to-end tests, cover more and more. These categories are semantically correct, and they are relevant in the context of “how many should we have”, as in the testing pyramid. The testing pyramid tells us that we should have more unit tests than integration tests, and more integration tests than end-to-end tests. There are a few reasons for that:

- We can cover more code with smaller tests. Mathematically, covering small parts of the code grows linearly, while covering longer workflows (the different options of interactions between components) grows exponentially.
- Unit tests are easier to write, because they require less setup. System tests require larger set-ups that are harder to write. While we can write both, chances are we’ll write more unit tests than system tests.
- Unit tests are less fragile than their larger counterparts. System and integration tests usually depend on environments and available resources, and if those disappear for some reason, the tests fail, although that is not what they are actually testing.
- There are many paths in the code that can’t be covered by big tests, but can be covered by unit tests. You may question why this is so (I usually do), and usually the answer is YAGNI. But because the code is there, it had better be tested, and unit tests do a better job at that.
- When using Test First, TDD’s feedback cycle is much shorter than ATDD’s. TDD lends itself to producing more tests, and those come at the unit level.

So you’re probably asking – if unit tests do such a marvelous job, why do I need the other tests?
The answer, my friend, lies in the upside-down test-trust pyramid: because unit tests don’t exercise the system as a whole, while tested system flows prove that the system works, we tend to trust the bigger tests more. The longer the flow, the more trust it gains and the more our confidence in the tests grows. Oh. So end-to-end tests must be the best of the batch. Why not write more of those? We could revisit the former list, but instead, I want to talk about value. What is the value of a test? Automated tests (and tests in general) give us feedback. That’s their main value, but not their only one. You see, if every test passed, the value of the end-to-end test would be the largest. It gives us more confidence that our system works, compared to unit tests that cover the same code. But that changes when tests fail. When a system test fails, we don’t even know what to look for: Is the problem functional? Environmental? At which point of the flow did the problem occur? However, when a unit test covering some of the code that the same failing system test goes through fails, we get a more focused, isolated description of the problem, which is easier to solve. So what kind of tests should we write? Going back to the test pyramid: more unit tests than system tests. We’d like the confidence the green system tests give us, but we’d like to solve the problems with their smaller siblings. Remember: green system tests give us more confidence, but the value of a red unit test is bigger than that of a red system test. Reference: Unit Test, System Test, Red Test, Green Test from our JCG partner Gil Zilberfeld at the Geek Out of Water blog....

The Joel Test Updated For Programmers

A while back—the year 2000 to be exact—Joel Spolsky wrote a blog post entitled: “The Joel Test: 12 Steps to Better Code.” Many software engineers and developers use this test for evaluating a company, to determine if it is a good company to work for. In fact, many software development organizations use the Joel Test as a sort of self-test to determine what they need to work on. Here is what the Joel Test looks like, in case you aren’t familiar:

The Joel Test
1. Do you use source control?
2. Can you make a build in one step?
3. Do you make daily builds?
4. Do you have a bug database?
5. Do you fix bugs before writing new code?
6. Do you have an up-to-date schedule?
7. Do you have a spec?
8. Do programmers have quiet working conditions?
9. Do you use the best tools money can buy?
10. Do you have testers?
11. Do new candidates write code during their interview?
12. Do you do hallway usability testing?

And here’s how it is applied, according to Joel: “The neat thing about The Joel Test is that it’s easy to get a quick yes or no to each question. You don’t have to figure out lines-of-code-per-day or average-bugs-per-inflection-point. Give your team 1 point for each “yes” answer. The bummer about The Joel Test is that you really shouldn’t use it to make sure that your nuclear power plant software is safe. A score of 12 is perfect, 11 is tolerable, but 10 or lower and you’ve got serious problems. The truth is that most software organizations are running with a score of 2 or 3, and they need serious help, because companies like Microsoft run at 12 full-time.” But what about a “Joel Test” for programmers? The Joel Test is great for software development shops and for programmers that are interested in quickly evaluating a company’s software development environment, but what about a Joel Test for actual programmers?
Several people have asked me lately if I had an idea for a Joel Test to evaluate an actual programmer, rather than a software development organization, so I’ve decided to put together a list of what I think might be a good equivalent of the Joel Test for evaluating the skills of an actual software developer. Here is what I came up with. I’ll list them out and then dig a little deeper into each one, just as Joel did in his original post.

The Simple Programmer Test
1. Can you use source control effectively?
2. Can you solve algorithm-type problems?
3. Can you program in more than one language or technology?
4. Do you do something to increase your education or skills every day?
5. Do you name things appropriately?
6. Can you communicate your ideas effectively?
7. Do you understand basic design patterns?
8. Do you know how to debug effectively?
9. Do you test your own code?
10. Do you share your knowledge?
11. Do you use the best tools for your job?
12. Can you build an actual application?

Obviously, this is a simple test that doesn’t incorporate anywhere close to everything that makes a good software developer, but you can use this test to self-score—or perhaps when interviewing a candidate to get a good idea of general aptitude. For self-scoring, it’s of course important to honestly evaluate yourself based on these 12 criteria and only score yourself a point if you can confidently say you meet the criteria for that point. If you are honestly coming up with a score below an 8, don’t lose heart. It just means you have plenty of things to work on. I would say that anyone with an 8+ score should have a pretty easy time finding a good job in software development. 1. Can you use source control effectively? (I hate to use the word effectively, but I have chosen to use it throughout this list, simply because it really is one thing to be able to do something and another thing entirely to be able to do it effectively.
So, while “effectively” is subjective, I’ll do my best to define it as objectively as possible in each description here.) Yes, just about any developer can check in and check out files from source control, but using source control effectively means more than just understanding the basics of pulling down the source code from a repository and making a commit. Different source control technologies have different ways of using them, but regardless of what source control technology you use, you should know how to use it to do more than just check out and check in code. To use source control effectively, you should understand concepts like branching and merging. You should know how to use your source control system to go back and look at revisions of code to compare them. You should know how to resolve conflicts when merging and understand how you can use branching or work-spaces to work in isolation or together with teammates when appropriate. Since source control is so important to just about every software developer, you should really be an expert at using whatever source control technology you are using and understand the basic concepts that apply to just about all source control systems. 2. Can you solve algorithm-type problems? I’m amazed by how many programmers can’t solve fairly simple algorithm problems—especially since these kinds of problems are almost always asked at any major coding interview. As a software developer, you should be able to solve a basic algorithm problem like: “Write a function to determine if two strings are anagrams of each other.” And you should be able to solve problems a lot more difficult than that—on a whiteboard. This is kind of the bread-and-butter for a programmer. Even though most real world problems don’t resemble the type of algorithm problems you are often asked at a job interview, the same types of problems do exist in the real world. 
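For the anagram question mentioned above, a straightforward whiteboard-style solution sorts both strings and compares the results:

```java
import java.util.Arrays;

public class Anagrams {
    // Two strings are anagrams iff their sorted characters match.
    // O(n log n) from the sorts - simple and good enough on a whiteboard.
    static boolean areAnagrams(String a, String b) {
        if (a.length() != b.length()) {
            return false; // different lengths can never be anagrams
        }
        char[] ca = a.toCharArray();
        char[] cb = b.toCharArray();
        Arrays.sort(ca);
        Arrays.sort(cb);
        return Arrays.equals(ca, cb);
    }

    public static void main(String[] args) {
        System.out.println(areAnagrams("listen", "silent")); // true
        System.out.println(areAnagrams("hello", "world"));   // false
    }
}
```

A follow-up an interviewer might ask: can you do it in O(n) instead? (Count character frequencies in one pass rather than sorting.)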
In fact, the only way to really recognize them is to have some experience solving those types of problems in general. I’ve written many times about solving these types of problems and provided many different resources on the topic, so I’m not going to repeat myself here, but take a look at some of these posts if you are interested in improving your skills in this area:

Solving Problems, Breaking it Down
Cracking the Coding Interview: 12 Things You Need to Know

3. Can you program in more than one language or technology? The more you work in this industry, the more you will realize how pointless it is to be religious about technology and how beneficial it is to be well-versed in more than one programming language or technology stack. The best programmers are able to use the best tool for the job. They realize that different problems are best solved by different technologies. They don’t try to just use what they know, and they don’t become married to a particular technology or programming language just because it is what they know. A really good programmer will try to acquire a broad set of experiences and knowledge by learning more than one programming language and will invest some time in educating themselves in multiple technology stacks. Sure, specialization is important, but it is also important to be well-versed in multiple areas of technology, even if you don’t use that knowledge on a day-to-day basis. In my mind, the time period in my life when I really transitioned from an average developer to a good or really good developer was when I worked outside of my comfort zone and started doing Java development even though my inclination and most of my experience was in C# and .NET. After making the transition to a 2nd programming language and a completely different technology stack, I brought a much broader perspective to future projects I worked on and was no longer limited by the tunnel vision that working in a single technology can give you. 4.
Do you do something to increase your education or skills every day? One of the first questions I ask any software developer who I am interviewing is what they do to keep up on technology and how they stay up-to-date. I’ve found that programmers who have some kind of self-education plan in place are the kinds of programmers who end up doing the best work, are the kinds of developers I like working with the most and are the most productive overall. In today’s environment of new technologies and advancements coming out literally every day, it is impossible to stay relevant if you do not have some kind of habit in place to keep up. You should be doing something every day—even if it is small—to advance your skills or learn something new. Just taking 15 minutes a day to read programming blogs (like this one), read a technical book or do something else to improve your skills makes a huge difference over the course of a few years’ time. 5. Do you name things appropriately? One of the most difficult, yet important, parts of your job is to name things. A good software developer writes clean, easily understandable code. It’s impossible to write clean and easily understandable code unless you are good at naming variables, methods, classes, and anything else that you create which requires a name. If I look at your code and see how you’ve chosen to name things, it tells me a lot about how you think and how much you understand the importance of writing code that not just does what it’s supposed to do, but is easy to understand and maintain. A good book that will teach you how to name things properly is “Code Complete.” Another good one is “Clean Code.” 6. Can you communicate your ideas effectively? You can be the best software engineer in the world, but if you can’t effectively communicate your ideas, you won’t be very useful to a team. Communication is a critical skill that affects just about everything we do in the software development world.
From writing emails, to explaining architecture ideas on whiteboards, to communicating with customers and stakeholders, software development involves a whole lot of communication. One great way to develop your communication skills is to write regularly. My own communication skills increased by leaps and bounds once I started regularly writing on this blog. I know many other software developers who have had similar results.

7. Do you understand basic design patterns?

You don’t necessarily have to use design patterns very often to be a good software developer, but you should at least understand the most common design patterns used in the technologies and programming languages you work with. The design patterns used most commonly with object-oriented programming languages are different from those used with functional languages—or rather, many of them are expressed in the use of the language itself—but you should be aware of at least the most common ones. If you haven’t already read the Gang of Four book, “Design Patterns,” do so. If you want an easier-to-digest version, try “Head First Design Patterns.”

8. Do you know how to debug effectively?

This is a tricky one. Many software developers think they know how to debug, but what they really know how to do is use a debugger. Don’t get me wrong, you need to know how to use a debugger, but debugging is more than just stepping through lines of code looking for something that might be wrong. A good programmer knows that debugging involves starting with a hypothesis of what is wrong and then using the debugger only to prove or disprove that hypothesis. It’s possible to waste hours trying to debug a problem that could have been debugged in less than half the time by taking the right approach. Since most software developers spend more time debugging their code than writing new code, a developer who can debug effectively is extremely valuable.

9. Do you test your own code?

Quality and testing are not solely the responsibility of testers and the QA department. Quality is everyone’s responsibility—especially a developer’s. A good programmer will take responsibility for the quality of their own code and make sure they test that code before they hand it over to QA or anyone else. Someone else should always test your code, but you should test it as best you can before you hand it over to someone else to test. I firmly believe part of being a good software developer is being a good tester. One of my favorite books on testing—a bit dated, but still a good read—is “Testing Computer Software.”

10. Do you share your knowledge?

One of the hallmarks of an excellent developer is that they openly and freely share their knowledge. Not only does it help the team and the other developers around them, but I firmly believe you never really learn something until you teach it. The best programmers are always sharing their knowledge with others. They aren’t afraid for their job security or that they might be giving away their secrets. The most valuable person on any team is the person who makes everyone else on the team more valuable, not the person who knows the most. Knowing a lot but not sharing it doesn’t do anyone any good but yourself.

11. Do you use the best tools for your job?

A really good programmer will always have a set of tools they use to be more effective at their job. It doesn’t matter what your preference of tools is, but you should have some set of tools you deem best for the job you are doing, and you should have invested some time in learning those tools. A developer who really cares about what they do will take the time to find the tools that will help them do it better. Take Scott Hanselman, for example: he has an excellent list of developer tools (Windows based). Your set of tools might be different, but tools are important. What is it they always say? The right tool for the right job.
You can spend hours trying to turn a gasket on a pipe using different wrenches and pliers from your toolbox, or you can spend a couple of seconds accomplishing the same task with a monkey wrench. (Trust me, I’ve learned that one the hard way.)

12. Can you build an actual application?

Being able to write code is not enough. There are plenty of software developers out there who can make alterations to existing code bases and fix bugs, but far fewer who could actually write an entire application from the ground up. A good software developer might not be able to develop a huge enterprise application or large software suite on their own, but they should be able to write at least some kind of simple application by themselves. Being able to create an entire application—even if it is a small one—shows a fundamental understanding of how software works and how it is constructed. For a large portion of my career, I had no idea how to do this. I could fix bugs, I could alter some feature in an existing application, I could perhaps even add a new feature to an application, but I had no idea how to create my own application from the ground up. It was only after I had created some small side projects on my own and actually built a real application that I really understood how all the pieces of a complex software system work and how to put them together.

Is this list complete? Should you beat someone over the head with it?

No to both. These are just some guidelines you can use to see where you stand and what you can work on. Software development is a complex field. There is never going to be a checklist you can use to determine whether you or anyone else is a good developer, but I do believe a general set of guidelines is useful for getting a rough idea of where you or someone you are interviewing stands, and I believe this list can be used as a quick way to identify any weaknesses you may want to work on.
If you liked this post and you’d like more career tips for software developers, be sure to sign up for my free weekly emails. Also, I’d love to hear from you about this test. Do you think it’s useful? Is there something I missed that I should have included? I had a list of about 30 items and narrowed it down to what I thought were the top 12.

Reference: The Joel Test Updated For Programmers from our JCG partner John Sonmez at the Making the Complex Simple blog.

Default methods and multiple inheritance

Recently Lukas JOOQ Eder posted an article about nested classes and their use. This is an interesting topic and his article is, as always, interesting and worth reading. There was only one slight statement I could not agree with, and we had a brief reply chain leading to default methods and why there can not be something like

class Outer {
  <non-static> interface Inner {
    default void x() {
      System.out.println(Outer.this.toString());
    }
  }

  Inner2 y() {
    return new Inner2();
  }
}

class Inner2 implements Inner {
}

// This would now print Outer.toString()
// to the console
new Outer().y().x();

in Java. In the above code the default method of an inner interface would refer to the instance that is enclosing the interface, so to say. I believed that a “reply” was not the best communication form for this, as the original topic was different, and so here I go.

What are default methods?

You probably know. If not, google it, or read my articles “Java 8 default methods: what can and can not do?” and “How not to use Java 8 default methods.” If you googled, you can see that default methods in Java 8 bring the Canaan: multiple inheritance is available. There is a very good discussion about it on Stack Overflow with real professionals who do know Java:

Java has always had multiple inheritance of types. Default methods add multiple inheritance of behavior, but not of state. (Multiple inheritance of state in languages like C++ is where most of the trouble comes from.) – Brian Goetz Jun 21 ’14 at 2:05

In this article I will examine a little how to interpret and understand that statement.

Types of inheritance

The quote from Brian Goetz mentions:

inheritance of types,
inheritance of behavior, and
inheritance of state.

Inheritance of types is very easy and well known to Java programmers. You define abstract methods in the interface, but you do not specify how they work, only the return value and the signature of the methods.
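To see the “multiple inheritance of behavior” part of Goetz’s statement in action, here is a minimal sketch (the interface and class names are mine, not from the article): when a class inherits the same default method signature from two interfaces, the compiler forces it to resolve the conflict itself, typically via the InterfaceName.super syntax.

```java
interface Left {
    default String greet() { return "left"; }
}

interface Right {
    default String greet() { return "right"; }
}

// Implementing both interfaces inherits two behaviors with the same
// signature, so the compiler forces Both to resolve the conflict.
class Both implements Left, Right {
    @Override
    public String greet() {
        // Explicitly pick one inherited behavior (or combine them).
        return Left.super.greet() + "+" + Right.super.greet();
    }
}

public class DiamondDemo {
    public static void main(String[] args) {
        System.out.println(new Both().greet()); // left+right
    }
}
```

Note that no state is involved anywhere: only behavior is inherited twice, which is exactly why the language can allow it.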
With default methods Java 8 introduced inheritance of behavior without inheritance of state. But can you really have inheritance of behavior without inheritance of state? Not really. At least in Java 8 you can have inheritance of state, though this is not recommended, does not perform well (I mean: it may be slow) and is also cumbersome and error prone to program. But you can, and I will show here how. (In addition to the thread-local nonsense I published in the article I referred to above.) I believe the Java 8 inventors wanted the default method to keep backward compatibility while implementing the functional interfaces (e.g. streams) in the standard runtime. I recently watched the series Fargo and I feel the language designers just obliviously answered “yes” to the question “Is that what you really want?”

State inheritance with default methods

Default methods can not access fields (except static fields, which are final anyway in interfaces, so let’s forget them for a while). Just like you can not access the private fields of class A from a class B extending A. Or the other way around: you can not access the private fields of B from A. You can however have getters and setters in B, and if you declare them as abstract methods in A, you gain the access. Open sesame. Getters and setters are the solution. When you declare abstract methods in an interface for all the state fields you want to access from default methods, you can access them. This way you get the very same result as if there were real state inheritance. The difference is the syntax: you use getter and setter methods instead of the field name, and you have to declare these in the interface. That way the compile phase checks that the getters and setters are really there. You can see that things with Java 8 get really complicated. Mix that up with generics and you may not find a living soul who understands it all.
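A minimal sketch of this getter/setter trick (the names Counter and CounterImpl are mine, not from the article): the default method reads and writes “inherited” state exclusively through abstract accessors that the implementing class must provide.

```java
interface Counter {
    // Abstract accessors stand in for the inherited field.
    int getCount();
    void setCount(int count);

    // The default method manipulates state it does not own,
    // going through the accessors declared above.
    default void increment() {
        setCount(getCount() + 1);
    }
}

class CounterImpl implements Counter {
    private int count; // the actual state lives in the implementing class

    public int getCount() { return count; }
    public void setCount(int count) { this.count = count; }
}

public class StateDemo {
    public static void main(String[] args) {
        Counter counter = new CounterImpl();
        counter.increment();
        counter.increment();
        System.out.println(counter.getCount()); // 2
    }
}
```

The compiler enforces that CounterImpl supplies both accessors, so the default method behaves as if the state were inherited, even though the interface never owns a field.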
Having a construct like Outer.this.toString() from the sample code above would make it even more complex, with no real leverage, probably. I believe I have some knowledge about what default methods are in Java 8 and what you can do with them. Having 10 years of Java and more than 30 years of programming experience, however, is not enough for me to tell you how you should use default methods. I envy the developers who still work with Java 1.6 or earlier in production code: they need not worry about default methods. (It was meant to be a joke.) Even so, I try to give some advice.

Recommendation

Never mimic state inheritance in default methods. It is hard to tell what that means in practice, though. Calling a getter or setter clearly is. Calling some abstract methods that are implemented in the implementing class may or may not be. If in doubt: better not. Never ever use the thread-local trick I wrote about in the other article. Use default methods for what the Java language inventors used them for: keeping backward compatibility in your library interfaces. If you ever released a library and it contains an interface (how could it be otherwise, btw), do not change it… Think about client code using your library that implements the interface. From Java 8 on, you have the possibility to finish the sentence: do not change it incompatibly. If there is a new method, create a default implementation so that code which already implemented the previous version remains compatible and there is no need to extend those classes.

Reference: Default methods and multiple inheritance from our JCG partner Peter Verhas at the Java Deep blog.

OSGi Service Test Helper: ServiceRegistrationRule

OSGi service tests can be an efficient means to avoid problems related to dangling service references. As promised in my post about writing simple service contribution verifications, this time I introduce a JUnit rule that assists in testing interactions between components.

OSGi Service Tests for Component Interaction

Assume we have a service that notifies related observers bound according to the whiteboard pattern. Precisely, we have a Service declaration and a ServiceImpl as in the previous post. Additionally we support ServiceListeners that should be notified on particular actions. To represent such an action we broaden the service interface of our example with a method declaration called Service#execute():

public interface Service {
  void execute();
}

Beside the implementation of this execute method, the contribution class has to provide the capabilities to bind and unbind ServiceListener references:

public class ServiceImpl implements Service {

  public void execute() {
    [...]
  }

  public void bind( ServiceListener listener ) {
    [...]
  }

  public void unbind( ServiceListener listener ) {
    [...]
  }
}

As notification destination the callback type ServiceListener provides a method declaration called ServiceListener#executed():

public interface ServiceListener {
  void executed();
}

To complete the setup we have to register the service component, which we do again via declarative services.
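The post elides the ServiceImpl bodies; one plausible whiteboard-style implementation (my sketch, not the article’s code) keeps the bound listeners in a thread-safe list and notifies each of them from execute():

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

interface ServiceListener {
    void executed();
}

interface Service {
    void execute();
}

class ServiceImpl implements Service {

    // CopyOnWriteArrayList tolerates listeners being bound or unbound
    // concurrently with a running notification loop.
    private final List<ServiceListener> listeners = new CopyOnWriteArrayList<>();

    @Override
    public void execute() {
        // perform the actual work here, then notify all bound observers
        for (ServiceListener listener : listeners) {
            listener.executed();
        }
    }

    public void bind(ServiceListener listener) {
        listeners.add(listener);
    }

    public void unbind(ServiceListener listener) {
        listeners.remove(listener);
    }
}

public class WhiteboardDemo {
    public static void main(String[] args) {
        ServiceImpl service = new ServiceImpl();
        service.bind(() -> System.out.println("executed() called"));
        service.execute();
    }
}
```

In a real bundle the declarative services runtime would call bind and unbind itself, driven by the reference declaration shown below.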
Note the additional 0..n reference declaration:

<?xml version="1.0" encoding="UTF-8"?>
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.1.0"
    immediate="true"
    name="Implementation of Service API">
  <implementation class="com.codeaffine.example.core.ServiceImpl"/>
  <service>
    <provide interface="com.codeaffine.example.api.Service"/>
  </service>
  <reference bind="bind"
      unbind="unbind"
      cardinality="0..n"
      interface="com.codeaffine.example.api.ServiceListener"
      name="ServiceListener"
      policy="dynamic" />
</scr:component>

Now the question is: how can we test that un-/binding of a listener works correctly and that notifications are dispatched as expected? The basic idea is to register a ServiceListener spy and trigger Service#execute on the actual service implementation. The spy records calls to executed and allows us to verify that binding and notification work as expected. Once we have ensured this, we can go on and deregister a previously registered spy and verify that it is not notified about a subsequent action event. This makes sure unbinding also works as planned. However, the test fixture for this scenario usually needs a bit of OSGi boilerplate. To reduce the clutter I have written a little JUnit rule that eases service registration and automatically performs a service registry cleanup after each test run.

ServiceRegistrationRule

As every other JUnit TestRule, the ServiceRegistrationRule has to be provided as a public field in our PDE test. Note how the rule uses a parameterized constructor given the class instance of the test case. This reference is used to get hold of an appropriate BundleContext for service de-/registration.
@Rule
public final ServiceRegistrationRule serviceRegistration
  = new ServiceRegistrationRule( getClass() );

private ServiceListener listener;
private Service service;

@Before
public void setUp() {
  service = collectServices( Service.class, ServiceImpl.class ).get( 0 );
  listener = mock( ServiceListener.class );
}

The implicit test setup retrieves the registered service under test using the ServiceCollector I introduced in the last post. The listener DOC is created as a spy using Mockito. The first test scenario described above looks like this:

@Test
public void executeNotification() {
  serviceRegistration.register( ServiceListener.class, listener );

  service.execute();

  verify( listener ).executed();
}

Pretty straightforward, isn’t it? Note that the ServiceRegistrationRule takes care of cleanup and removes the spy service from the service registry. To facilitate a test for the unbind scenario, the rule’s register method returns a handle to the service registration:

@Test
public void executeAfterListenerRemoval() {
  Registration registration
    = serviceRegistration.register( ServiceListener.class, listener );
  registration.unregister();

  service.execute();

  verify( listener, never() ).executed();
}

The call to registration.unregister() removes the listener spy from the service registry. This triggers an unbind, and the listener never gets invoked. Of course a real-world scenario could add additional tests for multiple listener registrations, exception handling and the like, but I think the concept has been made clear.

Conclusion

So far the ServiceRegistrationRule has proven itself quite useful in our current project. It reduces boilerplate significantly, makes the tests cleaner and increases readability.
The class is part of the com.codeaffine.osgi.test.util feature of the Xiliary P2 repository: http://fappel.github.io/xiliary In case you want to have a look at the code or file an issue, you might also check out the Xiliary GitHub project: https://github.com/fappel/xiliary For everything else feel free to use the commenting section below. In a follow-up I will explain how to set up a maven-tycho build with integrated PDE tests like those described above. This is somewhat tricky, as tycho does not allow access to the bundles built by the current reactor, so stay tuned.

Reference: OSGi Service Test Helper: ServiceRegistrationRule from our JCG partner Rudiger Herrmann at the Code Affine blog.
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.