
Is pairing for everybody?

Pair programming is a great way to share knowledge. But every developer is different, so does pairing work for everyone?

Pairing helps a team normalise its knowledge: what one person knows, everyone else learns through pairing — keyboard shortcuts, techniques, practices, third-party libraries, as well as the details of the source code you’re working in. This pushes up the average level of the team and stops knowledge becoming siloed. Pairing also helps with discipline: it’s a lot harder to argue that you don’t need a unit test when there’s someone sitting next to you, literally acting as your conscience. It’s also a lot harder to do the quick and dirty hack to get on to the next task when the person sitting next to you has taken control of the keyboard to stop you committing war crimes against the source code.

The biggest problem most teams face is basically one of communication: coordinating, in detail, the activities of a team of developers is difficult. Ideally, every developer would know everything that is going on across the team, but this clearly isn’t practical. Instead, we draw boundaries to make it easier to reason about the system as a whole without knowing the whole system to the same level of detail. I’ll create an API, some boundary layer, and we each work to our own side of it. I’ll create the service, you sort out the user interface. I’ll sort out the network protocol, you sort out the application layer. You have to introduce an architectural boundary to simplify the communication and coordination. Your architecture immediately reflects the relationships of the developers building it.

On teams that pair, these boundaries can be softer. They still happen, but as pairs rotate you see both sides of any boundary, so it doesn’t become a black box you don’t know about and can’t change. One day I’m writing the user interface code, the next I’m writing the service layer that feeds it.
This is how you spot inconsistencies and opportunities to fix the architecture and take advantage of implementation details on both sides. Otherwise this communication is hard. Continuous pair rotation means you can get close to the ideal where each developer knows, broadly, what is happening everywhere.

However, let’s be honest: pairing isn’t for everyone. I’ve worked with some people who were great at pairing, who were a pleasure to work with. People who had no problem explaining their thought process and no ego to get bruised when you point out the fatal flaw in their idea. People who spot when you’ve lost the train of thought and pick up where you drifted off. A good pairing session becomes very social, and a team that is pairing can sound very noisy. It can be one of the hardest things to get used to when you start pairing: I seem to spend my entire day arguing and talking. When are we gonna get on and write some damned code? But that just highlights how little of the job is actually typing in source code. Most of the day is figuring out which change to make and where. A single line of code can take hours of arguing to get right and in the right place.

But programming tends to attract people who are less sociable than others. Let’s face it, we’re a pretty anti-social bunch: I spend my entire day negotiating with a machine that works in 1s and 0s. Not for me the subtle nuances of human communication; it either compiles or it doesn’t. I don’t have to negotiate or try to out-politick the compiler. I don’t have to deal with the compiler having “one of those days” (well, I say that, but sometimes I swear…). I don’t have to take the compiler to one side and offer comforting words because its cat died. I don’t have to worry about hurting the compiler’s feelings because I made the same mistake for the hundredth time: “yes, of course I’m listening to you; no, I’m not just ignoring you. Of course I value your opinions, dear.
But seriously, this is definitely an IList of TFoo!”

So it’s no surprise that among the great variety of programmers you meet, some are extrovert characters who relish the social, human side of working in a team of people building software, while others are introvert characters who relish the quiet, private, intellectual challenge of crafting an elegant solution to a fiendish problem.

And so to pairing: any team will end up with a mixture of characters. The extroverts will tend to enjoy pairing, while the introverts will tend to find it harder and seek to avoid it. This isn’t necessarily a question of education or persuasion; the benefits are relatively intangible, and more introverted developers may find the whole process less enjoyable than working solo. It sounds trite, but happy developers are productive developers. There’s no point doing anything that makes some of your peers unhappy.

All teams need to agree rules. For example, some people like eating really smelly food in an open-plan office. Good teams tend to agree rules about this kind of behaviour; everyone agrees that small sacrifices for an individual make a big difference to team harmony. However, how do you resolve a difference of opinion about pairing? As a team decision, pairing is a bit all or nothing. Either we agree to pair on everything, so there’s no code ownership, pairs rotate regularly and we learn from each other; or we don’t, and we each become responsible for our own dominion. We can’t agree that those who want to pair will go into the pairing room so as not to upset everyone else.

One option is to simply require that everyone on your team has to love pairing. I don’t know about you: hiring good people is hard. The last thing I want to do is start excluding people who could otherwise be productive. Isn’t it better to at least have somebody doing something, even if they’re not pairing? Another option is to force developers to pair, even if they find it difficult or uncomfortable.
But is that really going to be productive? Building resentment and unhappiness is not going to create a high-performance team. Of course, the other extreme is just as likely to cause upset: if you stop all pairing, then those who want to pair will feel resentful and unhappy.

And what about the middle ground? Can you have a team where some people pair while others work on their own? It seems inevitable that Conway’s law will come into play: the structure of the software will reflect the structure of the team. It’s very difficult for there to be overlap between developers working on their own and developers who are pairing, for exactly the same reason it’s difficult for a group of individual developers to overlap on the same area of code at the same time: you’ll necessarily introduce some architectural boundary to ease coordination. This means you still end up with a collection of silos, some owned by individual developers, some owned by a group of developers. Does this give you the best compromise? Or the worst of both worlds?

What’s your experience? What have you tried? What worked, what didn’t?

Reference: Is pairing for everybody? from our JCG partner David Green at the Actively Lazy blog....

jinfo: Command-line Peeking at JVM Runtime Configuration

In several recent blogs (in my reviews of the books Java EE 7 Performance Tuning and Optimization and WildFly Performance Tuning in particular), I have referenced my own past blog posts on certain Oracle JDK command-line tools. I was aghast to discover that I had never exclusively addressed the nifty jinfo tool, and this post sets out to rectify that troubling situation. I suspect that the reasons I chose not to write about jinfo previously include limitations related to jinfo discussed in my post VisualVM: jinfo and So Much More.

In the Java SE 8 version of jinfo running on my machine, the primary limitation of jinfo on Windows that I discussed in the post Acquiring JVM Runtime Information has been addressed. In particular, I noted in that post that the -flags option was not supported in the Windows version of jinfo at that time. As the next screen snapshot proves, that is no longer the case (note the use of jps to acquire the Java process ID to instruct jinfo to query).

As the above screen snapshot demonstrates, the jinfo -flags command shows the explicitly specified JVM options of the Java process being monitored. If I want to find out about other JVM flags that are in effect implicitly (automatically), I can run java -XX:+PrintFlagsFinal to see all default JVM options. I can then query for any one of these against a running JVM process to find out what that particular JVM is using (the same default or an overridden value). The next screen snapshot demonstrates a small portion of the output provided by running java -XX:+PrintFlagsFinal.

Let’s suppose I notice a flag called PrintHeapAtGC in the above output and want to know if it’s set in my particular Java application (-XX:+PrintHeapAtGC means it’s set and -XX:-PrintHeapAtGC means it’s not set).
I can have jinfo tell me what its setting is (note my choice to use jcmd instead of jps in this case to determine the Java process ID). Because of the subtraction sign (-) instead of an addition sign (+) after the colon and before “PrintHeapAtGC”, we know this is turned off for the Java process with the specified ID.

It turns out that jinfo does more than let us look; it also lets us touch. The next screen snapshot shows changing this option using jinfo. As that snapshot indicates, I can turn boolean-style JVM options off and on by simply using the same command that views the flag’s setting, but preceding the flag’s name with the addition sign (+) to turn it on or with the subtraction sign (-) to turn it off. In the example just shown, I turned off PrintGCDateStamps, turned it back on again, and monitored its setting between those changes. Not all JVM options are boolean conditions. In those cases, their new values are assigned by concatenating the equals sign (=) and the new value after the flag name. It’s also important to note that the target JVM (the one you’re trying to peek at and touch with jinfo) will not allow you to change all of its JVM option settings. In such cases, you’ll likely see a stack trace with the message “Command failed in target VM.”

In addition to displaying a currently running JVM’s options and allowing some of them to be changed, jinfo also allows one to see the system properties used by that JVM as name/value pairs. This is demonstrated in the next screen snapshot, with a small fraction of the output shown. Perhaps the easiest way to run jinfo is to simply provide no arguments other than the PID of the Java process in question and have both JVM options (non-default and command-line) and system properties displayed. Running jinfo -help provides brief usage details. Other important details are found in the Oracle documentation on the jinfo tool.
These details include the common (when it comes to these tools) reminder that this tool is “experimental and unsupported” and “might not be available in future releases of the JDK.” We are also warned that jinfo on Windows requires the availability of dbgeng.dll or the installed Debugging Tools For Windows.

Although I have referenced the handy jinfo command-line tool previously in the posts VisualVM: jinfo and So Much More and Acquiring JVM Runtime Information, it is a handy enough tool to justify a post of its very own. As a command-line tool, it enjoys the benefits commonly associated with command-line tools, such as being relatively lightweight, working well with scripts, and working in headless environments.

Reference: jinfo: Command-line Peeking at JVM Runtime Configuration from our JCG partner Dustin Marx at the Inspired by Actual Events blog....
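As an aside not covered in the post above: on HotSpot JVMs, the same peek-and-touch that jinfo performs from the outside is available in-process through the HotSpotDiagnosticMXBean. The sketch below uses HeapDumpOnOutOfMemoryError purely as an illustrative flag, chosen because it is a writeable ("manageable") flag available across JDK versions:

```java
import java.lang.management.ManagementFactory;

import com.sun.management.HotSpotDiagnosticMXBean;
import com.sun.management.VMOption;

public class VmOptionPeek {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean hotspot =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);

        // Peek: read a flag's current value, much as
        // "jinfo -flag HeapDumpOnOutOfMemoryError <pid>" would from outside.
        VMOption option = hotspot.getVMOption("HeapDumpOnOutOfMemoryError");
        System.out.println(option.getName() + " = " + option.getValue());

        // Touch: manageable flags can be changed at runtime, like
        // "jinfo -flag +HeapDumpOnOutOfMemoryError <pid>".
        hotspot.setVMOption("HeapDumpOnOutOfMemoryError", "true");
        System.out.println(hotspot.getVMOption("HeapDumpOnOutOfMemoryError").getValue());
    }
}
```

As with jinfo itself, only writeable flags can be changed this way; attempting to set a non-manageable flag throws an exception rather than failing silently.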

My Favorite IntelliJ IDEA Features

I have been a long-time user (and customer) of IntelliJ IDEA. I think I started using it around 2005 or 2006, version 5.0 at the time. I was an Eclipse user back then. A few of my colleagues recommended it to me, and at first I was not convinced, but after trying it out I was impressed. Now in 2014, IntelliJ IDEA is still my IDE of choice. The intent of this post is not to start an IDE war, but to focus on a few IDEA features that sometimes other IDEA users are not aware of.

Darcula Theme

The Darcula theme changes your user interface to a dark look and feel. Well, maybe this is nothing new for you, but I would like to point out two major advantages. First, it causes much less stress to your eyes. Give it a try! After a few hours using the dark look, if you switch to the default one again you’re probably going to feel your eyes burning for a few minutes. Second, if you’re a mobility addict and you’re always running on battery, the dark look can also help your battery last longer.

Postfix Completion

Postfix completion is the feature that I always wanted and didn’t even know it. Postfix completion allows you to change already typed expressions. How many times have all of us cursed at having to go back and add a missing cast? Or because we actually wanted to System.out the expression? Well, postfix completion fixes that. For instance, for the System.out, you type the expression:

someVar

You can now type:

someVar.sout

and the expression is transformed to:

System.out.println(someVar);

Check this awesome post on the IntelliJ blog for additional information about postfix completion.

Frameworks and Technologies Support

In the Java world, you have a lot of frameworks and technologies available. Most likely you will come across many of them in your developer work. Sometimes it’s a nightmare to deal with the additional layer and the required configuration for everything to work correctly.
Look at Maven, for instance: it’s a pain to find which dependency to import when you need a class. IDEA’s Maven support allows you to search for the class in your local repository and add the correct dependency to your pom.xml file. Just type the name of the class, press Alt + Enter, and choose Add Maven Dependency. Pick the library you need, and it’s added automatically to your pom.xml. You have support for Java EE, Spring, GWT, Maven and many others. Check here for a full list.

Inject Language

With Inject Language, it’s possible to have syntax and error highlighting plus code completion for a large number of languages inside String literals. I use GWT a lot, and this allows me to write safe HTML into the String HTML parameters of the API. Other examples include SQL, CSS, Javascript, Groovy, Scala and many others. Try it out for yourself by pressing Alt + Enter on a String statement and then choosing Inject Language.

Presentation Mode

Did you ever have the need to make a presentation about code using your IDE, with the audience unable to see it properly due to the font size? Then you have to interrupt your presentation to adjust it, and sometimes you don’t even remember where to adjust it. Wouldn’t it be easier to just have a dedicated presentation mode? Just go to the View menu and then the Enter Presentation Mode option.

Conclusion

I do believe that choosing an IDE is a matter of personal preference, and you should stick with the one where you feel more productive for the task you have to complete. I still use Eclipse when I have to deal with BPM stuff. Some of these features also exist in other IDEs, but I have the impression, from chatting with other developers, that they don’t know about their existence. Explore your development environment and I’m pretty sure you will learn something new. I’m always learning new stuff in IntelliJ IDEA.

Reference: My Favorite IntelliJ IDEA Features from our JCG partner Roberto Cortez at the Roberto Cortez Java Blog blog....

5 Things I’ve learnt being a scrum master

I’ve been a scrum master now for about 6 months. Having been involved in scrum previously as a product owner, as well as a developer, moving into this role has really opened my eyes to some of the more political and arguably awkward elements of trying to get rid of impediments.

Stay calm when others aren’t

Something that I think is really key about being a scrum master: you have to be thick-skinned. You have to not only push back, but when people whine at you and bring office politics into play, it’s vital that you remember exactly what the end game is: to meet the sprint goal! I should also point out that this doesn’t mean ruling with an iron fist. As a scrum master, you still succeed with the team and fail with the team. You can tell when something isn’t working, and using an agile methodology shouldn’t be a painful process; it should be motivating.

You are neither a manager, nor just a developer

It’s quite an interesting position to be in, as from my experience it’s not unrealistic to get your hands dirty in some code while being a scrum master. You have to be strong enough to fend off people poaching your team members, even if they’re higher up the food chain than yourself. You aren’t “in charge” of the team, but you do have a responsibility to push back on poachers.

Don’t be afraid to say “no”

If your product owner is telling you to put 60 points into a sprint when you know the velocity you’ve been hitting for the past 4 sprints has consistently been 40 points, don’t be afraid to say “what you’re asking is unattainable”. It’s much better to be honest early on and push back. Blindly saying yes, promising to the customer and then having to deal with the consequences later on isn’t where anyone wants to be.

Make sure stand-ups are to the point

This might be like telling grandma to suck eggs, but it’s vital that stand-ups really are short, sharp and to the point.
There’s nothing worse than listening to a stand-up where everyone in the back of their mind is thinking “I wish s/he’d just get to the point!”. This is a situation where you have to stick to your guns, and if people get offended, they get offended. You have to tell the person: “we don’t need to know any of the extra detail, we just need to know, at a high level, what it is you’re going to achieve today, whether you achieved what you set out to do yesterday and, importantly, whether you have any impediments”.

Keep things moving

Sometimes things in a sprint can get stale; tasks can get stuck in one state, and you need to keep them moving! As an example, if there’s a task that’s been coded for half a day but not released to testing, find out why. You never know: the CI server might be down, or there might be a problem releasing, and you need to get it out before it becomes an impediment. Keeping the task board fresh and a true representation of what’s actually happening in your sprint can really boost morale if you know there’s only a few tasks left and you’re literally “sprinting” to the finish!

Reference: 5 Things I’ve learnt being a scrum master from our JCG partner David Gray at the Code Mumble blog....

Test Attribute #10 – Isolation

This is the final, 10th entry in the ten commandments of test attributes that started here. And you should read all of them.

We usually talk about isolation in terms of mocking. Meaning, when we want to test our code, and the code has dependencies, we use mocking to fake those dependencies and allow us to test the code in isolation. That’s code isolation. But test isolation is different. An isolated test can run alone, in a suite, in any order, independent from the other tests, and give consistent results. We’ve already identified in footprint the different environment dependencies that can affect the result, and of course the tested code has something to do with it. Other tests can also create dependency, directly or not. In fact, sometimes we may be relying on the order of tests.

To give an example, I summon the witness for the prosecution: the Singleton. Here’s some basic code using a singleton:

public class Counter
{
    private static Counter instance;
    private int count = 0;

    public static void Init()
    {
        instance = new Counter();
    }

    public static Counter GetInstance()
    {
        return instance;
    }

    public int GetValue()
    {
        return count++;
    }
}

Pretty simple: the static instance is initialized in a call to Init. We can write these tests:

[TestMethod]
public void CounterInitialized_WorksInIsolation()
{
    Counter.Init();
    var result = Counter.GetInstance().GetValue();
    Assert.AreEqual(0, result);
}

[TestMethod]
public void CounterNotInitialized_ThrowsInIsolation()
{
    var result = Counter.GetInstance().GetValue();
    Assert.AreEqual(1, result);
}

Note that the second test passes when running after the first. But if you run it alone it crashes, because the instance is not initialized. Of course, that’s the kind of thing that gives singletons a bad name. And now you need to jump through hoops in order to check the second case. By the way, we’re not just relying on the order of the tests; we’re relying on the way the test runner runs them.
It could be in the order we’ve written them, but not necessarily. While singletons mostly appear in the tested code, test dependency can occur because of the tests themselves. As long as you keep state in the test class, including mocking operations, there’s a chance that you’re depending on the order of the run.

Do you know this trick?

public class MyTests : BaseTest
{
    // ...

Why not put all common code in a base class, then derive the test class from it? Well, apart from making readability suffer and debugging excruciating, we now have all kinds of test setup and behavior located in another, shared place. It may be that the test itself does not suffer interference from other tests, but we’re introducing this risk by putting shared code in the base class. Plus, you’ll need to know more about initialization order. And what if the base class is using a singleton? Antics ensue.

Test isolation issues show themselves very easily, because once they are out of order (ha-ha), you’ll get the red light. The problem is identifying the cause, because it may seem like an “irreproducible problem”. In order to avoid isolation problems:

Check the code. If you can identify patterns of usage like a singleton, be aware of that and put it to use: either initialize the singleton before the whole run, or reset it before every test.
Rearrange. If there are additional dependencies (like our counter increase), start thinking about rearranging the tests. Because of the way the code is written, you’re starting to test more than just small operations.
Don’t inherit. Test base classes create interdependence and hurt isolation.
Mock. Use mocking to control any shared dependency.
Clean up. Make sure that tests clean up after themselves, or instead reset state before every test.

Isolation issues in tests are very annoying because, especially in unit tests, they can be easily avoided.
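The “initialize the singleton before the whole run, or reset it before every test” advice can be sketched as follows. This is a Java rendering of the post’s C# Counter (the class and method names here are mine, and a plain main stands in for a test runner; the structure, not the framework, is the point):

```java
public class CounterIsolationDemo {

    // Java twin of the C# Counter singleton above.
    static class Counter {
        private static Counter instance;
        private int count = 0;

        static void init() { instance = new Counter(); }
        static Counter getInstance() { return instance; }
        int getValue() { return count++; }
    }

    // Each "test" resets the singleton first, so it no longer depends
    // on another test (or on the runner's ordering) to set up its state.
    static void counterInitialized_startsAtZero() {
        Counter.init();                        // per-test reset
        if (Counter.getInstance().getValue() != 0) {
            throw new AssertionError("expected 0");
        }
    }

    static void counterIncrements_afterFirstRead() {
        Counter.init();                        // per-test reset
        Counter.getInstance().getValue();      // first read returns 0
        if (Counter.getInstance().getValue() != 1) {
            throw new AssertionError("expected 1");
        }
    }

    public static void main(String[] args) {
        // Run in the "wrong" order on purpose: both still pass.
        counterIncrements_afterFirstRead();
        counterInitialized_startsAtZero();
        System.out.println("both tests pass, in any order");
    }
}
```

With the per-test reset in place, neither test cares which one the runner executes first, which is exactly the isolation property described above.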
Know the code, understand the dependencies, and never rely on another test to set up the state needed for the current one.

Reference: Test Attribute #10 – Isolation from our JCG partner Gil Zilberfeld at the Geek Out of Water blog....

Java Concurrency Tutorial – Atomicity and race conditions

Atomicity is one of the key concepts in multi-threaded programs. We say a set of actions is atomic if they all execute as a single operation, in an indivisible manner. Taking for granted that a set of actions in a multi-threaded program will be executed serially may lead to incorrect results. The reason is thread interference: if two threads execute several steps on the same data, they may overlap. The following Interleaving example shows two threads executing several actions (prints in a loop) and how they are overlapped:

public class Interleaving {
    public void show() {
        for (int i = 0; i < 5; i++) {
            System.out.println(Thread.currentThread().getName() + " - Number: " + i);
        }
    }

    public static void main(String[] args) {
        final Interleaving main = new Interleaving();
        Runnable runner = new Runnable() {
            @Override
            public void run() {
                main.show();
            }
        };
        new Thread(runner, "Thread 1").start();
        new Thread(runner, "Thread 2").start();
    }
}

When executed, it will produce unpredictable results. As an example:

Thread 2 - Number: 0
Thread 2 - Number: 1
Thread 2 - Number: 2
Thread 1 - Number: 0
Thread 1 - Number: 1
Thread 1 - Number: 2
Thread 1 - Number: 3
Thread 1 - Number: 4
Thread 2 - Number: 3
Thread 2 - Number: 4

In this case, nothing wrong happens since they are just printing numbers. However, when you need to share the state of an object (its data) without synchronization, this leads to the presence of race conditions.

Race condition

Your code has a race condition if there’s a possibility of producing incorrect results due to thread interleaving. This section describes two types of race conditions:

Check-then-act
Read-modify-write

To remove race conditions and enforce thread safety, we must make these actions atomic by using synchronization. Examples in the following sections will show what the effects of these race conditions are.
Check-then-act race condition

This race condition appears when you have a shared field and expect to serially execute the following steps:

1. Get a value from a field.
2. Do something based on the result of the previous check.

The problem here is that when the first thread is about to act after the check, another thread may have interleaved and changed the value of the field. Now, the first thread will act based on a value that is no longer valid. This is easier to see with an example.

UnsafeCheckThenAct is expected to change the field number once; subsequent calls to the changeNumber method should execute the else branch:

public class UnsafeCheckThenAct {
    private int number;

    public void changeNumber() {
        if (number == 0) {
            System.out.println(Thread.currentThread().getName() + " | Changed");
            number = -1;
        } else {
            System.out.println(Thread.currentThread().getName() + " | Not changed");
        }
    }

    public static void main(String[] args) {
        final UnsafeCheckThenAct checkAct = new UnsafeCheckThenAct();
        for (int i = 0; i < 50; i++) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    checkAct.changeNumber();
                }
            }, "T" + i).start();
        }
    }
}

But since this code is not synchronized, it may (there's no guarantee) result in several modifications of the field:

T13 | Changed
T17 | Changed
T35 | Not changed
T10 | Changed
T48 | Not changed
T14 | Changed
T60 | Not changed
T6 | Changed
T5 | Changed
T63 | Not changed
T18 | Not changed

Another example of this race condition is lazy initialization. A simple way to correct this is to use synchronization. SafeCheckThenAct is thread-safe because it has removed the race condition by synchronizing all accesses to the shared field.
public class SafeCheckThenAct {
    private int number;

    public synchronized void changeNumber() {
        if (number == 0) {
            System.out.println(Thread.currentThread().getName() + " | Changed");
            number = -1;
        } else {
            System.out.println(Thread.currentThread().getName() + " | Not changed");
        }
    }

    public static void main(String[] args) {
        final SafeCheckThenAct checkAct = new SafeCheckThenAct();
        for (int i = 0; i < 50; i++) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    checkAct.changeNumber();
                }
            }, "T" + i).start();
        }
    }
}

Now, executing this code will always produce the same expected result; only a single thread will change the field:

T0 | Changed
T54 | Not changed
T53 | Not changed
T62 | Not changed
T52 | Not changed
T51 | Not changed
...

In some cases, there will be other mechanisms which perform better than synchronizing the whole method, but I won’t discuss them in this post.

Read-modify-write race condition

Here we have another type of race condition, which appears when executing the following set of actions:

1. Fetch a value from a field.
2. Modify the value.
3. Store the new value to the field.

In this case, there’s another dangerous possibility: the loss of some updates to the field. One possible outcome is:

Field’s value is 1.
Thread 1 gets the value from the field (1).
Thread 1 modifies the value (5).
Thread 2 reads the value from the field (1).
Thread 2 modifies the value (7).
Thread 1 stores the value to the field (5).
Thread 2 stores the value to the field (7).

As you can see, the update with the value 5 has been lost. Let’s see a code sample.
UnsafeReadModifyWrite shares a numeric field which is incremented each time:

public class UnsafeReadModifyWrite {
    private int number;

    public void incrementNumber() {
        number++;
    }

    public int getNumber() {
        return this.number;
    }

    public static void main(String[] args) throws InterruptedException {
        final UnsafeReadModifyWrite rmw = new UnsafeReadModifyWrite();
        for (int i = 0; i < 1_000; i++) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    rmw.incrementNumber();
                }
            }, "T" + i).start();
        }
        Thread.sleep(6000);
        System.out.println("Final number (should be 1_000): " + rmw.getNumber());
    }
}

Can you spot the compound action which causes the race condition? I’m sure you did, but for completeness I will explain it anyway. The problem is in the increment (number++). This may appear to be a single action, but in fact it is a sequence of three actions (get, increment, write). When executing this code, we may see that we have lost some updates:

2014-08-08 09:59:18,859|UnsafeReadModifyWrite|Final number (should be 10_000): 9996

Depending on your computer, it may be very difficult to reproduce this update loss, since there’s no guarantee on how threads will interleave. If you can’t reproduce the above example, try UnsafeReadModifyWriteWithLatch, which uses a CountDownLatch to synchronize the threads’ start and repeats the test a hundred times. You should probably see some invalid values among the results:

Final number (should be 1_000): 1000
Final number (should be 1_000): 1000
Final number (should be 1_000): 1000
Final number (should be 1_000): 997
Final number (should be 1_000): 999
Final number (should be 1_000): 1000
Final number (should be 1_000): 1000
Final number (should be 1_000): 1000
Final number (should be 1_000): 1000
Final number (should be 1_000): 1000
Final number (should be 1_000): 1000

This example can be solved by making all three actions atomic.
SafeReadModifyWriteSynchronized uses synchronization in all accesses to the shared field:

public class SafeReadModifyWriteSynchronized {
    private int number;

    public synchronized void incrementNumber() {
        number++;
    }

    public synchronized int getNumber() {
        return this.number;
    }

    public static void main(String[] args) throws InterruptedException {
        final SafeReadModifyWriteSynchronized rmw = new SafeReadModifyWriteSynchronized();
        for (int i = 0; i < 1_000; i++) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    rmw.incrementNumber();
                }
            }, "T" + i).start();
        }
        Thread.sleep(4000);
        System.out.println("Final number (should be 1_000): " + rmw.getNumber());
    }
}

Let’s see another way to remove this race condition. In this specific case, since the field number is independent of other variables, we can make use of atomic variables. SafeReadModifyWriteAtomic uses an atomic variable to store the value of the field:

import java.util.concurrent.atomic.AtomicInteger;

public class SafeReadModifyWriteAtomic {
    private final AtomicInteger number = new AtomicInteger();

    public void incrementNumber() {
        number.getAndIncrement();
    }

    public int getNumber() {
        return this.number.get();
    }

    public static void main(String[] args) throws InterruptedException {
        final SafeReadModifyWriteAtomic rmw = new SafeReadModifyWriteAtomic();
        for (int i = 0; i < 1_000; i++) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    rmw.incrementNumber();
                }
            }, "T" + i).start();
        }
        Thread.sleep(4000);
        System.out.println("Final number (should be 1_000): " + rmw.getNumber());
    }
}

Following posts will further explain mechanisms like locking and atomic variables.

Conclusion

This post explained some of the risks implied when executing compound actions in non-synchronized multi-threaded programs.
To enforce atomicity and prevent thread interleaving, one must use some type of synchronization. You can take a look at the source code on GitHub.

Reference: Java Concurrency Tutorial - Atomicity and race conditions from our JCG partner Xavier Padro at the Xavier Padró's Blog blog....

A Wonderful SQL Feature: Quantified Comparison Predicates (ANY, ALL)

Have you ever wondered about the use case behind SQL's ANY (also: SOME) and ALL keywords? You have probably not yet encountered these keywords in the wild, yet they can be extremely useful. But first, let's see how they're defined in the SQL standard. The easy part:

8.7 <quantified comparison predicate>

Function
Specify a quantified comparison.

Format
<quantified comparison predicate> ::=
    <row value constructor> <comp op> <quantifier> <table subquery>

<quantifier> ::= <all> | <some>
<all>        ::= ALL
<some>       ::= SOME | ANY

Intuitively, such a quantified comparison predicate can be used as such:

-- Is any person of age 42?
42 = ANY (SELECT age FROM person)

-- Are all persons younger than 42?
42 > ALL (SELECT age FROM person)

Let's stick with the useful ones. Observe that you have probably written the above queries with a different syntax, as such:

-- Is any person of age 42?
42 IN (SELECT age FROM person)

-- Are all persons younger than 42?
42 > (SELECT MAX(age) FROM person)

In fact, you've used the <in predicate>, or a greater-than predicate with a <scalar subquery> and an aggregate function.

The IN predicate

It's not a coincidence that you might have used the <in predicate> just like the above <quantified comparison predicate> using ANY. In fact, the <in predicate> is specified just like that:

8.4 <in predicate>

Syntax Rules
2) Let RVC be the <row value constructor> and let IPV be the <in predicate value>.
3) The expression
       RVC NOT IN IPV
   is equivalent to
       NOT ( RVC IN IPV )
4) The expression
       RVC IN IPV
   is equivalent to
       RVC = ANY IPV

Precisely! Isn't SQL beautiful? Note that the implicit consequences of 3) lead to a very peculiar behaviour of the NOT IN predicate with respect to NULL, which few developers are aware of.

Now, it's getting awesome

So far, there is nothing out of the ordinary with these <quantified comparison predicates>. All of the previous examples can be emulated with "more idiomatic", or let's say "more everyday", SQL.
But the true awesomeness of the <quantified comparison predicate> appears only when used in combination with a <row value expression>, where rows have a degree / arity of more than one:

-- Is any person called "John" of age 42?
(42, 'John') = ANY (SELECT age, first_name FROM person)

-- Are all persons younger than 55?
-- Or if they're 55, do they all earn less than 150'000.00?
(55, 150000.00) > ALL (SELECT age, wage FROM person)

See the above queries in action on PostgreSQL in this SQLFiddle. At this point, it is worth mentioning that few databases actually support:

- row value expressions, or
- quantified comparison predicates with row value expressions

Even though they were specified in SQL-92, it looks as if most databases are still taking their time to implement this feature, 22 years later.

Emulating these predicates with jOOQ

But luckily, there is jOOQ to emulate these features for you. Even if you're not using jOOQ in your project, the following SQL transformation steps can be useful if you want to express the above predicates. Let's have a look at how this could be done in MySQL:

-- This predicate
(42, 'John') = ANY (SELECT age, first_name FROM person)

-- ... is the same as this:
EXISTS (
  SELECT 1 FROM person
  WHERE age = 42
  AND first_name = 'John'
)

What about the other predicate?

-- This predicate
(55, 150000.00) > ALL (SELECT age, wage FROM person)

-- ... is the same as these:
----------------------------

-- No quantified comparison predicate with
-- row value expressions available
(55, 150000.00) > (
  SELECT age, wage FROM person
  ORDER BY 1 DESC, 2 DESC
  LIMIT 1
)

-- No row value expressions available at all
NOT EXISTS (
  SELECT 1 FROM person
  WHERE (55 < age)
  OR (55 = age AND 150000.00 <= wage)
)

Clearly, the EXISTS predicate can be used in pretty much every database to emulate what we've seen before. If you just need this for a one-shot emulation, the above examples will be sufficient.
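The NOT EXISTS emulation works because a row value comparison is lexicographic. A small sketch in Java (my own illustration, not from the article; the sample rows are made up) checks that the expanded predicate matches the row value comparison on some (age, wage) pairs:

```java
public class RowValueComparison {
    // SQL's row value comparison (x1, x2) > (y1, y2) is lexicographic:
    // the first fields decide, the second fields break ties.
    static boolean rowGreater(double x1, double x2, double y1, double y2) {
        return x1 > y1 || (x1 == y1 && x2 > y2);
    }

    // The predicate inside NOT EXISTS must flag exactly the rows for
    // which the row value comparison (55, 150000.00) > (age, wage) fails.
    static boolean failsComparison(double age, double wage) {
        return 55 < age || (55 == age && 150000.00 <= wage);
    }

    public static void main(String[] args) {
        // hypothetical (age, wage) rows covering the interesting cases
        double[][] persons = { {30, 200000}, {55, 150000}, {55, 100000}, {60, 50000} };
        boolean equivalent = true;
        for (double[] p : persons) {
            // row is "greater" exactly when the violation predicate is false
            equivalent &= rowGreater(55, 150000.00, p[0], p[1]) == !failsComparison(p[0], p[1]);
        }
        System.out.println("equivalence holds: " + equivalent);
    }
}
```

The inner predicate matches precisely the rows for which the comparison fails, so wrapping it in NOT EXISTS asserts that the comparison holds for every row.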
If, however, you want to use <row value expression> and <quantified comparison predicate> more formally, you had better get SQL transformation right. Read on about SQL transformation in this article.

Reference: A Wonderful SQL Feature: Quantified Comparison Predicates (ANY, ALL) from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....

The most important factor in software decay

Do you have big balls of mud? Here’s an experiment to amaze your friends. You probably listen to music on your phone via some sort of headset. The headset we shall consider here consists of two earbuds (in-ear pieces, rather than head-phones which cover the ears) connected via wires to a jack which plugs into the phone itself. Disconnect your headset from your phone. Show it to your friends. Figure 1 shows a typical example of such a headset.      As you hold the headset wires twixt thumb and finger, shake them about until your friends are bored to tears. Note that the wires may temporarily become tangled with one another but invariably return to the untangled state of figure 1. Now, carefully fold the headset into your trouser pocket and take a long walk, dragging a friend with you to witness that you do not touch the headset further. Finally, return to your friends and carefully extract the headset from your trouser pocket. TA-DA!! The wires will have mysteriously collapsed into a tangled disordered mess of kinks, knots and twists of the most bewildering complexity and inventiveness – all without your ever having touched them! Figure 2 shows a sorry headset fished from a trouser pocket.  Don’t tell anyone, but here’s the secret … Why does this happen? Why don’t the wires remain untangled? To answer this, we must look at the three actors that take to the stage in both scenarios, scenarios hereafter named the, “Out-of-pocket,” and the, “In-pocket,” scenarios. First is thermal fluctuation. This refers to the shaking of the headset, both explicitly by jiggling it up and down in the out-of-pocket scenario, and by the slower action of the stride in the in-pocket scenario. Both mechanisms subject the headset to pulses of energy causing its parts to move at random. Second is gravity. 
Gravity tends to pull all parts of the headset down and as the jack and earbuds reside at the wire ends (and as the headset is held roughly half-way along its length in figure 1) then the earbuds and jack tend to position themselves towards the bottom of figure 1. Third is spatial extension. Perhaps the greatest difference between the two scenarios is the size of the arena in which they operate. In the out-of-pocket scenario, the holding aloft of the headset allows gravity to stretch the headset wires to a great extent. Knot-formation relies on two sections of wire coming into contact. Such contacts simply become less probable with increasing volume. In the confined space of the in-pocket scenario, with wire pressed on wire, knots become far more likely. (Friction also plays a role, with wind resistance easily overcome by gravity in the out-of-pocket scenario but with the cloth-on-cloth surface resistance of the in-pocket scenario holding wires in contact for longer than might otherwise occur, again increasing the probability of knotting.) Thus, consider the headset in the out-of-pocket scenario. Tugging on the headset will cause it to move, introducing several new bends and twists in the wires. Throughout, however, gravity constantly pulls the earbuds and wires downwards from the point at which they are held, so that as the energy of the tug dissipates gravity will already have begun ironing out the loose bends. In the in-pocket scenario, however, each stride will deliver a weak energy impulse to the headset, enough to move the headset around just a tiny amount. Confined within the pocket volume and unsupported at any point from above, the wires do not stretch out under the influence of gravity but may even pool at the pocket's bottom, where they will writhe over one another, producing optimal knot-formation conditions. Despite the tediousness of such a description, we can view these events from another perspective: the perspective of microstate versus macrostate.
Now things become a little more interesting. Microstate and macrostate. We can think of a, "Microstate," of the headset not, as might be expected, as the state of a small part of the headset, but rather as its complete configuration: a precise description, at a single instant, of the position of its earbuds, jack, and of the entire length and disposition of its wires. In contrast to this detail, a, "Macrostate," is a broad and undetailed description of the headset. For reasons that shall be explained in a moment, the, "Order," of the headset interests us most – that is, how messy and tangled it appears – and hence we shall use the least amount of information possible to describe this order. We shall use the single bit of information – yes or no – that answers the question, "Does this headset look ordered?" We can say that from an initial microstate of the headset, A, only a certain set of microstates lies thermally accessible in that such microstates deviate from A to a degree allowable by the upcoming energy impulse. Given the randomness of this energy impulse, there is an equal probability of entering any one of those accessible microstates; let us say that the system happens to transition from microstate A to microstate B. For the out-of-pocket scenario, gravity constantly tries to pull the headset back from microstate B to microstate A (or something like it), so the set of thermally accessible microstates available from microstate B will be slightly biased because gravity will prevent the system reaching states as far from B as B is from A. For the in-pocket scenario, however, once the headset arrives in microstate B then the new set of accessible microstates will contain as many microstates that move back towards A as move away from it. And given that the choice of the new microstate is random, the in-pocket scenario will allow the headset to enter many microstates that are inaccessible to the out-of-pocket scenario.
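The biased versus free exploration described above can be made concrete with a toy simulation (my own illustration, not from the original article). Model the headset's distance from the ordered state as a single integer: each impulse nudges it up or down at random, and in the out-of-pocket case a gravity-like pull immediately draws it back towards zero:

```java
import java.util.Random;

public class MicrostateWalk {
    public static void main(String[] args) {
        Random impulses = new Random(42); // fixed seed for a reproducible run
        int inPocket = 0, outOfPocket = 0;
        int maxIn = 0, maxOut = 0;
        for (int step = 0; step < 100_000; step++) {
            // in-pocket: every accessible microstate is equally likely to be kept
            inPocket += impulses.nextBoolean() ? 1 : -1;
            maxIn = Math.max(maxIn, Math.abs(inPocket));
            // out-of-pocket: the same kind of impulse...
            outOfPocket += impulses.nextBoolean() ? 1 : -1;
            maxOut = Math.max(maxOut, Math.abs(outOfPocket));
            // ...but gravity irons the deviation straight back out
            if (outOfPocket > 0) outOfPocket--;
            else if (outOfPocket < 0) outOfPocket++;
        }
        System.out.println("out-of-pocket max deviation: " + maxOut); // always 1
        System.out.println("in-pocket max deviation: " + maxIn);
        System.out.println("in-pocket explored further: " + (maxIn > maxOut));
    }
}
```

With the restoring pull, the walk never strays more than one step from the ordered state; the free walk typically wanders hundreds of steps away. The pull plays the role of gravity, and the unrestrained walk plays the role of the in-pocket headset randomly exploring its microstates.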
Imagine for a moment that you could count all the microstates in which the headset could find itself, that is, you could take a snap-shot of every possible position that the headset wires could assume. If we focus on just one wire, we might say that the wire looks ordered when it forms a straight line: it contains 0 bends. It still looks ordered with 1 bend, or perhaps 2, or 100. But above a certain number of bends it begins to look disordered. This simply accords with a casual understanding of that term. Yet how many bends can a wire support? Perhaps thousands. Or tens of thousands. Or millions. The point is that the vast majority of microstates of the wire will be what we call, “Disordered,” and only a tiny proportion will be, “Ordered.” Thus it is not that there are fewer ways to make a disordered headset when it is held aloft than when it sits in a pocket, but that putting a headset in a pocket allows it to randomly explore a far larger number of its available microstates and as the vast majority of these microstates correspond to the disordered macrostate then the in-pocket headset is overwhelmingly likely to end up disordered. What on earth has this got to do with software? Software structure, too, exhibits this microstate/macrostate duality. The package structure of a Java program reveals packages connected to other packages by dependencies. This structure at any given time represents one microstate of the system. Programmers add features and make updates, acting as random thermal fluctuations, changing the package structure slightly, nudging the system into new microstates. (Programmers do not, of course, make random updates to a code-base in that they do not pick a text character at random from the source code and flip it. Nevertheless, no one can predict accurately in advance a year’s worth of updates in any significant code-base. 
It is in this sense – in this unpredictability and essential patternlessness – that the programmer’s coding can be modeled as a random input to the program.) Pulling back from the detail of the individual microstates, we evaluate the macrostate of the system by asking, “Does this program look ordered?” If ordered, we say the program is, “Well structured;” otherwise it is, “Poorly structured.” Small programs, with few inter-package dependencies, usually appear ordered. As programs grow, however, there seems inevitably to come a point when dependencies between packages become dense, overstretched and start looping back on themselves: such package structures truly merit the description, “Disordered.” Brian Foote and Joseph Yoder memorably labeled any such disordered system a Big Ball of Mud, claiming, “A big ball of mud is haphazardly structured, sprawling, sloppy, duct-tape and bailing wire, spaghetti code jungle.” They also claimed big balls of mud to be the de-facto software structure rather than the exception and our headset musings show why this may be so. With Java packages free to depend on one another without restriction, there are far more ways to produce a disordered set of inter-dependent packages than an ordered set of those same packages, so given the thermal fluctuation of programmer input the program will find itself propelled into a series of randomly selected microstates and as most microstates correspond to a disordered macrostate then most systems will end up disordered. Figure 3 shows the package structures of four programs depicted as four spoiklin diagrams in which each circle represents a package, each straight line a dependency from a package above to below and each curved line a dependency from a package below to above. Despite the rather miniaturized graphics, most programmers would consider only one of these programs to be well-structured.    
All of which would remain a mere aesthetic curiosity except for one crucial point: the structure of a program bears directly on the predictability of update costs and often on the actual costs themselves. Well-structured programs clearly show which packages use the services provided by other packages, thus simplifying prediction of the costs involved in any particular package change. Poorly structured programs, on the other hand, suffocate beneath such choking dependencies that even identifying the impacted packages that might stem from a particular change becomes a chore, clouding cost predictability and often causing that cost to grow disproportionately large. In figure 3, program B was designed using radial encapsulation, which allows package dependencies only in the direction of the top level package. Such a restriction mirrors the holding aloft of a headset, allowing gravity to stretch it out and keep it spontaneously ordered. Such a restriction makes the forming of a big ball of mud as improbable as a headset held aloft suddenly collapsing into a sequence of knots. The precise mechanism, however, is unimportant. What matters is that the program was designed according to systemic principles that aggressively restrict the number of microstates into which the program might wander whilst ensuring that those inhabitable microstates correspond to an ordered, well-structured macrostate. No one designs poorly structured programs to be poorly structured. Instead, programs become poorly structured because the structuring principles according to which they are built do not prevent them from becoming so. 
Perhaps programmers should ask not, "Why are big balls of mud the de-facto program structure?" but, "Why are good structuring principles ignored?"

Summary

Some find words like, "Thermodynamics," "Equilibrium," and, "Entropy," intimidating, so posts exploring these concepts should delay their introduction.

Reference: The most important factor in software decay from our JCG partner Edmund Kirwan at the A blog about software. blog....

Autoboxing, Unboxing, and NoSuchMethodError

J2SE 5 introduced numerous features to the Java programming language. One of these features is autoboxing and unboxing, a feature that I use almost daily without even thinking about it. It is often convenient (especially when used with collections), but every once in a while it leads to some nasty surprises, "weirdness," and "madness." In this blog post, I look at a rare (but interesting to me) case of NoSuchMethodError resulting from mixing classes compiled with Java versions before autoboxing/unboxing with classes compiled with Java versions that include autoboxing/unboxing.

The next code listing shows a simple Sum class that could have been written before J2SE 5. It has overloaded "add" methods that accept different primitive numeric data types, and each instance of Sum simply adds all the numbers provided to it via any of its overloaded "add" methods.

Sum.java (pre-J2SE 5 Version)

public class Sum {
   private double sum = 0;

   public void add(short newShort) {
      sum += newShort;
   }

   public void add(int newInteger) {
      sum += newInteger;
   }

   public void add(long newLong) {
      sum += newLong;
   }

   public void add(float newFloat) {
      sum += newFloat;
   }

   public void add(double newDouble) {
      sum += newDouble;
   }

   public String toString() {
      return String.valueOf(sum);
   }
}

Before unboxing was available, any clients of the above Sum class would need to provide primitives to these "add" methods or, if they had reference equivalents of the primitives, would need to convert the references to their primitive counterparts before calling one of the "add" methods. The onus was on the client code to do this conversion from reference type to corresponding primitive type before calling these methods. Examples of how this might be accomplished are shown in the next code listing.
No Unboxing: Client Converting References to Primitives

private static String sumReferences(
   final Long longValue, final Integer intValue, final Short shortValue) {
   final Sum sum = new Sum();
   if (longValue != null) {
      sum.add(longValue.longValue());
   }
   if (intValue != null) {
      sum.add(intValue.intValue());
   }
   if (shortValue != null) {
      sum.add(shortValue.shortValue());
   }
   return sum.toString();
}

J2SE 5's autoboxing and unboxing feature was intended to address this extraneous effort required in a case like this. With unboxing, client code could call the above "add" methods with reference types corresponding to the expected primitive types, and the references would be automatically "unboxed" to the primitive form so that the appropriate "add" methods could be invoked. Section 5.1.8 ("Unboxing Conversion") of The Java Language Specification explains which primitives the supplied numeric reference types are converted to in unboxing, and Section 5.1.7 ("Boxing Conversion") of that same specification lists the reference types that are autoboxed from each primitive in autoboxing.

In this example, unboxing reduced effort on the client's part in terms of converting reference types to their corresponding primitive counterparts before calling Sum's "add" methods, but it did not completely free the client from needing to process the number values before providing them. Because reference types can be null, it is possible for a client to provide a null reference to one of Sum's "add" methods and, when Java attempts to automatically unbox that null to its corresponding primitive, a NullPointerException is thrown. The next code listing adapts that from above to show how the conversion of reference to primitive is no longer necessary on the client side, but checking for null is still necessary to avoid the NullPointerException.
Unboxing Automatically Converts Reference to Primitive: Still Must Check for Null

private static String sumReferences(
   final Long longValue, final Integer intValue, final Short shortValue) {
   final Sum sum = new Sum();
   if (longValue != null) {
      sum.add(longValue);
   }
   if (intValue != null) {
      sum.add(intValue);
   }
   if (shortValue != null) {
      sum.add(shortValue);
   }
   return sum.toString();
}

Requiring client code to check their references for null before calling the "add" methods on Sum may be something we want to avoid when designing our API. One way to remove that need is to change the "add" methods to explicitly accept the reference types rather than the primitive types. Then, the Sum class could check for null before explicitly or implicitly (via unboxing) dereferencing it. The revised Sum class with this changed and more client-friendly API is shown next.

Sum Class with "add" Methods Expecting References Rather than Primitives

public class Sum {
   private double sum = 0;

   public void add(Short newShort) {
      if (newShort != null) {
         sum += newShort;
      }
   }

   public void add(Integer newInteger) {
      if (newInteger != null) {
         sum += newInteger;
      }
   }

   public void add(Long newLong) {
      if (newLong != null) {
         sum += newLong;
      }
   }

   public void add(Float newFloat) {
      if (newFloat != null) {
         sum += newFloat;
      }
   }

   public void add(Double newDouble) {
      if (newDouble != null) {
         sum += newDouble;
      }
   }

   public String toString() {
      return String.valueOf(sum);
   }
}

The revised Sum class is more client-friendly because it allows the client to pass a reference to any of its "add" methods without concern for whether the passed-in reference is null or not. However, changing the Sum class's API like this can lead to NoSuchMethodErrors if either class involved (the client class or one of the versions of the Sum class) is compiled with a different version of Java.
In particular, if the client code uses primitives and is compiled with JDK 1.4 or earlier, and the Sum class is the latest version shown (expecting references instead of primitives) and is compiled with J2SE 5 or later, a NoSuchMethodError like the following will be encountered (the "S" indicates it was the "add" method expecting a primitive short and the "V" indicates that method returned void):

Exception in thread "main" java.lang.NoSuchMethodError: Sum.add(S)V
        at Main.main(Main.java:9)

On the other hand, if the client is compiled with J2SE 5 or later with primitive values being supplied to Sum as in the first example (pre-unboxing), and the Sum class is compiled in JDK 1.4 or earlier with "add" methods expecting primitives, a different version of the NoSuchMethodError is encountered. Note that the Short reference is cited here:

Exception in thread "main" java.lang.NoSuchMethodError: Sum.add(Ljava/lang/Short;)V
        at Main.main(Main.java:9)

There are several observations and reminders to Java developers that come from this.

Classpaths are important:
- Java .class files compiled with the same version of Java (same -source and -target) would have avoided the particular problem in this post.
- Classpaths should be as lean as possible to reduce/avoid the possibility of picking up stray "old" class definitions.
- Build "clean" targets and other build operations should be sure to clean past artifacts thoroughly, and builds should rebuild all necessary application classes.

Autoboxing and unboxing are well-intentioned and often highly convenient, but can lead to surprising issues if not kept in mind to some degree. In this post, the need to still check for null (or know that the object is non-null) remains in situations when implicit dereferencing will take place as a result of unboxing. It's a matter of API style taste whether to allow clients to pass nulls and have the serving class check for null on their behalf.
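The unboxing NullPointerException discussed above is easy to reproduce in isolation. The following standalone sketch (mine, not from the original post; the method name is hypothetical) passes a null Integer to a method declared with a primitive int parameter. It compiles without complaint and fails only at runtime, when the implicit intValue() call dereferences null:

```java
public class UnboxingNpeDemo {
    // primitive parameter: any Integer argument is unboxed at the call site
    static double half(int value) {
        return value / 2.0;
    }

    public static void main(String[] args) {
        Integer boxed = 21;
        System.out.println(half(boxed)); // unboxes normally and prints 10.5

        Integer nullRef = null;
        try {
            half(nullRef); // implicit nullRef.intValue() throws here
            System.out.println("unreachable");
        } catch (NullPointerException npe) {
            System.out.println("NPE from unboxing null");
        }
    }
}
```

Note that the failure happens in the calling code, not inside the method, which is why the null check must either live with the caller or the API must switch to reference parameters, as the revised Sum class does.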
In an industrial application, I would have stated whether null was allowed or not for each "add" method parameter with @param in each method's Javadoc comment. In other situations, one might want to leave it the responsibility of the caller to ensure any passed-in reference is non-null, and would be content throwing a NullPointerException if the caller did not obey that contract (which should also be specified in the method's Javadoc).

We typically see NoSuchMethodError when a method is completely removed, when we access an old class before that method was available, or when a method's API has changed in terms of types or number of types. In a day when Java autoboxing and unboxing are largely taken for granted, it can be easy to think that changing a method from taking a primitive to taking the corresponding reference type won't affect anything, but even that change can lead to an exception if not all classes involved are built with a version of Java supporting autoboxing and unboxing.

One way to determine the version of Java against which a particular .class file was compiled is to use javap -verbose and to look in the javap output for the "major version:". In the classes I used in my examples in this post (compiled against JDK 1.4 and Java SE 8), the "major version" entries were 48 and 52 respectively (the General Layout section of the Wikipedia entry on Java class file lists the major versions).

Fortunately, the issue demonstrated with examples and text in this post is not that common, thanks to builds typically cleaning all artifacts and rebuilding code on a relatively continuous basis. However, there are cases where this could occur, and one of the most likely such situations is when an old JAR file is used accidentally because it lies in wait on the runtime classpath.

Reference: Autoboxing, Unboxing, and NoSuchMethodError from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

The Emergence of DevOps and the Fall of the Old Order

Software Engineering has always been dependent on IT operations to take care of the deployment of software to a production environment. In the various roles that I have been in, IT operations has gone by various monikers, from "Data Center" to "Web Services". An organisation delivering software used to be able to separate these roles cleanly. Software Engineering and IT Operations were able to work in a somewhat isolated manner, with neither really needing the knowledge that the other held in their respective domains. Software Engineering would communicate with IT operations through "Deployment Requests", usually raised after ensuring that adequate tests had been conducted on the software. However, the traditional way of organising departments in a software delivery organisation is starting to seem obsolete. The reason is that software infrastructure has moved in the direction of being "agile". The same buzzword that gripped the software development world has started to exert its effect on IT infrastructure. The evidence of this seismic shift is seen in the fastest-growing (and most disruptive) companies today. Netflix, WhatsApp and many other tech companies have moved onto what we would call "cloud" infrastructure, a market dominated by Amazon Web Services. There has been huge progress in the virtualization of hardware resources. This has in turn allowed companies like AWS and Rackspace to convert their server farms into discrete units of computing resources that can be diced, parcelled and redistributed as a service to their customers in an efficient manner. It is inevitable that all these configurable "hardware" resources will eventually become some form of "software" resource that can be maximally utilized by businesses. This has in turn bred a whole new genre of skill set required to manage, control and deploy this Infrastructure as a Service (IaaS).
Some of the tools used to manage these services include provisioning tools like Chef or Puppet. Together with the software APIs provided by the IaaS vendors, infrastructure can be brought up or down as required. The availability of large quantities of computing resources, without all the upfront costs associated with capital expenditure on hardware, has led to an explosion in the number of startups trying to solve problems of all kinds imaginable; coupled with the prevalence of powerful mobile devices, this has led to a digital renaissance for many industries. However, this renaissance has also created demand for a different kind of software organisation. As someone who has been part of software engineering and development, I am witness to the rapid evolution of the profession. The increasing scale of data and processing needs requires a complete shift in paradigm from the old software delivery organisation to a new one that melds software engineering and IT operations together. This is where the role of "DevOps" comes into the picture. Recruiting DevOps engineers and restructuring IT operations around such roles enables businesses to be agile. Some businesses, whose survival depends on the availability of their software on the Internet, will find it imperative to model their software delivery organisation around DevOps. Having the ability to capitalise on software automation to deploy infrastructure within minutes allows a business to scale up quickly. Being able to practise continuous delivery of software allows features to get to market quickly and creates a feedback loop through which a business can improve itself.
We are witness to a new world order, and software delivery organisations that cannot successfully transition to this Brave New World will find themselves falling behind quickly, especially when a competitor is able to scale and deliver software faster, more reliably and with fewer personnel.

Reference: The Emergence of DevOps and the Fall of the Old Order from our JCG partner Lim Han at the Developers Corner blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.