5 Things I’ve learnt being a scrum master

I've been a scrum master now for about 6 months. Having previously been involved in scrum as a product owner, as well as a developer, moving into this role has really opened my eyes to some of the more political and arguably awkward elements of trying to get rid of impediments.

Stay calm when others aren't

Something that I think is really key about being a scrum master is that you have to be thick-skinned. You have to not only push back, but when people whine at you and bring office politics into play, it's vital that you remember exactly what the end game is: to meet the sprint goal! I should also point out that this doesn't mean ruling with an iron fist; as a scrum master, you still succeed with the team and fail with the team. You can tell when something isn't working, and using an agile methodology shouldn't be a painful process: it should be motivating.

You are neither a manager, nor just a developer

It's quite an interesting position to be in because, from my experience, it's not unrealistic to get your hands dirty in some code while being a scrum master. You have to be strong enough to fend people off from poaching your team members, even if they're higher up the food chain than you. You aren't "in charge" of the team, but you do have a responsibility to push back on poachers.

Don't be afraid to say "no"

If your product owner is telling you to put 60 points into a sprint when you know the velocity you've been hitting for the past 4 sprints has consistently been 40 points, don't be afraid to say "what you're asking is unattainable". It's much better to be honest early on and push back. Blindly saying yes, promising to the customer and then having to deal with the consequences later on isn't where anyone wants to be.

Make sure stand ups are to the point

This might be like teaching grandma to suck eggs, but it's vital that stand ups really are short, sharp and to the point. There's nothing worse than listening to a stand up where everyone in the back of their mind is thinking "I wish s/he'd just get to the point!". This is a situation where you have to stick to your guns, and if people get offended, they get offended. You have to tell the person: "we don't need to know any of the extra detail, we just need to know from a high level what it is you're going to achieve today, did you achieve what you set out to do yesterday and, importantly, do you have any impediments".

Keep things moving

Sometimes things in a sprint can get stale and tasks can get stuck in one state; you need to keep them moving! As an example, if there's a task that's been coded for half a day but not released to testing, find out why. You never know: the CI server might be down, or there might be a problem releasing. You need to get it out before it becomes an impediment. Keeping the task board fresh and a true representation of what's actually happening in your sprint can really boost morale when you know there's only a few tasks left and you're literally "sprinting" to the finish!

Reference: 5 Things I've learnt being a scrum master from our JCG partner David Gray at the Code Mumble blog.

Test Attribute #10 – Isolation

This is the last, final, 10th entry in the ten commandments of test attributes that started here. And you should read all of them.

We usually talk about isolation in terms of mocking. Meaning, when we want to test our code, and the code has dependencies, we use mocking to fake those dependencies and allow us to test the code in isolation. That's code isolation. But test isolation is different. An isolated test can run alone, in a suite, in any order, independent from the other tests, and give consistent results. We've already identified in footprint the different environment dependencies that can affect the result, and of course the tested code has something to do with it. Other tests can also create dependency, directly or not. In fact, sometimes we may be relying on the order of tests.

To give an example, I summon the witness for the prosecution: the Singleton. Here's some basic code using a singleton:

public class Counter
{
    private static Counter instance;
    private int count = 0;

    public static void Init()
    {
        instance = new Counter();
    }

    public static Counter GetInstance()
    {
        return instance;
    }

    public int GetValue()
    {
        return count++;
    }
}

Pretty simple: the static instance is initialized in a call to Init. We can write these tests:

[TestMethod]
public void CounterInitialized_WorksInIsolation()
{
    Counter.Init();
    var result = Counter.GetInstance().GetValue();
    Assert.AreEqual(0, result);
}

[TestMethod]
public void CounterNotInitialized_ThrowsInIsolation()
{
    var result = Counter.GetInstance().GetValue();
    Assert.AreEqual(1, result);
}

Note that the second test passes when running after the first. But if you run it alone it crashes, because the instance is not initialized. Of course, that's the kind of thing that gives singletons a bad name. And now you need to jump through hoops in order to check the second case. By the way, we're not just relying on the order of the tests; we're relying on the way the test runner runs them. It could be in the order we've written them, but not necessarily.

While singletons mostly appear in the tested code, test dependency can occur because of the tests themselves. As long as you keep state in the test class, including mocking operations, there's a chance that you're depending on the order of the run. Do you know this trick?

public class MyTests : BaseTest
{
    ///...

Why not put all common code in a base class, then derive the test class from it? Well, apart from making readability suffer and debugging excruciating, we now have all kinds of test setup and behavior located in another shared place. It may be that the test itself does not suffer interference from other tests, but we're introducing this risk by putting shared code in the base class. Plus, you'll need to know more about initialization order. And what if the base class is using a singleton? Antics ensue.

Test isolation issues show themselves very easily, because once they are out of order (ha-ha), you'll get the red light. The problem is identifying the problem, because it may seem like an "irreproducible problem". In order to avoid isolation problems:

- Check the code. If you can identify patterns of usage like singleton, be aware of that and put it to use: either initialize the singleton before the whole run, or restart it before every test.
- Rearrange. If there are additional dependencies (like our counter increase), start thinking about rearranging the tests. Because of the way the code is written, you're starting to test more than just small operations.
- Don't inherit.
  Test base classes create interdependence and hurt isolation.
- Mocking. Use mocking to control any shared dependency.
- Clean up. Make sure that tests clean up after themselves, or instead clean up before every run.

Isolation issues in tests are very annoying because, especially in unit tests, they can be easily avoided. Know the code, understand the dependencies, and never rely on another test to set up the state needed for the current one.

Reference: Test Attribute #10 – Isolation from our JCG partner Gil Zilberfeld at the Geek Out of Water blog.

Java Concurrency Tutorial – Atomicity and race conditions

Atomicity is one of the key concepts in multi-threaded programs. We say a set of actions is atomic if they all execute as a single operation, in an indivisible manner. Taking for granted that a set of actions in a multi-threaded program will be executed serially may lead to incorrect results. The reason is thread interference, which means that if two threads execute several steps on the same data, they may overlap. The following Interleaving example shows two threads executing several actions (prints in a loop) and how they overlap:

public class Interleaving {
    public void show() {
        for (int i = 0; i < 5; i++) {
            System.out.println(Thread.currentThread().getName() + " - Number: " + i);
        }
    }

    public static void main(String[] args) {
        final Interleaving main = new Interleaving();

        Runnable runner = new Runnable() {
            @Override
            public void run() {
                main.show();
            }
        };

        new Thread(runner, "Thread 1").start();
        new Thread(runner, "Thread 2").start();
    }
}

When executed, it will produce unpredictable results. As an example:

Thread 2 - Number: 0
Thread 2 - Number: 1
Thread 2 - Number: 2
Thread 1 - Number: 0
Thread 1 - Number: 1
Thread 1 - Number: 2
Thread 1 - Number: 3
Thread 1 - Number: 4
Thread 2 - Number: 3
Thread 2 - Number: 4

In this case, nothing wrong happens since they are just printing numbers. However, when you need to share the state of an object (its data) without synchronization, this leads to the presence of race conditions.

Race condition

Your code will have a race condition if there's a possibility of producing incorrect results due to thread interleaving. This section describes two types of race conditions:

- Check-then-act
- Read-modify-write

To remove race conditions and enforce thread safety, we must make these actions atomic by using synchronization. Examples in the following sections will show what the effects of these race conditions are.

Check-then-act race condition

This race condition appears when you have a shared field and expect to serially execute the following steps:

1. Get a value from a field.
2. Do something based on the result of the previous check.

The problem here is that when the first thread is going to act after the previous check, another thread may have interleaved and changed the value of the field. Now, the first thread will act based on a value that is no longer valid. This is easier to see with an example. UnsafeCheckThenAct is expected to change the field number once; following calls to the changeNumber method should result in the execution of the else condition:

public class UnsafeCheckThenAct {
    private int number;

    public void changeNumber() {
        if (number == 0) {
            System.out.println(Thread.currentThread().getName() + " | Changed");
            number = -1;
        } else {
            System.out.println(Thread.currentThread().getName() + " | Not changed");
        }
    }

    public static void main(String[] args) {
        final UnsafeCheckThenAct checkAct = new UnsafeCheckThenAct();
        for (int i = 0; i < 50; i++) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    checkAct.changeNumber();
                }
            }, "T" + i).start();
        }
    }
}

But since this code is not synchronized, it may (there's no guarantee) result in several modifications of the field:

T13 | Changed
T17 | Changed
T35 | Not changed
T10 | Changed
T48 | Not changed
T14 | Changed
T60 | Not changed
T6 | Changed
T5 | Changed
T63 | Not changed
T18 | Not changed

Another example of this race condition is lazy initialization. A simple way to correct this is to use synchronization.
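Before looking at the fix, note that the lazy-initialization variant just mentioned follows exactly the same check-then-act pattern; a minimal illustrative sketch (this class does not appear in the original post):

public class UnsafeLazyInit {

    private Object resource; // shared field, starts as null

    public Object getResource() {
        if (resource == null) {        // check
            resource = new Object();   // then act: two threads may both observe null
        }                              // and end up creating two different instances
        return resource;
    }
}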
SafeCheckThenAct is thread-safe because it has removed the race condition by synchronizing all accesses to the shared field.

public class SafeCheckThenAct {
    private int number;

    public synchronized void changeNumber() {
        if (number == 0) {
            System.out.println(Thread.currentThread().getName() + " | Changed");
            number = -1;
        } else {
            System.out.println(Thread.currentThread().getName() + " | Not changed");
        }
    }

    public static void main(String[] args) {
        final SafeCheckThenAct checkAct = new SafeCheckThenAct();
        for (int i = 0; i < 50; i++) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    checkAct.changeNumber();
                }
            }, "T" + i).start();
        }
    }
}

Now, executing this code will always produce the same expected result; only a single thread will change the field:

T0 | Changed
T54 | Not changed
T53 | Not changed
T62 | Not changed
T52 | Not changed
T51 | Not changed
...

In some cases there will be other mechanisms which perform better than synchronizing the whole method, but I won't discuss them in this post.

Read-modify-write race condition

Here we have another type of race condition, which appears when executing the following set of actions:

1. Fetch a value from a field.
2. Modify the value.
3. Store the new value to the field.

In this case there's another dangerous possibility, which consists of the loss of some updates to the field. One possible outcome is:

1. Field's value is 1.
2. Thread 1 gets the value from the field (1).
3. Thread 1 modifies the value (5).
4. Thread 2 reads the value from the field (1).
5. Thread 2 modifies the value (7).
6. Thread 1 stores the value to the field (5).
7. Thread 2 stores the value to the field (7).

As you can see, the update with the value 5 has been lost. Let's see a code sample. UnsafeReadModifyWrite shares a numeric field which is incremented each time:

public class UnsafeReadModifyWrite {
    private int number;

    public void incrementNumber() {
        number++;
    }

    public int getNumber() {
        return this.number;
    }

    public static void main(String[] args) throws InterruptedException {
        final UnsafeReadModifyWrite rmw = new UnsafeReadModifyWrite();
        for (int i = 0; i < 1_000; i++) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    rmw.incrementNumber();
                }
            }, "T" + i).start();
        }
        Thread.sleep(6000);
        System.out.println("Final number (should be 1_000): " + rmw.getNumber());
    }
}

Can you spot the compound action which causes the race condition? I'm sure you did, but for completeness I will explain it anyway. The problem is in the increment (number++). This may appear to be a single action but in fact it is a sequence of three actions (get-increment-write). When executing this code, we may see that we have lost some updates:

2014-08-08 09:59:18,859|UnsafeReadModifyWrite|Final number (should be 10_000): 9996

Depending on your computer it will be very difficult to reproduce this update loss, since there's no guarantee on how threads will interleave. If you can't reproduce the above example, try UnsafeReadModifyWriteWithLatch, which uses a CountDownLatch to synchronize the threads' start, and repeats the test a hundred times.
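The UnsafeReadModifyWriteWithLatch class itself isn't listed in the post; a minimal sketch of what such a latch-based harness could look like, with details assumed rather than copied from the linked repository:

import java.util.concurrent.CountDownLatch;

public class UnsafeReadModifyWriteWithLatch {

    private int number;

    public void incrementNumber() {
        number++; // still the unsynchronized get-increment-write sequence
    }

    public int getNumber() {
        return this.number;
    }

    public static void main(String[] args) throws InterruptedException {
        for (int test = 0; test < 100; test++) {
            final UnsafeReadModifyWriteWithLatch rmw = new UnsafeReadModifyWriteWithLatch();
            final CountDownLatch startSignal = new CountDownLatch(1);
            final CountDownLatch doneSignal = new CountDownLatch(1_000);
            for (int i = 0; i < 1_000; i++) {
                new Thread(new Runnable() {
                    @Override
                    public void run() {
                        try {
                            startSignal.await(); // every thread blocks here ...
                            rmw.incrementNumber();
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        } finally {
                            doneSignal.countDown();
                        }
                    }
                }, "T" + i).start();
            }
            startSignal.countDown(); // ... and all are released at once
            doneSignal.await();      // wait until every increment has finished
            System.out.println("Final number (should be 1_000): " + rmw.getNumber());
        }
    }
}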
You should probably see some invalid values among all the results:

Final number (should be 1_000): 1000
Final number (should be 1_000): 1000
Final number (should be 1_000): 1000
Final number (should be 1_000): 997
Final number (should be 1_000): 999
Final number (should be 1_000): 1000
Final number (should be 1_000): 1000
Final number (should be 1_000): 1000
Final number (should be 1_000): 1000
Final number (should be 1_000): 1000
Final number (should be 1_000): 1000

This example can be solved by making all three actions atomic. SafeReadModifyWriteSynchronized uses synchronization in all accesses to the shared field:

public class SafeReadModifyWriteSynchronized {
    private int number;

    public synchronized void incrementNumber() {
        number++;
    }

    public synchronized int getNumber() {
        return this.number;
    }

    public static void main(String[] args) throws InterruptedException {
        final SafeReadModifyWriteSynchronized rmw = new SafeReadModifyWriteSynchronized();
        for (int i = 0; i < 1_000; i++) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    rmw.incrementNumber();
                }
            }, "T" + i).start();
        }
        Thread.sleep(4000);
        System.out.println("Final number (should be 1_000): " + rmw.getNumber());
    }
}

Let's see another way to remove this race condition. In this specific case, since the field number is independent of other variables, we can make use of atomic variables. SafeReadModifyWriteAtomic uses an atomic variable to store the value of the field:

import java.util.concurrent.atomic.AtomicInteger;

public class SafeReadModifyWriteAtomic {
    private final AtomicInteger number = new AtomicInteger();

    public void incrementNumber() {
        number.getAndIncrement();
    }

    public int getNumber() {
        return this.number.get();
    }

    public static void main(String[] args) throws InterruptedException {
        final SafeReadModifyWriteAtomic rmw = new SafeReadModifyWriteAtomic();
        for (int i = 0; i < 1_000; i++) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    rmw.incrementNumber();
                }
            }, "T" + i).start();
        }
        Thread.sleep(4000);
        System.out.println("Final number (should be 1_000): " + rmw.getNumber());
    }
}

Following posts will further explain mechanisms like locking and atomic variables.

Conclusion

This post explained some of the risks implied in executing compound actions in non-synchronized multi-threaded programs. To enforce atomicity and prevent thread interleaving, one must use some type of synchronization. You can take a look at the source code at github.

Reference: Java Concurrency Tutorial - Atomicity and race conditions from our JCG partner Xavier Padro at the Xavier Padró's Blog blog.

A Wonderful SQL Feature: Quantified Comparison Predicates (ANY, ALL)

Have you ever wondered about the use-case behind SQL's ANY (also: SOME) and ALL keywords? You have probably not yet encountered these keywords in the wild. Yet they can be extremely useful. But first, let's see how they're defined in the SQL standard. The easy part:

8.7 <quantified comparison predicate>

Function

Specify a quantified comparison.

Format

<quantified comparison predicate> ::=
    <row value constructor> <comp op> <quantifier> <table subquery>

<quantifier> ::= <all> | <some>
<all>  ::= ALL
<some> ::= SOME | ANY

Intuitively, such a quantified comparison predicate can be used as such:

-- Is any person of age 42?
42 = ANY (SELECT age FROM person)

-- Are all persons younger than 42?
42 > ALL (SELECT age FROM person)

Let's stick with the useful ones. Observe that you have probably written the above queries with a different syntax, such as:

-- Is any person of age 42?
42 IN (SELECT age FROM person)

-- Are all persons younger than 42?
42 > (SELECT MAX(age) FROM person)

In fact, you've used the <in predicate>, or a greater-than predicate with a <scalar subquery> and an aggregate function.

The IN predicate

It's not a coincidence that you might have used the <in predicate> just like the above <quantified comparison predicate> using ANY. In fact, the <in predicate> is specified just like that:

8.4 <in predicate>

Syntax Rules

2) Let RVC be the <row value constructor> and let IPV be the <in predicate value>.

3) The expression

       RVC NOT IN IPV

   is equivalent to

       NOT ( RVC IN IPV )

4) The expression

       RVC IN IPV

   is equivalent to

       RVC = ANY IPV

Precisely! Isn't SQL beautiful? Note that the implicit consequences of 3) lead to a very peculiar behaviour of the NOT IN predicate with respect to NULL, which few developers are aware of.

Now, it's getting awesome

So far, there is nothing out of the ordinary with these <quantified comparison predicate>s. All of the previous examples can be emulated with "more idiomatic", or let's say "more everyday", SQL. But the true awesomeness of the <quantified comparison predicate> appears only when used in combination with <row value expression>s where rows have a degree / arity of more than one:

-- Is any person called "John" of age 42?
(42, 'John') = ANY (SELECT age, first_name FROM person)

-- Are all persons younger than 55?
-- Or if they're 55, do they all earn less than 150'000.00?
(55, 150000.00) > ALL (SELECT age, wage FROM person)

See the above queries in action on PostgreSQL in this SQLFiddle. At this point, it is worth mentioning that few databases actually support:

- row value expressions, or
- quantified comparison predicates with row value expressions

Even if specified in SQL-92, it looks as if most databases are still taking their time to implement this feature 22 years later.

Emulating these predicates with jOOQ

But luckily, there is jOOQ to emulate these features for you. Even if you're not using jOOQ in your project, the following SQL transformation steps can be useful if you want to express the above predicates. Let's have a look at how this could be done in MySQL:

-- This predicate
(42, 'John') = ANY (SELECT age, first_name FROM person)

-- ... is the same as this:
EXISTS (
  SELECT 1
  FROM person
  WHERE age = 42
  AND first_name = 'John'
)

What about the other predicate?

-- This predicate
(55, 150000.00) > ALL (SELECT age, wage FROM person)
-- ... is the same as these:

----------------------------
-- No quantified comparison predicate with
-- row value expressions available
(55, 150000.00) > (
  SELECT age, wage
  FROM person
  ORDER BY 1 DESC, 2 DESC
  LIMIT 1
)

-- No row value expressions available at all
NOT EXISTS (
  SELECT 1
  FROM person
  WHERE (55 < age)
  OR (55 = age AND 150000.00 <= wage)
)

Clearly, the EXISTS predicate can be used in pretty much every database to emulate what we've seen before. If you just need this for a one-shot emulation, the above examples will be sufficient. If, however, you want to more formally use <row value expression> and <quantified comparison predicate>, you had better get SQL transformation right. Read on about SQL transformation in this article here.

Reference: A Wonderful SQL Feature: Quantified Comparison Predicates (ANY, ALL) from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.

The most important factor in software decay

Do you have big balls of mud? Here's an experiment to amaze your friends. You probably listen to music on your phone via some sort of headset. The headset we shall consider here consists of two earbuds (in-ear pieces, rather than head-phones which cover the ears) connected via wires to a jack which plugs into the phone itself. Disconnect your headset from your phone. Show it to your friends. Figure 1 shows a typical example of such a headset.

As you hold the headset wires twixt thumb and finger, shake them about until your friends are bored to tears. Note that the wires may temporarily become tangled with one another but invariably return to the untangled state of figure 1. Now, carefully fold the headset into your trouser pocket and take a long walk, dragging a friend with you to witness that you do not touch the headset further. Finally, return to your friends and carefully extract the headset from your trouser pocket. TA-DA!! The wires will have mysteriously collapsed into a tangled disordered mess of kinks, knots and twists of the most bewildering complexity and inventiveness, all without your ever having touched them! Figure 2 shows a sorry headset fished from a trouser pocket.

Don't tell anyone, but here's the secret …

Why does this happen? Why don't the wires remain untangled? To answer this, we must look at the three actors that take to the stage in both scenarios, scenarios hereafter named the "Out-of-pocket" and the "In-pocket" scenarios.

First is thermal fluctuation. This refers to the shaking of the headset, both explicitly by jiggling it up and down in the out-of-pocket scenario, and by the slower action of the stride in the in-pocket scenario. Both mechanisms subject the headset to pulses of energy causing its parts to move at random.

Second is gravity. Gravity tends to pull all parts of the headset down, and as the jack and earbuds reside at the wire ends (and as the headset is held roughly half-way along its length in figure 1), the earbuds and jack tend to position themselves towards the bottom of figure 1.

Third is spatial extension. Perhaps the greatest difference between the two scenarios is the size of the arena in which they operate. In the out-of-pocket scenario, the holding aloft of the headset allows gravity to stretch the headset wires to great extent. Knot-formation relies on two sections of wire coming into contact. Such contacts become simply less probable with increasing volume. In the confined space of the in-pocket scenario, with wire pressed on wire, knots become far more likely. (Friction also plays a role, with wind resistance easily overcome by gravity in the out-of-pocket scenario but with the cloth-on-cloth surface resistance of the in-pocket scenario holding wires in contact for longer than might otherwise occur, again increasing the probability of knotting.)

Thus, consider the headset in the out-of-pocket scenario. Tugging on the headset will cause it to move, introducing several new bends and twists in the wires. Throughout, however, gravity constantly pulls the earbuds and wires downwards from the point at which they are held, so that as the energy of the tug dissipates gravity will already have begun ironing out the loose bends. In the in-pocket scenario, however, each stride will deliver a weak energy impulse to the headset, enough to move the headset around just a tiny amount.
Confined within the pocket volume and unsupported at any point from above, the wires do not stretch out under the influence of gravity but may even pool at the pocket's bottom, where they will writhe over one another, producing optimal knot-formation conditions.

Despite the tediousness of such a description, we can view these events from another perspective: the perspective of microstate versus macrostate. Now things become a little more interesting.

Microstate and macrostate

We can think of a "Microstate" of the headset not, as might be expected, as the state of a small part of the headset, but rather as its complete configuration: a precise description, at a single instant, of the position of its earbuds, jack, and of the entire length and disposition of its wires. In contrast to this detail, a "Macrostate" is a broad and undetailed description of the headset. For reasons that shall be explained in a moment, the "Order" of the headset interests us most – that is, how messy and tangled it appears – and hence we shall use the least amount of information possible to describe this order. We shall use the single bit of information – yes or no – that answers the question, "Does this headset look ordered?"

We can say that from an initial microstate of the headset, A, only a certain set of microstates lies thermally accessible, in that such microstates deviate from A to a degree allowable by the upcoming energy impulse. Given the randomness of this energy impulse, there is an equal probability of entering any one of those accessible microstates; let us say that the system happens to transition from microstate A to microstate B. For the out-of-pocket scenario, gravity constantly tries to pull the headset back from microstate B to microstate A (or something like it), so the set of thermally accessible microstates available from microstate B will be slightly biased, because gravity will prevent the system reaching states as far from B as B is from A. For the in-pocket scenario, however, once the headset arrives in microstate B, the new set of accessible microstates will contain as many microstates that move back towards A as away from it. And given that the choice of the new microstate is random, the in-pocket scenario will allow the headset to enter many microstates inaccessible to the out-of-pocket scenario.

Imagine for a moment that you could count all the microstates in which the headset could find itself; that is, you could take a snap-shot of every possible position that the headset wires could assume. If we focus on just one wire, we might say that the wire looks ordered when it forms a straight line: it contains 0 bends. It still looks ordered with 1 bend, or perhaps 2, or 100. But above a certain number of bends it begins to look disordered. This simply accords with a casual understanding of that term. Yet how many bends can a wire support? Perhaps thousands. Or tens of thousands. Or millions. The point is that the vast majority of microstates of the wire will be what we call "Disordered," and only a tiny proportion will be "Ordered."

Thus it is not that there are fewer ways to make a disordered headset when it is held aloft than when it sits in a pocket, but that putting a headset in a pocket allows it to randomly explore a far larger number of its available microstates; and as the vast majority of these microstates correspond to the disordered macrostate, the in-pocket headset is overwhelmingly likely to end up disordered.
What on earth has this got to do with software?

Software structure, too, exhibits this microstate/macrostate duality. The package structure of a Java program reveals packages connected to other packages by dependencies. This structure at any given time represents one microstate of the system. Programmers add features and make updates, acting as random thermal fluctuations, changing the package structure slightly, nudging the system into new microstates. (Programmers do not, of course, make random updates to a code-base, in that they do not pick a text character at random from the source code and flip it. Nevertheless, no one can accurately predict in advance a year's worth of updates in any significant code-base. It is in this sense – in this unpredictability and essential patternlessness – that the programmer's coding can be modeled as a random input to the program.)

Pulling back from the detail of the individual microstates, we evaluate the macrostate of the system by asking, "Does this program look ordered?" If ordered, we say the program is "Well structured;" otherwise it is "Poorly structured." Small programs, with few inter-package dependencies, usually appear ordered. As programs grow, however, there seems inevitably to come a point when dependencies between packages become dense, overstretched, and start looping back on themselves: such package structures truly merit the description "Disordered."

Brian Foote and Joseph Yoder memorably labeled any such disordered system a Big Ball of Mud, claiming, "A big ball of mud is haphazardly structured, sprawling, sloppy, duct-tape and bailing wire, spaghetti code jungle." They also claimed big balls of mud to be the de-facto software structure rather than the exception, and our headset musings show why this may be so. With Java packages free to depend on one another without restriction, there are far more ways to produce a disordered set of inter-dependent packages than an ordered set of those same packages; so given the thermal fluctuation of programmer input, the program will find itself propelled into a series of randomly selected microstates, and as most microstates correspond to a disordered macrostate, most systems will end up disordered.

Figure 3 shows the package structures of four programs depicted as four spoiklin diagrams, in which each circle represents a package, each straight line a dependency from a package above to one below, and each curved line a dependency from a package below to one above. Despite the rather miniaturized graphics, most programmers would consider only one of these programs to be well-structured.

All of which would remain a mere aesthetic curiosity except for one crucial point: the structure of a program bears directly on the predictability of update costs, and often on the actual costs themselves. Well-structured programs clearly show which packages use the services provided by other packages, thus simplifying prediction of the costs involved in any particular package change. Poorly structured programs, on the other hand, suffocate beneath such choking dependencies that even identifying the impacted packages that might stem from a particular change becomes a chore, clouding cost predictability and often causing that cost to grow disproportionately large.

In figure 3, program B was designed using radial encapsulation, which allows package dependencies only in the direction of the top-level package.
Such a restriction mirrors the holding aloft of a headset, allowing gravity to stretch it out and keep it spontaneously ordered. Such a restriction makes the forming of a big ball of mud as improbable as a headset held aloft suddenly collapsing into a sequence of knots. The precise mechanism, however, is unimportant. What matters is that the program was designed according to systemic principles that aggressively restrict the number of microstates into which the program might wander, whilst ensuring that those inhabitable microstates correspond to an ordered, well-structured macrostate.

No one designs poorly structured programs to be poorly structured. Instead, programs become poorly structured because the structuring principles according to which they are built do not prevent them from becoming so. Perhaps programmers should ask not, "Why are big balls of mud the de-facto program structure?" but, "Why are good structuring principles ignored?"

Summary

Some find words like "Thermodynamics," "Equilibrium," and "Entropy" intimidating, so posts exploring these concepts should delay their introduction.

Reference: The most important factor in software decay from our JCG partner Edmund Kirwan at the A blog about software blog.

Autoboxing, Unboxing, and NoSuchMethodError

J2SE 5 introduced numerous features to the Java programming language. One of these features is autoboxing and unboxing, a feature that I use almost daily without even thinking about it. It is often convenient (especially when used with collections), but every once in a while it leads to some nasty surprises, "weirdness," and "madness." In this blog post, I look at a rare (but interesting to me) case of NoSuchMethodError resulting from mixing classes compiled with Java versions before autoboxing/unboxing with classes compiled with Java versions that include autoboxing/unboxing.

The next code listing shows a simple Sum class that could have been written before J2SE 5. It has overloaded "add" methods that accept different primitive numeric data types, and each instance of Sum simply adds all types of numbers provided to it via any of its overloaded "add" methods.

Sum.java (pre-J2SE 5 Version)

import java.util.ArrayList;

public class Sum
{
   private double sum = 0;

   public void add(short newShort)
   {
      sum += newShort;
   }

   public void add(int newInteger)
   {
      sum += newInteger;
   }

   public void add(long newLong)
   {
      sum += newLong;
   }

   public void add(float newFloat)
   {
      sum += newFloat;
   }

   public void add(double newDouble)
   {
      sum += newDouble;
   }

   public String toString()
   {
      return String.valueOf(sum);
   }
}

Before unboxing was available, any clients of the above Sum class would need to provide primitives to these "add" methods or, if they had reference equivalents of the primitives, would need to convert the references to their primitive counterparts before calling one of the "add" methods. The onus was on the client code to do this conversion from reference type to corresponding primitive type before calling these methods. Examples of how this might be accomplished are shown in the next code listing.

No Unboxing: Client Converting References to Primitives

private static String sumReferences(
   final Long longValue, final Integer intValue, final Short shortValue)
{
   final Sum sum = new Sum();
   if (longValue != null)
   {
      sum.add(longValue.longValue());
   }
   if (intValue != null)
   {
      sum.add(intValue.intValue());
   }
   if (shortValue != null)
   {
      sum.add(shortValue.shortValue());
   }
   return sum.toString();
}

J2SE 5's autoboxing and unboxing feature was intended to address this extraneous effort required in a case like this. With unboxing, client code could call the above "add" methods with reference types corresponding to the expected primitive types, and the references would be automatically "unboxed" to the primitive form so that the appropriate "add" methods could be invoked. Section 5.1.8 ("Unboxing Conversion") of The Java Language Specification explains which primitives the supplied numeric reference types are converted to in unboxing, and Section 5.1.7 ("Boxing Conversion") of that same specification lists the reference types that are autoboxed from each primitive in autoboxing.

In this example, unboxing reduced effort on the client's part in terms of converting reference types to their corresponding primitive counterparts before calling Sum's "add" methods, but it did not completely free the client from needing to process the number values before providing them. Because reference types can be null, it is possible for a client to provide a null reference to one of Sum's "add" methods and, when Java attempts to automatically unbox that null to its corresponding primitive, a NullPointerException is thrown.
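That failure mode takes only two lines to demonstrate (an illustrative snippet, not from the original post):

Integer boxed = null;
int primitive = boxed; // unboxing a null reference throws NullPointerException at runtime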
The next code listing adapts the one from above to show how the conversion of reference to primitive is no longer necessary on the client side, but checking for null is still necessary to avoid the NullPointerException.

Unboxing Automatically Converts Reference to Primitive: Still Must Check for Null

private static String sumReferences(
   final Long longValue, final Integer intValue, final Short shortValue)
{
   final Sum sum = new Sum();
   if (longValue != null)
   {
      sum.add(longValue);
   }
   if (intValue != null)
   {
      sum.add(intValue);
   }
   if (shortValue != null)
   {
      sum.add(shortValue);
   }
   return sum.toString();
}

Requiring client code to check their references for null before calling the "add" methods on Sum may be something we want to avoid when designing our API. One way to remove that need is to change the "add" methods to explicitly accept the reference types rather than the primitive types. Then the Sum class could check for null before explicitly or implicitly (unboxing) dereferencing it. The revised Sum class with this changed and more client-friendly API is shown next.

Sum Class with "add" Methods Expecting References Rather than Primitives

import java.util.ArrayList;

public class Sum
{
   private double sum = 0;

   public void add(Short newShort)
   {
      if (newShort != null)
      {
         sum += newShort;
      }
   }

   public void add(Integer newInteger)
   {
      if (newInteger != null)
      {
         sum += newInteger;
      }
   }

   public void add(Long newLong)
   {
      if (newLong != null)
      {
         sum += newLong;
      }
   }

   public void add(Float newFloat)
   {
      if (newFloat != null)
      {
         sum += newFloat;
      }
   }

   public void add(Double newDouble)
   {
      if (newDouble != null)
      {
         sum += newDouble;
      }
   }

   public String toString()
   {
      return String.valueOf(sum);
   }
}

The revised Sum class is more client-friendly because it allows the client to pass a reference to any of its "add" methods without concern for whether the passed-in reference is null or not. However, changing the Sum class's API like this can lead to NoSuchMethodErrors if either class involved (the client class or one of the versions of the Sum class) is compiled with a different version of Java. In particular, if the client code uses primitives and is compiled with JDK 1.4 or earlier, and the Sum class is the latest version shown (expecting references instead of primitives) and is compiled with J2SE 5 or later, a NoSuchMethodError like the following will be encountered (the "S" indicates it was the "add" method expecting a primitive short and the "V" indicates that method returned void):

Exception in thread "main" java.lang.NoSuchMethodError: Sum.add(S)V
	at Main.main(Main.java:9)

On the other hand, if the client is compiled with J2SE 5 or later with primitive values being supplied to Sum as in the first example (pre-unboxing), and the Sum class is compiled in JDK 1.4 or earlier with "add" methods expecting primitives, a different version of the NoSuchMethodError is encountered. Note that the Short reference is cited here:

Exception in thread "main" java.lang.NoSuchMethodError: Sum.add(Ljava/lang/Short;)V
	at Main.main(Main.java:9)

There are several observations and reminders to Java developers that come from this.

- Classpaths are important:
  - Java .class files compiled with the same version of Java (same -source and -target) would have avoided the particular problem in this post.
  - Classpaths should be as lean as possible to reduce/avoid the possibility of getting stray "old" class definitions.
  - Build "clean" targets and other build operations should be sure to clean past artifacts thoroughly, and builds should rebuild all necessary application classes.
- Autoboxing and unboxing are well-intentioned and often highly convenient, but they can lead to surprising issues if not kept in mind to some degree. In this post, the need to check for null (or know that the object is non-null) remains in situations where implicit dereferencing will take place as a result of unboxing.
- It's a matter of API style taste whether to allow clients to pass nulls and have the serving class check for null on their behalf. In an industrial application, I would have stated whether null was allowed or not for each "add" method parameter with @param in each method's Javadoc comment. In other situations, one might want to leave it the responsibility of the caller to ensure any passed-in reference is non-null, and would be content throwing a NullPointerException if the caller did not obey that contract (which should also be specified in the method's Javadoc).
- We typically see NoSuchMethodError when a method is completely removed, when we access an old class before that method was available, or when a method's API has changed in terms of types or number of types. In a day when Java autoboxing and unboxing are largely taken for granted, it can be easy to think that changing a method from taking a primitive to taking the corresponding reference type won't affect anything, but even that change can lead to an error if not all classes involved are built on a version of Java supporting autoboxing and unboxing.
- One way to determine the version of Java against which a particular .class file was compiled is to use javap -verbose and to look in the javap output for the "major version:". In the classes I used in my examples in this post (compiled against JDK 1.4 and Java SE 8), the "major version" entries were 48 and 52 respectively (the General Layout section of the Wikipedia entry on Java class file lists the major versions).

Fortunately, the issue demonstrated with examples and text in this post is not that common, thanks to builds typically cleaning all artifacts and rebuilding code on a relatively continuous basis. However, there are cases where this could occur, and one of the most likely such situations is when an old JAR file is used accidentally because it lies in wait on the runtime classpath.

Reference: Autoboxing, Unboxing, and NoSuchMethodError from our JCG partner Dustin Marx at the Inspired by Actual Events blog.

The Emergence of DevOps and the Fall of the Old Order

Software engineering has always been dependent on IT operations to take care of the deployment of software to a production environment. In the various roles that I have been in, IT operations has come under various monikers, from "Data Center" to "Web Services". An organisation delivering software used to be able to separate these roles cleanly. Software engineering and IT operations were able to work in a somewhat isolated manner, with neither having the need to really know the knowledge that the other holds in its respective domain. Software engineering would communicate with IT operations through "deployment requests", usually made after ensuring that adequate tests had been conducted on the software.

However, the traditional way of organising departments in a software delivery organisation is starting to seem obsolete. The reason is that software infrastructure has moved towards being "agile". The same buzzword that had gripped the software development world has started to exert its effect on IT infrastructure. The evidence of this seismic shift is seen in the fastest growing (and most disruptive) companies today. Companies like Netflix, Whatsapp and many other tech companies have gone onto what we would call "cloud" infrastructure, a space dominated by Amazon Web Services.

There has been huge progress in the virtualization of hardware resources. This has in turn allowed companies like AWS and Rackspace to convert their server farms into discrete units of computing resources that can be diced, parcelled and redistributed as a service to their customers in an efficient manner. It is inevitable that all these configurable "hardware" resources will eventually be some form of "software" resource that can be maximally utilized by businesses. This has in turn bred a whole new genre of skill sets required to manage, control and deploy this Infrastructure as a Service (IaaS). Some of the tools used by these services include provisioning tools like Chef or Puppet. Together with the software APIs provided by the IaaS vendors, infrastructure can be brought up or down as required.

The availability of large quantities of computing resources, without all the upfront costs associated with capital expenditure on hardware, has led to an explosion in the number of startups trying to solve problems of all kinds imaginable, and coupled with the prevalence of powerful mobile devices it has led to a digital renaissance for many industries. However, this renaissance has also led to the demand for a different kind of software organisation. As someone who has been part of software engineering and development, I am witness to the rapid evolution of the profession. The increasing scale of data and processing needs requires a complete shift in paradigm from the old software delivery organisation to a new one that melds software engineering and IT operations together. This is where the role of a "DevOps" comes into the picture.

Recruiting DevOps in an organisation and restructuring IT operations around such roles enables businesses to be agile. Some businesses whose survival depends on the availability of their software on the Internet will find it imperative to model their software delivery organisation around DevOps. Having the ability to capitalise on software automation to deploy infrastructure within minutes allows a business to scale up quickly.
Being able to practise continuous delivery of software allows features to get into the market quickly and creates a feedback loop through which a business can improve itself. We are witness to a new world order, and software delivery organisations that cannot successfully transition to this Brave New World will find themselves falling behind quickly, especially when a competitor is able to scale and deliver software faster, more reliably and with fewer personnel.

Reference: The Emergence of DevOps and the Fall of the Old Order from our JCG partner Lim Han at the Developers Corner blog.

Publish JAR artifact using Gradle to Artifactory

So I have wasted (invested) a day or two just to find out how to publish a JAR using Gradle to a locally running Artifactory server. I used the Gradle Artifactory plugin to do the publishing. I was lost in an endless loop of including various versions of various plugins and executing all sorts of tasks. Yes, I've read the documentation before. It's just wrong. Perhaps it got better in the meantime.

Executing the following uploaded build info only. No artifact (JAR) was published:

$ gradle artifactoryPublish
:artifactoryPublish
Deploying build info to: http://localhost:8081/artifactory/api/build
Build successfully deployed. Browse it in Artifactory under http://localhost:8081/artifactory/webapp/builds/scala-gradle-artifactory/1408198981123/2014-08-16T16:23:00.927+0200/

BUILD SUCCESSFUL

Total time: 4.681 secs

This guy has saved me, I wanted to kiss him: StackOverflow – upload artifact to artifactory using gradle

I assume that you already have Gradle and Artifactory installed. I had a Scala project, but that doesn't matter; Java should be just fine. I ran Artifactory locally on port 8081. I have also created a new user named devuser who has permissions to deploy artifacts. Long story short, this is my final build.gradle script file:

buildscript {
    repositories {
        maven {
            url 'http://localhost:8081/artifactory/plugins-release'
            credentials {
                username = "${artifactory_user}"
                password = "${artifactory_password}"
            }
            name = "maven-main-cache"
        }
    }
    dependencies {
        classpath "org.jfrog.buildinfo:build-info-extractor-gradle:3.0.1"
    }
}

apply plugin: 'scala'
apply plugin: 'maven-publish'
apply plugin: "com.jfrog.artifactory"

version = '1.0.0-SNAPSHOT'
group = 'com.buransky'

repositories {
    add buildscript.repositories.getByName("maven-main-cache")
}

dependencies {
    compile 'org.scala-lang:scala-library:2.11.2'
}

tasks.withType(ScalaCompile) {
    scalaCompileOptions.useAnt = false
}

artifactory {
    contextUrl = "${artifactory_contextUrl}"
    publish {
        repository {
            repoKey = 'libs-snapshot-local'
            username = "${artifactory_user}"
            password = "${artifactory_password}"
            maven = true
        }
        defaults {
            publications('mavenJava')
        }
    }
}

publishing {
    publications {
        mavenJava(MavenPublication) {
            from components.java
        }
    }
}

I have stored the Artifactory context URL and credentials in the ~/.gradle/gradle.properties file, which looks like this:

artifactory_user=devuser
artifactory_password=devuser
artifactory_contextUrl=http://localhost:8081/artifactory

Now when I run the same task again, it's what I wanted. Both the Maven POM file and the JAR archive are deployed to Artifactory:

$ gradle artifactoryPublish
:generatePomFileForMavenJavaPublication
:compileJava UP-TO-DATE
:compileScala UP-TO-DATE
:processResources UP-TO-DATE
:classes UP-TO-DATE
:jar UP-TO-DATE
:artifactoryPublish
Deploying artifact: http://localhost:8081/artifactory/libs-snapshot-local/com/buransky/scala-gradle-artifactory/1.0.0-SNAPSHOT/scala-gradle-artifactory-1.0.0-SNAPSHOT.pom
Deploying artifact: http://localhost:8081/artifactory/libs-snapshot-local/com/buransky/scala-gradle-artifactory/1.0.0-SNAPSHOT/scala-gradle-artifactory-1.0.0-SNAPSHOT.jar
Deploying build info to: http://localhost:8081/artifactory/api/build
Build successfully deployed. Browse it in Artifactory under http://localhost:8081/artifactory/webapp/builds/scala-gradle-artifactory/1408199196550/2014-08-16T16:26:36.232+0200/

BUILD SUCCESSFUL

Total time: 5.807 secs

Happy end!

Reference: Publish JAR artifact using Gradle to Artifactory from our JCG partner Rado Buransky at the Rado Buransky's Blog blog.

The dark side of Hibernate AUTO flush

Introduction

Now that I have described the basics of JPA and Hibernate flush strategies, I can continue unraveling the surprising behavior of Hibernate's AUTO flush mode.

Not all queries trigger a Session flush

Many would assume that Hibernate always flushes the Session before executing any query. While this might have been a more intuitive approach, and probably closer to JPA's AUTO FlushModeType, Hibernate tries to optimize that. If the currently executed query is not going to hit the pending SQL INSERT/UPDATE/DELETE statements, then the flush is not strictly required. As stated in the reference documentation, the AUTO flush strategy may sometimes synchronize the current persistence context prior to a query execution. It would have been more intuitive if the framework authors had chosen to name it FlushMode.SOMETIMES.

JPQL/HQL and SQL

Like many other ORM solutions, Hibernate offers a limited entity querying language (JPQL/HQL) that's very much based on SQL-92 syntax. The entity query language is translated to SQL by the current database dialect, and so it must offer the same functionality across different database products. Since most database systems are SQL-92 compliant, the entity query language is an abstraction of the most common database querying syntax. While you can use the entity query language in many use cases (selecting entities and even projections), there are times when its limited capabilities are no match for an advanced querying request. Whenever we want to make use of some specific querying techniques, such as:

- window functions,
- pivot tables, or
- common table expressions,

we have no other option but to run native SQL queries. Hibernate is a persistence framework. Hibernate was never meant to replace SQL. If some query is better expressed in a native query, then it's not worth sacrificing application performance on the altar of database portability.

AUTO flush and HQL/JPQL

First we are going to test how the AUTO flush mode behaves when an HQL query is about to be executed. For this we define two unrelated entities, Product and User (the original post shows them in a diagram; a sketch of the two follows after this first test). The test will execute the following actions:

- A Product is going to be persisted.
- Selecting User(s) should not trigger a flush.
- Querying for Product, the AUTO flush should trigger the entity state transition synchronization (a Product INSERT should be executed prior to executing the select query).

Product product = new Product();
session.persist(product);
assertEquals(0L, session.createQuery("select count(id) from User").uniqueResult());
assertEquals(product.getId(), session.createQuery("select p.id from Product p").uniqueResult());

Giving the following SQL output:

[main]: o.h.e.i.AbstractSaveEventListener - Generated identifier: f76f61e2-f3e3-4ea4-8f44-82e9804ceed0, using strategy: org.hibernate.id.UUIDGenerator
Query:{[select count(user0_.id) as col_0_0_ from user user0_][]}
Query:{[insert into product (color, id) values (?, ?)][12,f76f61e2-f3e3-4ea4-8f44-82e9804ceed0]}
Query:{[select product0_.id as col_0_0_ from product product0_][]}

As you can see, the User select hasn't triggered the Session flush. This is because Hibernate inspects the current query space against the pending table statements. If the currently executing query doesn't overlap with the unflushed table statements, the flush can be safely ignored.
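The two entities appear only as a diagram in the original post. Judging from the generated SQL above, they might look roughly like this (a reconstruction, with the mapping details assumed):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.hibernate.annotations.GenericGenerator;

@Entity
public class Product {

    @Id
    @GeneratedValue(generator = "uuid")
    @GenericGenerator(name = "uuid", strategy = "uuid2")
    private String id;     // UUID identifier, matching the UUIDGenerator in the log

    private String color;  // matches the product.color column above
}

// In its own file:
@Entity
public class User {

    @Id
    @GeneratedValue
    private Long id;

    private String favoriteColor; // referenced by the HQL queries below
}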
HQL can detect the Product flush even for:

Sub-selects:

session.persist(product);
assertEquals(0L, session.createQuery(
    "select count(*) " +
    "from User u " +
    "where u.favoriteColor in (select distinct(p.color) from Product p)").uniqueResult());

Resulting in a proper flush call:

Query:{[insert into product (color, id) values (?, ?)][Blue,2d9d1b4f-eaee-45f1-a480-120eb66da9e8]}
Query:{[select count(*) as col_0_0_ from user user0_ where user0_.favoriteColor in (select distinct product1_.color from product product1_)][]}

Or theta-style joins:

session.persist(product);
assertEquals(0L, session.createQuery(
    "select count(*) " +
    "from User u, Product p " +
    "where u.favoriteColor = p.color").uniqueResult());

Triggering the expected flush:

Query:{[insert into product (color, id) values (?, ?)][Blue,4af0b843-da3f-4b38-aa42-1e590db186a9]}
Query:{[select count(*) as col_0_0_ from user user0_ cross join product product1_ where user0_.favoriteColor=product1_.color][]}

The reason why this works is that entity queries are parsed and translated to SQL queries. Hibernate cannot reference a non-existing table, therefore it always knows which database tables an HQL/JPQL query will hit. So Hibernate is only aware of those tables we explicitly reference in our HQL query. If the current pending DML statements imply database triggers or database-level cascading, Hibernate won't be aware of those. So even for HQL, the AUTO flush mode can cause consistency issues.

AUTO flush and native SQL queries

When it comes to native SQL queries, things get much more complicated. Hibernate cannot parse SQL queries, because it only supports a limited database query syntax. Many database systems offer proprietary features that are beyond Hibernate's entity query capabilities. Querying the Product table with a native SQL query is not going to trigger the flush, causing an inconsistency issue:

Product product = new Product();
session.persist(product);
assertNull(session.createSQLQuery("select id from product").uniqueResult());

DEBUG [main]: o.h.e.i.AbstractSaveEventListener - Generated identifier: 718b84d8-9270-48f3-86ff-0b8da7f9af7c, using strategy: org.hibernate.id.UUIDGenerator
Query:{[select id from product][]}
Query:{[insert into product (color, id) values (?, ?)][12,718b84d8-9270-48f3-86ff-0b8da7f9af7c]}

The newly persisted Product was only inserted during the transaction commit, because the native SQL query didn't trigger the flush. This is a major consistency problem, one that's hard to debug or even foresee by many developers. That's one more reason for always inspecting auto-generated SQL statements.

The same behaviour is observed even for named native queries:

@NamedNativeQueries(
    @NamedNativeQuery(name = "product_ids", query = "select id from product")
)

assertNull(session.getNamedQuery("product_ids").uniqueResult());

So even if the SQL query is pre-loaded, Hibernate won't extract the associated query space to match it against the pending DML statements.

Overruling the current flush strategy

Even if the current Session defines a default flush strategy, you can always override it on a query basis.

Query flush mode

The ALWAYS mode is going to flush the persistence context before any query execution (HQL or SQL). This time, Hibernate applies no optimization and all pending entity state transitions are going to be synchronized with the current database transaction.
assertEquals(product.getId(), session.createSQLQuery("select id from product")
    .setFlushMode(FlushMode.ALWAYS)
    .uniqueResult());

Instructing Hibernate which tables should be synchronized

You could also add a synchronization rule to your currently executing SQL query. Hibernate will then know what database tables need to be synchronized prior to executing the query. This is useful for second-level caching as well.

assertEquals(product.getId(), session.createSQLQuery("select id from product")
    .addSynchronizedEntityClass(Product.class)
    .uniqueResult());

Conclusion

The AUTO flush mode is tricky, and fixing consistency issues on a query basis is a maintainer's nightmare. If you decide to add a database trigger, you'll have to check all Hibernate queries to make sure they won't end up running against stale data.

My suggestion is to use the ALWAYS flush mode, even if the Hibernate authors warned us that:

    this strategy is almost always unnecessary and inefficient.

Inconsistency is much more of an issue than some occasional premature flushes. While mixing DML operations and queries may cause unnecessary flushing, this situation is not that difficult to mitigate. During a session transaction, it's best to execute queries at the beginning (when no pending entity state transitions are to be synchronized) and towards the end of the transaction (when the current persistence context is going to be flushed anyway). The entity state transition operations should be pushed towards the end of the transaction, trying to avoid interleaving them with query operations (thereby preventing a premature flush trigger).

Reference: The dark side of Hibernate AUTO flush from our JCG partner Vlad Mihalcea at the Vlad Mihalcea's Blog blog.
Reference: The dark side of Hibernate AUTO flush from our JCG partner Vlad Mihalcea at the Vlad Mihalcea's Blog blog....

java-logo

Understanding volatile via example

We have spent the last couple of months stabilizing the lock detection functionality in Plumbr. During this work we have stumbled into many tricky concurrency issues. Many of the issues are unique, but one particular type keeps appearing repeatedly. You might have guessed it – misuse of the volatile keyword. We have detected and solved a bunch of issues where the extensive usage of volatile made arbitrary parts of the application slower, extended lock holding times and eventually brought the JVM to its knees. Or vice versa – granting too liberal an access policy triggered some nasty concurrency issues.

I guess every Java developer recalls their first steps in the language. Days and days spent with manuals and tutorials. Those tutorials all had a list of keywords, among which volatile was one of the scariest. As days passed and more and more code was written without the need for this keyword, many of us forgot the existence of volatile. Until the production systems started either corrupting data or dying in an unpredictable manner. Debugging such cases forced some of us to actually understand the concept. But I bet it was not a pleasant lesson to have, so maybe I can save some of you some time by shedding light upon the concept via a simple example.

Example of volatile in action

The example simulates a bank office – the type of bank office where you pick a queue number from a ticketing machine and then wait for the invite when the queue in front of you has been processed. To simulate such an office, we have created the following example, consisting of two threads.

The first of the two threads is implemented as CustomerInLine. This is a thread doing nothing but waiting until the value in NEXT_IN_LINE matches the customer's ticket. The ticket number is hardcoded to be #4. When the time arrives (NEXT_IN_LINE >= 4), the thread announces that the waiting is over and finishes. This simulates a customer arriving at the office with some customers already in the queue.

The queuing implementation is in the Queue class, which runs a loop calling for the next customer and then simulating work with the customer by sleeping 200ms per customer. After calling the next customer, the value stored in the class variable NEXT_IN_LINE is increased by one.

public class Volatility {

    static int NEXT_IN_LINE = 0;

    public static void main(String[] args) throws Exception {
        new CustomerInLine().start();
        new Queue().start();
    }

    static class CustomerInLine extends Thread {
        @Override
        public void run() {
            while (true) {
                if (NEXT_IN_LINE >= 4) {
                    break;
                }
            }
            System.out.format("Great, finally #%d was called, now it is my turn\n", NEXT_IN_LINE);
        }
    }

    static class Queue extends Thread {
        @Override
        public void run() {
            while (NEXT_IN_LINE < 11) {
                System.out.format("Calling for the customer #%d\n", NEXT_IN_LINE++);
                try {
                    Thread.sleep(200);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}

So, when running this simple program, you might expect the output to be similar to the following:

Calling for the customer #1
Calling for the customer #2
Calling for the customer #3
Calling for the customer #4
Great, finally #4 was called, now it is my turn
Calling for the customer #5
Calling for the customer #6
Calling for the customer #7
Calling for the customer #8
Calling for the customer #9
Calling for the customer #10

As it appears, the assumption is wrong. Instead, you will see the Queue processing through the whole list of customers, and the hapless thread simulating customer #4 never announces that it has seen the invite.
What happened, and why is the customer still sitting there waiting endlessly?

Analyzing the outcome

What you are facing here is a JIT optimization applied to the code, caching the access to the NEXT_IN_LINE variable. Both threads get their own local copy, and the CustomerInLine thread never sees the Queue actually increasing the value of the variable. If you now think this is some kind of horrible bug in the JVM, then you are not fully correct – compilers are allowed to do this in order to avoid rereading the value each time. So you gain a performance boost, but at a cost – if other threads change the state, the thread caching the copy does not know it and operates using the outdated value.

This is precisely the case for volatile. With this keyword in place, the compiler is warned that a particular state is volatile and the code is forced to reread the value on each iteration of the loop. Equipped with this knowledge, we have a simple fix in place – just change the declaration of NEXT_IN_LINE to the following, and your customers will not be left sitting in the queue forever:

static volatile int NEXT_IN_LINE = 0;

For those who are happy with just understanding the use case for volatile, you are good to go. Just be aware of the extra cost attached – when you start declaring everything to be volatile, you are forcing the CPU to forget about local caches and to go straight into main memory, slowing down your code and clogging the memory bus.
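As an aside not covered in the original example: volatile only solves visibility. If several threads were incrementing the counter concurrently, NEXT_IN_LINE++ would still be a non-atomic read-modify-write. A minimal sketch of the same scenario built on java.util.concurrent.atomic.AtomicInteger, which provides both guarantees:

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {

    // Visibility comparable to volatile, plus atomic read-modify-write,
    // which a plain volatile ++ would not give you with multiple writers
    static final AtomicInteger NEXT_IN_LINE = new AtomicInteger(0);

    public static void main(String[] args) {
        new Thread(() -> {
            // Busy-wait: every get() observes the latest published value
            while (NEXT_IN_LINE.get() < 4) { }
            System.out.println("Great, finally #4 was called, now it is my turn");
        }).start();

        new Thread(() -> {
            while (NEXT_IN_LINE.get() < 11) {
                System.out.format("Calling for the customer #%d%n", NEXT_IN_LINE.getAndIncrement());
                try {
                    Thread.sleep(200);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }).start();
    }
}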
Volatile under the hood

For those who wish to understand the issue in more detail, stay with me. To see what is happening underneath, let's turn on debugging to see the assembly code generated from the bytecode by the JIT. This is achieved by specifying the following JVM options:

-XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly

Running the program with these options turned on, both with and without volatile, gives us the following important insight.

Without the volatile keyword, on instruction 0x00000001085c1c5a we have a comparison between two values. When the comparison fails, we continue through 0x00000001085c1c60 to 0x00000001085c1c66, which jumps back to 0x00000001085c1c60, and an infinite loop is born:

0x00000001085c1c56: mov 0x70(%r10),%r11d
0x00000001085c1c5a: cmp $0x4,%r11d
0x00000001085c1c5e: jge 0x00000001085c1c68  ; OopMap{off=64}
                                            ;*if_icmplt
                                            ; - Volatility$CustomerInLine::run@4 (line 14)
0x00000001085c1c60: test %eax,-0x1c6ac66(%rip)  # 0x0000000106957000
                                            ;*if_icmplt
                                            ; - Volatility$CustomerInLine::run@4 (line 14)
                                            ; {poll}
0x00000001085c1c66: jmp 0x00000001085c1c60  ;*getstatic NEXT_IN_LINE
                                            ; - Volatility$CustomerInLine::run@0 (line 14)
0x00000001085c1c68: mov $0xffffff86,%esi

With the volatile keyword in place, we can see that on instruction 0x000000010a5c1c40 we load the value into a register, and on 0x000000010a5c1c4a we compare it to our guard value of 4. If the comparison fails, we jump back from 0x000000010a5c1c4e to 0x000000010a5c1c40, loading the value again for the new check. This ensures that we will see the changed value of the NEXT_IN_LINE variable:

0x000000010a5c1c36: data32 nopw 0x0(%rax,%rax,1)
0x000000010a5c1c40: mov 0x70(%r10),%r8d  ; OopMap{r10=Oop off=68}
                                         ;*if_icmplt
                                         ; - Volatility$CustomerInLine::run@4 (line 14)
0x000000010a5c1c44: test %eax,-0x1c1cc4a(%rip)  # 0x00000001089a5000
                                         ; {poll}
0x000000010a5c1c4a: cmp $0x4,%r8d
0x000000010a5c1c4e: jl 0x000000010a5c1c40  ;*if_icmplt
                                         ; - Volatility$CustomerInLine::run@4 (line 14)
0x000000010a5c1c50: mov $0x15,%esi

Now, hopefully this explanation will save you from a couple of nasty bugs.

Reference: Understanding volatile via example from our JCG partner Nikita Salnikov Tarnovski at the Plumbr Blog blog....