The most important factor in software decay

Do you have big balls of mud? Here’s an experiment to amaze your friends. You probably listen to music on your phone via some sort of headset. The headset we shall consider here consists of two earbuds (in-ear pieces, rather than headphones, which cover the ears) connected via wires to a jack which plugs into the phone itself. Disconnect your headset from your phone. Show it to your friends. Figure 1 shows a typical example of such a headset.

As you hold the headset wires twixt thumb and finger, shake them about until your friends are bored to tears. Note that the wires may temporarily become tangled with one another but invariably return to the untangled state of figure 1. Now, carefully fold the headset into your trouser pocket and take a long walk, dragging a friend with you to witness that you do not touch the headset further. Finally, return to your friends and carefully extract the headset from your trouser pocket. TA-DA!! The wires will have mysteriously collapsed into a tangled, disordered mess of kinks, knots and twists of the most bewildering complexity and inventiveness – all without your ever having touched them! Figure 2 shows a sorry headset fished from a trouser pocket.

Don’t tell anyone, but here’s the secret. Why does this happen? Why don’t the wires remain untangled? To answer this, we must look at the three actors that take to the stage in both scenarios, hereafter named the “Out-of-pocket” and “In-pocket” scenarios.

First is thermal fluctuation. This refers to the shaking of the headset, both explicitly by jiggling it up and down in the out-of-pocket scenario, and by the slower action of the stride in the in-pocket scenario. Both mechanisms subject the headset to pulses of energy causing its parts to move at random.

Second is gravity.
Gravity tends to pull all parts of the headset down and, as the jack and earbuds reside at the wire ends (and as the headset is held roughly half-way along its length in figure 1), the earbuds and jack tend to position themselves towards the bottom of figure 1.

Third is spatial extension. Perhaps the greatest difference between the two scenarios is the size of the arena in which they operate. In the out-of-pocket scenario, the holding aloft of the headset allows gravity to stretch the headset wires to a great extent. Knot-formation relies on two sections of wire coming into contact. Such contacts simply become less probable with increasing volume. In the confined space of the in-pocket scenario, with wire pressed on wire, knots become far more likely. (Friction also plays a role, with wind resistance easily overcome by gravity in the out-of-pocket scenario but with the cloth-on-cloth surface resistance of the in-pocket scenario holding wires in contact for longer than might otherwise occur, again increasing the probability of knotting.)

Thus, consider the headset in the out-of-pocket scenario. Tugging on the headset will cause it to move, introducing several new bends and twists in the wires. Throughout, however, gravity constantly pulls the earbuds and wires downwards from the point at which they are held, so that as the energy of the tug dissipates, gravity will already have begun ironing out the loose bends. In the in-pocket scenario, however, each stride will deliver a weak energy impulse to the headset, enough to move the headset around just a tiny amount. Confined within the pocket volume and unsupported at any point from above, the wires do not stretch out under the influence of gravity but may even pool at the pocket’s bottom, where they will writhe over one another, producing optimal knot-formation conditions.

Despite the tediousness of such a description, we can view these events from another perspective: the perspective of microstate versus macrostate.
Now things become a little more interesting.

Microstate and macrostate

We can think of a “microstate” of the headset not, as might be expected, as the state of a small part of the headset, but rather as its complete configuration: a precise description, at a single instant, of the position of its earbuds, jack, and of the entire length and disposition of its wires. In contrast to this detail, a “macrostate” is a broad and undetailed description of the headset. For reasons that shall be explained in a moment, the “order” of the headset interests us most – that is, how messy and tangled it appears – and hence we shall use the least amount of information possible to describe this order. We shall use the single bit of information – yes or no – that answers the question, “Does this headset look ordered?”

We can say that from an initial microstate of the headset, A, only a certain set of microstates lies thermally accessible, in that such microstates deviate from A to a degree allowable by the upcoming energy impulse. Given the randomness of this energy impulse, there is an equal probability of entering any one of those accessible microstates; let us say that the system happens to transition from microstate A to microstate B. In the out-of-pocket scenario, gravity constantly tries to pull the headset back from microstate B to microstate A (or something like it), so the set of thermally accessible microstates available from microstate B will be slightly biased, because gravity will prevent the system reaching states as far from B as B is from A. In the in-pocket scenario, however, once the headset arrives in microstate B, the new set of accessible microstates will contain as many microstates that move back towards A as away from it. And given that the choice of the new microstate is random, the in-pocket scenario will allow the headset to enter many microstates inaccessible to the out-of-pocket scenario.
Imagine for a moment that you could count all the microstates in which the headset could find itself; that is, you could take a snapshot of every possible position that the headset wires could assume. If we focus on just one wire, we might say that the wire looks ordered when it forms a straight line: it contains 0 bends. It still looks ordered with 1 bend, or perhaps 2, or 100. But above a certain number of bends it begins to look disordered. This simply accords with a casual understanding of that term. Yet how many bends can a wire support? Perhaps thousands. Or tens of thousands. Or millions. The point is that the vast majority of microstates of the wire will be what we call “disordered,” and only a tiny proportion will be “ordered.” Thus it is not that there are fewer ways to make a disordered headset when it is held aloft than when it sits in a pocket, but that putting a headset in a pocket allows it to randomly explore a far larger number of its available microstates, and as the vast majority of these microstates correspond to the disordered macrostate, the in-pocket headset is overwhelmingly likely to end up disordered.

What on earth has this got to do with software?

Software structure, too, exhibits this microstate/macrostate duality. The package structure of a Java program reveals packages connected to other packages by dependencies. This structure at any given time represents one microstate of the system. Programmers add features and make updates, acting as random thermal fluctuations, changing the package structure slightly, nudging the system into new microstates. (Programmers do not, of course, make random updates to a code-base, in that they do not pick a text character at random from the source code and flip it. Nevertheless, no one can accurately predict in advance a year’s worth of updates in any significant code-base.
It is in this sense – in this unpredictability and essential patternlessness – that the programmer’s coding can be modeled as a random input to the program.) Pulling back from the detail of the individual microstates, we evaluate the macrostate of the system by asking, “Does this program look ordered?” If ordered, we say the program is “well structured;” otherwise it is “poorly structured.” Small programs, with few inter-package dependencies, usually appear ordered. As programs grow, however, there seems inevitably to come a point when dependencies between packages become dense and overstretched and start looping back on themselves: such package structures truly merit the description “disordered.”

Brian Foote and Joseph Yoder memorably labeled any such disordered system a Big Ball of Mud, claiming, “A big ball of mud is haphazardly structured, sprawling, sloppy, duct-tape and bailing wire, spaghetti code jungle.” They also claimed big balls of mud to be the de-facto software structure rather than the exception, and our headset musings show why this may be so. With Java packages free to depend on one another without restriction, there are far more ways to produce a disordered set of inter-dependent packages than an ordered set of those same packages. So, given the thermal fluctuation of programmer input, the program will find itself propelled into a series of randomly selected microstates, and as most microstates correspond to a disordered macrostate, most systems will end up disordered.

Figure 3 shows the package structures of four programs depicted as four spoiklin diagrams, in which each circle represents a package, each straight line a dependency from a package above to one below, and each curved line a dependency from a package below to one above. Despite the rather miniaturized graphics, most programmers would consider only one of these programs to be well-structured.
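The headset argument can even be sketched numerically. The toy simulation below is my own illustration (not from the article): it generates random dependency graphs over a handful of "packages", where each ordered pair of packages depends on one another with some fixed probability, and counts how many of those randomly reached microstates contain a dependency cycle – that is, how many correspond to the disordered macrostate.

```java
import java.util.Random;

// Toy Monte Carlo illustration (not from the article): almost every random
// dependency graph of non-trivial density contains a cycle, i.e. almost
// every randomly reachable "microstate" is a disordered one.
public class MudSimulation {

    // Detect a cycle via depth-first search over the dependency matrix.
    static boolean hasCycle(boolean[][] dep) {
        int[] state = new int[dep.length]; // 0 = unvisited, 1 = on stack, 2 = done
        for (int i = 0; i < dep.length; i++) {
            if (state[i] == 0 && dfs(dep, i, state)) return true;
        }
        return false;
    }

    static boolean dfs(boolean[][] dep, int v, int[] state) {
        state[v] = 1;
        for (int w = 0; w < dep.length; w++) {
            if (dep[v][w]) {
                if (state[w] == 1) return true;            // back edge: cycle found
                if (state[w] == 0 && dfs(dep, w, state)) return true;
            }
        }
        state[v] = 2;
        return false;
    }

    public static void main(String[] args) {
        Random random = new Random(42);   // fixed seed for repeatability
        int n = 10, trials = 10_000;
        double p = 0.3;                   // probability of each possible dependency
        int cyclic = 0;
        for (int t = 0; t < trials; t++) {
            boolean[][] dep = new boolean[n][n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (i != j && random.nextDouble() < p) dep[i][j] = true;
            if (hasCycle(dep)) cyclic++;
        }
        System.out.printf("%d of %d random dependency graphs contain a cycle%n",
                cyclic, trials);
    }
}
```

Running this shows that essentially every random graph is cyclic: the ordered microstates are a vanishing minority, exactly as with the wire's bends.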
All of which would remain a mere aesthetic curiosity except for one crucial point: the structure of a program bears directly on the predictability of update costs, and often on the actual costs themselves. Well-structured programs clearly show which packages use the services provided by other packages, thus simplifying prediction of the costs involved in any particular package change. Poorly structured programs, on the other hand, suffocate beneath so many choking dependencies that even identifying the packages impacted by a particular change becomes a chore, clouding cost predictability and often causing that cost to grow disproportionately large.

In figure 3, program B was designed using radial encapsulation, which allows package dependencies only in the direction of the top-level package. Such a restriction mirrors the holding aloft of a headset, allowing gravity to stretch it out and keep it spontaneously ordered. It makes the forming of a big ball of mud as improbable as a headset held aloft suddenly collapsing into a sequence of knots. The precise mechanism, however, is unimportant. What matters is that the program was designed according to systemic principles that aggressively restrict the number of microstates into which the program might wander, whilst ensuring that those inhabitable microstates correspond to an ordered, well-structured macrostate. No one designs poorly structured programs to be poorly structured. Instead, programs become poorly structured because the structuring principles according to which they are built do not prevent them from becoming so.
Perhaps programmers should ask not, “Why are big balls of mud the de-facto program structure?” but, “Why are good structuring principles ignored?”

Summary

Some find words like “thermodynamics,” “equilibrium,” and “entropy” intimidating, so posts exploring these concepts should delay their introduction.

Reference: The most important factor in software decay from our JCG partner Edmund Kirwan at the A blog about software blog.

Autoboxing, Unboxing, and NoSuchMethodError

J2SE 5 introduced numerous features to the Java programming language. One of these features is autoboxing and unboxing, a feature that I use almost daily without even thinking about it. It is often convenient (especially when used with collections), but every once in a while it leads to some nasty surprises, “weirdness,” and “madness.” In this blog post, I look at a rare (but interesting to me) case of NoSuchMethodError resulting from mixing classes compiled with Java versions before autoboxing/unboxing with classes compiled with Java versions that include autoboxing/unboxing.

The next code listing shows a simple Sum class that could have been written before J2SE 5. It has overloaded “add” methods that accept different primitive numeric data types, and each instance of Sum simply adds all numbers provided to it via any of its overloaded “add” methods.

Sum.java (pre-J2SE 5 Version)

```java
public class Sum
{
   private double sum = 0;

   public void add(short newShort)
   {
      sum += newShort;
   }

   public void add(int newInteger)
   {
      sum += newInteger;
   }

   public void add(long newLong)
   {
      sum += newLong;
   }

   public void add(float newFloat)
   {
      sum += newFloat;
   }

   public void add(double newDouble)
   {
      sum += newDouble;
   }

   public String toString()
   {
      return String.valueOf(sum);
   }
}
```

Before unboxing was available, any clients of the above Sum class would need to provide primitives to these “add” methods or, if they had reference equivalents of the primitives, would need to convert the references to their primitive counterparts before calling one of the “add” methods. The onus was on the client code to do this conversion from reference type to corresponding primitive type before calling these methods. Examples of how this might be accomplished are shown in the next code listing.
No Unboxing: Client Converting References to Primitives

```java
private static String sumReferences(
   final Long longValue, final Integer intValue, final Short shortValue)
{
   final Sum sum = new Sum();
   if (longValue != null)
   {
      sum.add(longValue.longValue());
   }
   if (intValue != null)
   {
      sum.add(intValue.intValue());
   }
   if (shortValue != null)
   {
      sum.add(shortValue.shortValue());
   }
   return sum.toString();
}
```

J2SE 5’s autoboxing and unboxing feature was intended to address this extraneous effort required in a case like this. With unboxing, client code could call the above “add” methods with reference types corresponding to the expected primitive types, and the references would be automatically “unboxed” to the primitive form so that the appropriate “add” methods could be invoked. Section 5.1.8 (“Unboxing Conversion”) of The Java Language Specification explains which primitives the supplied numeric reference types are converted to in unboxing, and Section 5.1.7 (“Boxing Conversion”) of that same specification lists the reference types that are autoboxed from each primitive in autoboxing.

In this example, unboxing reduced effort on the client’s part in terms of converting reference types to their corresponding primitive counterparts before calling Sum‘s “add” methods, but it did not completely free the client from needing to process the number values before providing them. Because reference types can be null, it is possible for a client to provide a null reference to one of Sum‘s “add” methods and, when Java attempts to automatically unbox that null to its corresponding primitive, a NullPointerException is thrown. The next code listing adapts that from above to indicate how the conversion of reference to primitive is no longer necessary on the client side, but checking for null is still necessary to avoid the NullPointerException.
Unboxing Automatically Converts Reference to Primitive: Still Must Check for Null

```java
private static String sumReferences(
   final Long longValue, final Integer intValue, final Short shortValue)
{
   final Sum sum = new Sum();
   if (longValue != null)
   {
      sum.add(longValue);
   }
   if (intValue != null)
   {
      sum.add(intValue);
   }
   if (shortValue != null)
   {
      sum.add(shortValue);
   }
   return sum.toString();
}
```

Requiring client code to check its references for null before calling the “add” methods on Sum may be something we want to avoid when designing our API. One way to remove that need is to change the “add” methods to explicitly accept the reference types rather than the primitive types. Then, the Sum class could check for null before explicitly or implicitly (via unboxing) dereferencing it. The revised Sum class with this changed and more client-friendly API is shown next.

Sum Class with “add” Methods Expecting References Rather than Primitives

```java
public class Sum
{
   private double sum = 0;

   public void add(Short newShort)
   {
      if (newShort != null)
      {
         sum += newShort;
      }
   }

   public void add(Integer newInteger)
   {
      if (newInteger != null)
      {
         sum += newInteger;
      }
   }

   public void add(Long newLong)
   {
      if (newLong != null)
      {
         sum += newLong;
      }
   }

   public void add(Float newFloat)
   {
      if (newFloat != null)
      {
         sum += newFloat;
      }
   }

   public void add(Double newDouble)
   {
      if (newDouble != null)
      {
         sum += newDouble;
      }
   }

   public String toString()
   {
      return String.valueOf(sum);
   }
}
```

The revised Sum class is more client-friendly because it allows the client to pass a reference to any of its “add” methods without concern for whether the passed-in reference is null or not. However, changing the Sum class’s API like this can lead to NoSuchMethodErrors if either class involved (the client class or one of the versions of the Sum class) is compiled with different versions of Java.
In particular, if the client code uses primitives and is compiled with JDK 1.4 or earlier, and the Sum class is the latest version shown (expecting references instead of primitives) and is compiled with J2SE 5 or later, a NoSuchMethodError like the following will be encountered (the “S” indicates the “add” method expecting a primitive short and the “V” indicates that the method returns void).

```
Exception in thread "main" java.lang.NoSuchMethodError: Sum.add(S)V
	at Main.main(Main.java:9)
```

On the other hand, if the client is compiled with J2SE 5 or later with primitive values being supplied to Sum as in the first example (pre-unboxing), and the Sum class is compiled in JDK 1.4 or earlier with “add” methods expecting primitives, a different version of the NoSuchMethodError is encountered. Note that the Short reference is cited here.

```
Exception in thread "main" java.lang.NoSuchMethodError: Sum.add(Ljava/lang/Short;)V
	at Main.main(Main.java:9)
```

There are several observations and reminders to Java developers that come from this.

- Classpaths are important:
  - Java .class files compiled with the same version of Java (same -source and -target) would have avoided the particular problem in this post.
  - Classpaths should be as lean as possible to reduce/avoid the possibility of picking up stray “old” class definitions.
  - Build “clean” targets and other build operations should clean past artifacts thoroughly, and builds should rebuild all necessary application classes.
- Autoboxing and unboxing are well-intentioned and often highly convenient, but they can lead to surprising issues if not kept in mind to some degree. In this post, the need to check for null (or to know that the object is non-null) remains in situations where implicit dereferencing will take place as a result of unboxing.
- It’s a matter of API style taste whether to allow clients to pass nulls and have the serving class check for null on their behalf.
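The two descriptors in the errors above – (S)V versus (Ljava/lang/Short;)V – name unrelated methods as far as the JVM is concerned; autoboxing exists only at compile time. A small standalone sketch (class names are my own, not from the post) makes the distinction visible via reflection, where the same lookup failure surfaces as a NoSuchMethodException:

```java
public class DescriptorDemo
{
    // Pre-J2SE 5 style: descriptor add(S)V
    static class PrimitiveSum { public void add(short s) {} }

    // Reference style: descriptor add(Ljava/lang/Short;)V
    static class WrapperSum { public void add(Short s) {} }

    public static void main(String[] args) throws Exception
    {
        // The primitive version resolves an add(short) lookup...
        System.out.println(PrimitiveSum.class.getMethod("add", short.class));
        // ...but the wrapper version does not: the JVM performs no boxing
        // when matching method descriptors.
        try
        {
            WrapperSum.class.getMethod("add", short.class);
            System.out.println("WrapperSum resolves add(short)");
        }
        catch (NoSuchMethodException e)
        {
            System.out.println("WrapperSum has no add(short)");
        }
    }
}
```

The runtime lookup fails for exactly the same reason the NoSuchMethodError appears: the call site recorded at compile time names one descriptor, and the class on the runtime classpath provides the other.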
In an industrial application, I would have stated whether null was allowed or not for each “add” method parameter with @param in each method’s Javadoc comment. In other situations, one might want to leave it the responsibility of the caller to ensure any passed-in reference is non-null, and would be content throwing a NullPointerException if the caller did not obey that contract (which should also be specified in the method’s Javadoc).

We typically see NoSuchMethodError when a method is completely removed, when we access an old class from before that method was available, or when a method’s API has changed in terms of the types or number of its parameters. In a day when Java autoboxing and unboxing are largely taken for granted, it can be easy to think that changing a method from taking a primitive to taking the corresponding reference type won’t affect anything, but even that change can lead to an exception if not all classes involved are built on a version of Java supporting autoboxing and unboxing.

One way to determine the version of Java against which a particular .class file was compiled is to use javap -verbose and to look in the javap output for the “major version:” entry. In the classes I used in my examples in this post (compiled against JDK 1.4 and Java SE 8), the “major version” entries were 48 and 52 respectively (the General Layout section of the Wikipedia entry on Java class file lists the major versions).

Fortunately, the issue demonstrated with examples and text in this post is not that common, thanks to builds typically cleaning all artifacts and rebuilding code on a relatively continuous basis. However, there are cases where this could occur, and one of the most likely such situations is when an old JAR file is used accidentally because it lies in wait on the runtime classpath.

Reference: Autoboxing, Unboxing, and NoSuchMethodError from our JCG partner Dustin Marx at the Inspired by Actual Events blog.
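As a footnote to the javap -verbose tip: the class file version can also be read programmatically. This small sketch is my own (not from the post); it reads the first eight bytes of its own .class file, which the class file format defines as the 0xCAFEBABE magic number followed by the minor and major version numbers.

```java
import java.io.DataInputStream;
import java.io.InputStream;

public class MajorVersion
{
    public static void main(String[] args) throws Exception
    {
        // Read this class's own .class file from the classpath.
        try (InputStream raw = MajorVersion.class.getResourceAsStream("MajorVersion.class");
             DataInputStream in = new DataInputStream(raw))
        {
            int magic = in.readInt();            // always 0xCAFEBABE
            int minor = in.readUnsignedShort();  // minor_version
            int major = in.readUnsignedShort();  // major_version (48 = 1.4, 52 = 8)
            System.out.printf("magic=%x minor=%d major=%d%n", magic, minor, major);
        }
    }
}
```

Running it prints the major version corresponding to whichever -target the class was compiled with.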

The Emergence of DevOps and the Fall of the Old Order

Software Engineering has always been dependent on IT operations to take care of the deployment of software to a production environment. In the various roles that I have been in, IT operations has come under various monikers, from “Data Center” to “Web Services”. An organisation delivering software used to be able to separate these roles cleanly. Software Engineering and IT Operations were able to work in a somewhat isolated manner, with neither having the need to really know the knowledge that the other holds in their respective domains. Software Engineering would communicate with IT operations through “Deployment Requests”, usually after ensuring that adequate tests had been conducted on their software.

However, the traditional way of organising departments in a software delivery organisation is starting to seem obsolete. The reason is that software infrastructure has moved in the direction of being “agile”. The same buzzword that had gripped the software development world has started to exert its effect on IT infrastructure. The evidence of this seismic shift is seen in the fastest growing (and most disruptive) companies today. Companies like Netflix, WhatsApp and many other tech companies have gone onto what we would call “cloud” infrastructure, a space dominated by Amazon Web Services. There has been huge progress in the virtualization of hardware resources. This has in turn allowed companies like AWS and Rackspace to convert their server farms into discrete units of computing resources that can be diced, parcelled and redistributed as a service to their customers in an efficient manner. It is inevitable that all these configurable “hardware” resources will eventually be some form of “software” resource that can be maximally utilized by businesses. This has in turn bred a whole new skill set that is required to manage, control and deploy this Infrastructure as a Service (IaaS).
Some of the tools used with these services include provisioning tools like Chef or Puppet. Together with the software APIs provided by the IaaS vendors, infrastructure can be brought up or down as required. The availability of large quantities of computing resources, without all the upfront costs associated with capital expenditure on hardware, has led to an explosion in the number of startups trying to solve problems of all kinds imaginable, and coupled with the prevalence of powerful mobile devices this has led to a digital renaissance for many industries.

However, this renaissance has also led to the demand for a different kind of software organisation. As someone who has been part of software engineering and development, I am witness to the rapid evolution of the profession. The increasing scale of data and processing needs requires a complete shift in paradigm from the old software delivery organisation to a new one that melds software engineering and IT operations together. This is where the role of “DevOps” comes into the picture. Recruiting DevOps engineers and restructuring IT operations around such roles enables businesses to be agile. Some businesses whose survival depends on the availability of their software on the Internet will find it imperative to model their software delivery organisation around DevOps. Having the ability to capitalise on software automation to deploy infrastructure within minutes allows a business to scale up quickly. Being able to practise continuous delivery of software allows features to get to market quickly and creates a feedback loop through which a business can improve itself.
We are witness to a new world order, and software delivery organisations that cannot successfully transition to this Brave New World will find themselves falling behind quickly, especially when a competitor is able to scale and deliver software faster, more reliably and with fewer personnel.

Reference: The Emergence of DevOps and the Fall of the Old Order from our JCG partner Lim Han at the Developers Corner blog.

Publish JAR artifact using Gradle to Artifactory

So I have wasted (invested) a day or two just to find out how to publish a JAR using Gradle to a locally running Artifactory server. I used the Gradle Artifactory plugin to do the publishing. I was lost in an endless loop of including various versions of various plugins and executing all sorts of tasks. Yes, I’d read the documentation before. It’s just wrong. Perhaps it got better in the meantime.

Executing the following uploaded build info only. No artifact (JAR) was published.

```
$ gradle artifactoryPublish
:artifactoryPublish
Deploying build info to: http://localhost:8081/artifactory/api/build
Build successfully deployed. Browse it in Artifactory under http://localhost:8081/artifactory/webapp/builds/scala-gradle-artifactory/1408198981123/2014-08-16T16:23:00.927+0200/

BUILD SUCCESSFUL

Total time: 4.681 secs
```

This guy has saved me, I wanted to kiss him: StackOverflow – upload artifact to artifactory using gradle.

I assume that you already have Gradle and Artifactory installed. I had a Scala project, but that doesn’t matter; Java should be just fine. I ran Artifactory locally on port 8081. I have also created a new user named devuser who has permissions to deploy artifacts.
Long story short, this is my final build.gradle script file:

```groovy
buildscript {
    repositories {
        maven {
            url 'http://localhost:8081/artifactory/plugins-release'
            credentials {
                username = "${artifactory_user}"
                password = "${artifactory_password}"
            }
            name = "maven-main-cache"
        }
    }
    dependencies {
        classpath "org.jfrog.buildinfo:build-info-extractor-gradle:3.0.1"
    }
}

apply plugin: 'scala'
apply plugin: 'maven-publish'
apply plugin: "com.jfrog.artifactory"

version = '1.0.0-SNAPSHOT'
group = 'com.buransky'

repositories {
    add buildscript.repositories.getByName("maven-main-cache")
}

dependencies {
    compile 'org.scala-lang:scala-library:2.11.2'
}

tasks.withType(ScalaCompile) {
    scalaCompileOptions.useAnt = false
}

artifactory {
    contextUrl = "${artifactory_contextUrl}"
    publish {
        repository {
            repoKey = 'libs-snapshot-local'
            username = "${artifactory_user}"
            password = "${artifactory_password}"
            maven = true
        }
        defaults {
            publications('mavenJava')
        }
    }
}

publishing {
    publications {
        mavenJava(MavenPublication) {
            from components.java
        }
    }
}
```

I have stored the Artifactory context URL and credentials in the ~/.gradle/gradle.properties file, which looks like this:

```
artifactory_user=devuser
artifactory_password=devuser
artifactory_contextUrl=http://localhost:8081/artifactory
```

Now when I run the same task again, it’s what I wanted.
Both the Maven POM file and the JAR archive are deployed to Artifactory:

```
$ gradle artifactoryPublish
:generatePomFileForMavenJavaPublication
:compileJava UP-TO-DATE
:compileScala UP-TO-DATE
:processResources UP-TO-DATE
:classes UP-TO-DATE
:jar UP-TO-DATE
:artifactoryPublish
Deploying artifact: http://localhost:8081/artifactory/libs-snapshot-local/com/buransky/scala-gradle-artifactory/1.0.0-SNAPSHOT/scala-gradle-artifactory-1.0.0-SNAPSHOT.pom
Deploying artifact: http://localhost:8081/artifactory/libs-snapshot-local/com/buransky/scala-gradle-artifactory/1.0.0-SNAPSHOT/scala-gradle-artifactory-1.0.0-SNAPSHOT.jar
Deploying build info to: http://localhost:8081/artifactory/api/build
Build successfully deployed. Browse it in Artifactory under http://localhost:8081/artifactory/webapp/builds/scala-gradle-artifactory/1408199196550/2014-08-16T16:26:36.232+0200/

BUILD SUCCESSFUL

Total time: 5.807 secs
```

Happy end!

Reference: Publish JAR artifact using Gradle to Artifactory from our JCG partner Rado Buransky at the Rado Buransky’s Blog blog.

The dark side of Hibernate AUTO flush

Introduction

Now that I have described the basics of JPA and Hibernate flush strategies, I can continue unraveling the surprising behavior of Hibernate’s AUTO flush mode.

Not all queries trigger a Session flush

Many would assume that Hibernate always flushes the Session before any executing query. While this might have been a more intuitive approach, and probably closer to JPA’s AUTO FlushModeType, Hibernate tries to optimize that. If the currently executed query is not going to hit the pending SQL INSERT/UPDATE/DELETE statements, then the flush is not strictly required. As stated in the reference documentation, the AUTO flush strategy may sometimes synchronize the current persistence context prior to a query execution. It would have been more intuitive if the framework authors had chosen to name it FlushMode.SOMETIMES.

JPQL/HQL and SQL

Like many other ORM solutions, Hibernate offers a limited Entity querying language (JPQL/HQL) that’s very much based on SQL-92 syntax. The entity query language is translated to SQL by the current database dialect, and so it must offer the same functionality across different database products. Since most database systems are SQL-92 compliant, the Entity Query Language is an abstraction of the most common database querying syntax. While you can use the Entity Query Language in many use cases (selecting Entities and even projections), there are times when its limited capabilities are no match for an advanced querying request. Whenever we want to make use of some specific querying techniques, such as:

- Window functions
- Pivot tables
- Common Table Expressions

we have no other option but to run native SQL queries. Hibernate is a persistence framework. Hibernate was never meant to replace SQL. If some query is better expressed in a native query, then it’s not worth sacrificing application performance on the altar of database portability.
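Hibernate's actual bookkeeping is internal, but the decision described above can be sketched as a simple set-overlap test. The following is a simplified illustration of the idea only, not Hibernate's real code: flush only when the tables a query reads overlap the tables touched by pending INSERT/UPDATE/DELETE statements.

```java
import java.util.Set;

// Toy model of the AUTO-flush decision (illustration only, not Hibernate's
// real implementation): the "query space" is the set of tables a query hits,
// and a flush is needed only when it overlaps the pending DML's tables.
public class AutoFlushDecision {

    static boolean flushRequired(Set<String> pendingTables, Set<String> queryTables) {
        for (String table : queryTables) {
            if (pendingTables.contains(table)) {
                return true;   // overlap: the query would miss unflushed changes
            }
        }
        return false;          // no overlap: the flush can be safely skipped
    }

    public static void main(String[] args) {
        Set<String> pending = Set.of("product");  // unflushed INSERT into product

        // Selecting from user does not overlap: no flush.
        System.out.println(flushRequired(pending, Set.of("user")));    // false
        // Selecting from product overlaps: flush first.
        System.out.println(flushRequired(pending, Set.of("product"))); // true
    }
}
```

As the rest of the post shows, Hibernate can compute the query space reliably for HQL/JPQL, because it translates those queries itself, but not for native SQL.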
AUTO flush and HQL/JPQL

First we are going to test how the AUTO flush mode behaves when an HQL query is about to be executed. For this we define two unrelated entities, User and Product. The test will execute the following actions:

- A Product is going to be persisted.
- Selecting User(s) should not trigger the flush.
- Querying for Product, the AUTO flush should trigger the entity state transition synchronization (a Product INSERT should be executed prior to executing the select query).

```java
Product product = new Product();
session.persist(product);
assertEquals(0L, session.createQuery("select count(id) from User").uniqueResult());
assertEquals(product.getId(), session.createQuery("select p.id from Product p").uniqueResult());
```

Giving the following SQL output:

```
[main]: o.h.e.i.AbstractSaveEventListener - Generated identifier: f76f61e2-f3e3-4ea4-8f44-82e9804ceed0, using strategy: org.hibernate.id.UUIDGenerator
Query:{[select count(user0_.id) as col_0_0_ from user user0_][]}
Query:{[insert into product (color, id) values (?, ?)][12,f76f61e2-f3e3-4ea4-8f44-82e9804ceed0]}
Query:{[select product0_.id as col_0_0_ from product product0_][]}
```

As you can see, the User select hasn’t triggered the Session flush. This is because Hibernate inspects the current query space against the pending table statements. If the currently executing query doesn’t overlap with the unflushed table statements, the flush can be safely ignored.
HQL can detect the Product flush even for:Sub-selects session.persist(product); assertEquals(0L, session.createQuery( "select count(*) " + "from User u " + "where u.favoriteColor in (select distinct(p.color) from Product p)").uniqueResult()); Resulting in a proper flush call: Query:{[insert into product (color, id) values (?, ?)][Blue,2d9d1b4f-eaee-45f1-a480-120eb66da9e8]} Query:{[select count(*) as col_0_0_ from user user0_ where user0_.favoriteColor in (select distinct product1_.color from product product1_)][]}Or theta-style joins session.persist(product); assertEquals(0L, session.createQuery( "select count(*) " + "from User u, Product p " + "where u.favoriteColor = p.color").uniqueResult()); Triggering the expected flush: Query:{[insert into product (color, id) values (?, ?)][Blue,4af0b843-da3f-4b38-aa42-1e590db186a9]} Query:{[select count(*) as col_0_0_ from user user0_ cross join product product1_ where user0_.favoriteColor=product1_.color][]}The reason this works is that Entity Queries are parsed and translated to SQL queries. Hibernate cannot reference a nonexistent table, therefore it always knows which database tables an HQL/JPQL query will hit. But Hibernate is only aware of those tables we explicitly reference in our HQL query. If the current pending DML statements imply database triggers or database-level cascading, Hibernate won’t be aware of those. So even for HQL, the AUTO flush mode can cause consistency issues. AUTO flush and native SQL queries When it comes to native SQL queries, things get much more complicated. Hibernate cannot parse SQL queries, because it only supports a limited database query syntax. Many database systems offer proprietary features that are beyond Hibernate’s Entity Query capabilities. 
Querying the product table with a native SQL query is not going to trigger the flush, causing an inconsistency issue: Product product = new Product(); session.persist(product); assertNull(session.createSQLQuery("select id from product").uniqueResult()); DEBUG [main]: o.h.e.i.AbstractSaveEventListener - Generated identifier: 718b84d8-9270-48f3-86ff-0b8da7f9af7c, using strategy: org.hibernate.id.UUIDGenerator Query:{[select id from product][]} Query:{[insert into product (color, id) values (?, ?)][12,718b84d8-9270-48f3-86ff-0b8da7f9af7c]} The newly persisted Product was only inserted during the transaction commit, because the native SQL query didn’t trigger the flush. This is a major consistency problem, one that’s hard to debug or even to foresee for many developers. That’s one more reason for always inspecting auto-generated SQL statements. The same behaviour is observed even for named native queries: @NamedNativeQueries( @NamedNativeQuery(name = "product_ids", query = "select id from product") ) assertNull(session.getNamedQuery("product_ids").uniqueResult()); So even if the SQL query is pre-loaded, Hibernate won’t extract the associated query space to match it against the pending DML statements. Overruling the current flush strategy Even if the current Session defines a default flush strategy, you can always override it on a per-query basis. Query flush mode The ALWAYS mode is going to flush the persistence context before any query execution (HQL or SQL). This time, Hibernate applies no optimization and all pending entity state transitions are going to be synchronized with the current database transaction. assertEquals(product.getId(), session.createSQLQuery("select id from product").setFlushMode(FlushMode.ALWAYS).uniqueResult()); Instructing Hibernate which tables should be synchronized You can also add a synchronization rule to your currently executing SQL query. Hibernate will then know which database tables need to be synchronized prior to executing the query. 
This is useful for second-level caching as well. assertEquals(product.getId(), session.createSQLQuery("select id from product").addSynchronizedEntityClass(Product.class).uniqueResult()); Conclusion The AUTO flush mode is tricky, and fixing consistency issues on a per-query basis is a maintainer’s nightmare. If you decide to add a database trigger, you’ll have to check all Hibernate queries to make sure they won’t end up running against stale data. My suggestion is to use the ALWAYS flush mode, even if the Hibernate authors warned us that: this strategy is almost always unnecessary and inefficient. Inconsistency is much more of an issue than some occasional premature flushes. While mixing DML operations and queries may cause unnecessary flushing, this situation is not that difficult to mitigate. During a session transaction, it’s best to execute queries at the beginning (when no pending entity state transitions are to be synchronized) and towards the end of the transaction (when the current persistence context is going to be flushed anyway). The entity state transition operations should be pushed towards the end of the transaction, trying to avoid interleaving them with query operations (therefore preventing a premature flush trigger).Reference: The dark side of Hibernate AUTO flush from our JCG partner Vlad Mihalcea at the Vlad Mihalcea’s Blog blog....

Understanding volatile via example

We have spent the last couple of months stabilizing the lock detection functionality in Plumbr. During this time we have stumbled upon many tricky concurrency issues. Many of the issues are unique, but one particular type keeps appearing repeatedly. You might have guessed it – misuse of the volatile keyword. We have detected and solved a bunch of issues where the extensive usage of volatile made arbitrary parts of the application slower, extended lock holding times and eventually brought the JVM to its knees. Or vice versa – granting a too liberal access policy has triggered some nasty concurrency issues. I guess every Java developer recalls their first steps in the language. Days and days spent with manuals and tutorials. Those tutorials all had a list of keywords, among which volatile was one of the scariest. As days passed and more and more code was written without the need for this keyword, many of us forgot the existence of volatile. Until the production systems started either corrupting data or dying in an unpredictable manner. Debugging such cases forced some of us to actually understand the concept. But I bet it was not a pleasant lesson to have, so maybe I can save some of you some time by shedding light upon the concept via a simple example. Example of volatile in action The example simulates a bank office. The type of bank office where you pick a queue number from a ticketing machine and then wait for the invite when the queue in front of you has been processed. To simulate such an office, we have created the following example, consisting of two threads. The first of the two threads is implemented as CustomerInLine. This is a thread doing nothing but waiting until the value in NEXT_IN_LINE matches the customer’s ticket. The ticket number is hardcoded to be #4. When the time arrives (NEXT_IN_LINE >= 4), the thread announces that the waiting is over and finishes. This simulates a customer arriving at the office with some customers already in the queue. 
The queuing implementation is in the Queue class, which runs a loop calling for the next customer and then simulating work with the customer by sleeping 200ms for each customer. After calling the next customer, the value stored in the class variable NEXT_IN_LINE is increased by one.

public class Volatility {

  static int NEXT_IN_LINE = 0;

  public static void main(String[] args) throws Exception {
    new CustomerInLine().start();
    new Queue().start();
  }

  static class CustomerInLine extends Thread {
    @Override
    public void run() {
      while (true) {
        if (NEXT_IN_LINE >= 4) {
          break;
        }
      }
      System.out.format("Great, finally #%d was called, now it is my turn\n", NEXT_IN_LINE);
    }
  }

  static class Queue extends Thread {
    @Override
    public void run() {
      while (NEXT_IN_LINE < 11) {
        System.out.format("Calling for the customer #%d\n", NEXT_IN_LINE++);
        try {
          Thread.sleep(200);
        } catch (InterruptedException e) {
          e.printStackTrace();
        }
      }
    }
  }
}

So, when running this simple program you might expect the output of the program to be similar to the following:

Calling for the customer #1
Calling for the customer #2
Calling for the customer #3
Calling for the customer #4
Great, finally #4 was called, now it is my turn
Calling for the customer #5
Calling for the customer #6
Calling for the customer #7
Calling for the customer #8
Calling for the customer #9
Calling for the customer #10

As it appears, the assumption is wrong. Instead, you will see the Queue processing through the list of 10 customers and the hapless thread simulating customer #4 never alerting that it has seen the invite. What happened and why is the customer still sitting there waiting endlessly? Analyzing the outcome What you are facing here is a JIT optimization applied to the code, caching the access to the NEXT_IN_LINE variable. Both threads get their own local copy and the CustomerInLine thread never sees the Queue actually increasing the value of the variable. 
If you now think this is some kind of horrible bug in the JVM then you are not fully correct – compilers are allowed to do this to avoid rereading the value each time. So you gain a performance boost, but at a cost – if other threads change the state, the thread caching the copy does not know it and operates using the outdated value. This is precisely the use case for volatile. With this keyword in place, the compiler is warned that a particular state is volatile and the code is forced to reread the value each time the loop is executed. Equipped with this knowledge, we have a simple fix in place – just change the declaration of NEXT_IN_LINE to the following and your customers will not be left sitting in the queue forever: static volatile int NEXT_IN_LINE = 0; For those who are happy with just understanding the use case for volatile, you are good to go. Just be aware of the extra cost attached – when you start declaring everything to be volatile you are forcing the CPU to forget about local caches and to go straight to main memory, slowing down your code and clogging the memory bus. Volatile under the hood For those who wish to understand the issue in more detail, stay with me. To see what is happening underneath, let’s turn on debugging to see the assembly code generated from the bytecode by the JIT. This is achieved by specifying the following JVM options: -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly Running the program with those options, both with and without volatile, gives us the following important insight: Running the code without the volatile keyword shows that on instruction 0x00000001085c1c5a we have a comparison between two values. When the comparison fails we continue through 0x00000001085c1c60 to 0x00000001085c1c66, which jumps back to 0x00000001085c1c60, and an infinite loop is born. 
0x00000001085c1c56: mov    0x70(%r10),%r11d
0x00000001085c1c5a: cmp    $0x4,%r11d
0x00000001085c1c5e: jge    0x00000001085c1c68  ; OopMap{off=64}
                                               ;*if_icmplt
                                               ; - Volatility$CustomerInLine::run@4 (line 14)
0x00000001085c1c60: test   %eax,-0x1c6ac66(%rip)  # 0x0000000106957000
                                               ;*if_icmplt
                                               ; - Volatility$CustomerInLine::run@4 (line 14)
                                               ; {poll}
0x00000001085c1c66: jmp    0x00000001085c1c60  ;*getstatic NEXT_IN_LINE
                                               ; - Volatility$CustomerInLine::run@0 (line 14)
0x00000001085c1c68: mov    $0xffffff86,%esi

With the volatile keyword in place, we can see that on instruction 0x000000010a5c1c40 we load the value into a register, and on 0x000000010a5c1c4a compare it to our guard value of 4. If the comparison fails, we jump back from 0x000000010a5c1c4e to 0x000000010a5c1c40, loading the value again for the new check. This ensures that we will see the changed value of the NEXT_IN_LINE variable.

0x000000010a5c1c36: data32 nopw 0x0(%rax,%rax,1)
0x000000010a5c1c40: mov    0x70(%r10),%r8d  ; OopMap{r10=Oop off=68}
                                            ;*if_icmplt
                                            ; - Volatility$CustomerInLine::run@4 (line 14)
0x000000010a5c1c44: test   %eax,-0x1c1cc4a(%rip)  # 0x00000001089a5000
                                            ; {poll}
0x000000010a5c1c4a: cmp    $0x4,%r8d
0x000000010a5c1c4e: jl     0x000000010a5c1c40  ;*if_icmplt
                                            ; - Volatility$CustomerInLine::run@4 (line 14)
0x000000010a5c1c50: mov    $0x15,%esi

Now, hopefully the explanation will save you from a couple of nasty bugs.Reference: Understanding volatile via example from our JCG partner Nikita Salnikov Tarnovski at the Plumbr Blog blog....

JUnit in a Nutshell: Hello World

JUnit seems to be the most popular testing tool for developers within the Java world. So it is no wonder that some good books have been written about this topic. But earning a living as a consultant, I still quite often meet programmers who at most have a vague understanding of the tool and its proper usage. Hence I had the idea to write a couple of posts that introduce the essential techniques. The intention is to provide a reasonable starting point, but avoid daunting information flooding à la xUnit Test Patterns1. Instead there will be pointers to in-depth articles, book chapters or dissenting opinions for further reading whenever suitable. Despite the existence of other articles on the subject, the approach taken in this mini-series might be appropriate to help one or two developers warm to the world of JUnit testing – which would make the effort worthwhile. Why bother? Writing high quality software is a difficult undertaking. As for many other advocates of agile approaches, extensive upfront planning did not work out well for me. But for all that methodology, I experienced the biggest advancement when we began to use JUnit consistently with TDD. And indeed, empirical studies seem to confirm my perception that this practice improves quality, as an infoQ article states2. However, JUnit testing is not as trivial as it might look. A fatal mistake we made at the beginning was to treat test classes as second-rate citizens. Gradually we realized that a test is much more than a simple verification machine and – if not written with care – it can be a pain in the ass regarding maintenance and progression3. Nowadays I tend to see a test case more as an accompanying specification of the unit under test. Quite similar to the specs of a workpiece like a cogwheel, which tell QA what key figures such a unit has to meet. But due to the nature of software, no one but the developer is apt to write such low-level specs. 
By doing so, automated tests become an important source of information about the intended behavior of a unit. And one that does not become outdated as easily as documentation… Getting Started A journey of a thousand miles begins with a single step Lao TzuLet us assume we have to write a simple number range counter that delivers a certain amount of consecutive integers, starting from a given value. Following the metaphor of the accompanying specification, we could begin with the following code: public class NumberRangeCounterTest { } The test class expresses the intent to develop a unit NumberRangeCounter, which Meszaros would denote as the system under test (SUT). And following a common naming pattern, the unit’s name is complemented by the postfix Test. That is all well and good, but the impatient may wonder: What is the next step? What should be tested first? And – how do I create an executable test anyway? There are various ways to incorporate JUnit. If you work with the Eclipse Java IDE, the library is already included. It simply can be added to a project’s build path, which will be sufficient throughout this tutorial. To get your own copy please refer to Download and Install, for Maven integration look at Using JUnit, and if you happen to need an OSGi bundle you may find one at the Eclipse Orbit downloads. Usually it is a good idea to start with the Happy Path, which is the ‘normal’ path of execution and ideally the general business use case. For the SUT NumberRangeCounter this might be a test to verify that the counter returns consecutive numbers on subsequent invocations of a method, which still has to be defined. An executable JUnit test is a public, non-static method that gets annotated with @Test and takes no parameters. 
Summarizing all this information, the next step could be the following method stub4: public class NumberRangeCounterTest { @Test public void subsequentNumber() { } } Still not much, but it is actually sufficient for JUnit to run the test for the first time. JUnit test runs can be launched from the command line or a particular UI, but for the scope of this tutorial I assume that you have an IDE integration available. Within Eclipse the result would look like this5:The green bar signals that the test run did not recognize any problems. Which is not a big surprise, as we have not tested anything yet. But remember, we have already done some useful considerations that can help us populate our first test easily:We intend to write a unit NumberRangeCounter that is responsible for delivering a consecutive sequence of integer values. To test it we could create a local variable that takes a new instance of such a counter. @Test public void subsequentNumber() { NumberRangeCounter counter = new NumberRangeCounter(); } As the first test should assert that numbers provided by the NumberRangeCounter are consecutive integer values, meaning 5, 6, 7 or the like, the SUT could use a method providing those values. Furthermore this method could be called twice to provide a minimum set of subsequent values. @Test public void subsequentNumber() { NumberRangeCounter counter = new NumberRangeCounter();int first = counter.next(); int second = counter.next(); }Looks reasonable so far, but how can we assure that a test run is denoted as a failure if the value of second is not a valid successor of first? JUnit offers for this purpose the class org.junit.Assert, which provides a set of static methods to help developers write so-called self-checking tests. The methods prefixed with assert are meant to check a certain condition, throwing an AssertionError on a negative result. Such errors are picked up by the JUnit runtime and mark the test as failed in the resulting report. 
Update 2014/08/13: Using org.junit.Assert is just one possibility. JUnit also includes the matcher library Hamcrest, which many people consider a better solution regarding clean code. Personally, I like the syntax of a third-party library called AssertJ best. I think that Assert might be more intuitive for beginners, so I chose it for this ‘hello world’ post. Due to the comments on that decision I realized that I have to mention these other possibilities at least at this point. I will elaborate on the usage of Hamcrest and AssertJ in a follow-up post. To assert that two values or objects are equal, it is plausible to use Assert#assertEquals. As it is very common to use static imports for assertion method calls, the subsequentNumber test could be completed like this: @Test public void subsequentNumber() { NumberRangeCounter counter = new NumberRangeCounter();int first = counter.next(); int second = counter.next();assertEquals( first + 1, second ); } As you can see, the test specifies an important behaviour of the SUT, which does not even exist yet. And by the way, this also means that the test class does not compile anymore!  So the next step could be to create a skeleton of our unit to solve this problem. Although this tutorial is about JUnit and not TDD, I have chosen to allude to the latter approach to emphasize the specification character clean JUnit test cases can have. Such an approach shifts the focus from the unit’s internals to its usage and low-level requirements. If you want to learn more about TDD, in particular the Red/Green/Refactor mantra used to implement a single unit, the books Test-Driven Development By Example by Kent Beck or Test Driven by Lasse Koskela might be a good evening read. The following snippet shows how the NumberRangeCounter stub would look: public class NumberRangeCounter {public int next() { return 0; } } Running the test again now leads to a red bar due to the insufficient implementation of NumberRangeCounter#next(). 
This allows us to ensure that the specification has not been met accidentally by a useless verification or the like:In addition to the red bar, the execution report shows how many tests have been run in total, how many of those terminated with errors, and how many have failed due to wrong assertions. A stacktrace for each error/failure helps to find the exact location in the test class. An AssertionError provides an explanatory message, which is shown in the first line of the failure trace. A test in error, on the other hand, may indicate an arbitrary programming mistake, causing an Exception to be thrown beyond the test’s assertion statements. Note that JUnit follows the all-or-nothing principle. This means if a test run involves more than one test, which is usually the case, the failure of a single test marks the whole execution as failed by the red bar. As the actual implementation of a particular unit is of minor interest for the topic of this article, I leave it up to you to come up with an innovative solution to make our first test pass again! Conclusion The previous sections explained the very basics of a JUnit test – how it is written, executed and evaluated. While doing so, I placed value on the fact that such tests should be developed with the highest possible coding standards one could think of. The given example was hopefully well-balanced enough to provide a comprehensible introduction without being trivial. Suggestions for improvements are of course highly appreciated. The next JUnit in a Nutshell post will continue the example and cover the general concept of a test case and its four-phase test structure, so stay tuned.  
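If you want to check your own attempt at the exercise: one possible minimal implementation – my own sketch, not taken from the article – simply returns and increments an internal counter:

```java
// One possible implementation of the unit specified by the subsequentNumber
// test: each call to next() returns the successor of the previously returned
// value. The starting value is arbitrary here; the requirement "starting from
// a given value" could be covered by a constructor parameter.
class NumberRangeCounter {

    private int current;

    public int next() {
        return current++;
    }
}
```

With this in place, `second` equals `first + 1` and the assertEquals check passes, turning the bar green again.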
Do not get me wrong – I like the book very much, but the general purpose approach is probably not the best way for getting started: xUnit Test Patterns, Gerard Meszaros, 2007 Other studies are listed at http://biblio.gdinwiddie.com/biblio/StudiesOfTestDrivenDevelopment and a comparative analysis of empirical studies can be found at https://tuhat.halvi.helsinki.fi/portal/files/29553974/2014_01_swqd_author_version.pdf See also: Keeping Tests Clean, Clean Code, Chapter 9, Robert C. Martin 2009 There are diverging opinions on how to name a test method. I have written down some considerations about this topic in Getting JUnit Test Names Right For more information on how to work with JUnit in Eclipse you might like to read my post Working Efficiently with JUnit in EclipseReference: JUnit in a Nutshell: Hello World from our JCG partner Frank Appel at the Code Affine blog....

Saving to a SQLite database in your Android application

This is the fourth post in my series about saving data in Android applications. Here are the other posts: Introduction : How to save data in your Android application Saving data to a file in your Android application Saving preferences in your Android application The previous posts described how to save files to the file system and to the preferences files. This can be enough for a simple application, but if your data has a complex structure or if you have a lot of data to save, using a database is a better option. Managing a database requires more knowledge and setup, but it comes with many validations and performance optimizations. The Android SDK includes the open source SQLite database engine and the classes needed to access it. SQLite is a self-contained relational database that requires no server to work. The database itself is saved to a file in the internal storage of your application, so each application has its own private database that is not accessible to other applications. You can learn more about the SQLite project itself and its implementation of the SQL query language at http://www.sqlite.org.New to databases? A relational database saves data to tables. Each table is made of columns, and for each column you must choose a name and the type of data that can be saved in it. Each table should also have one or more columns set as the key of the table, so each row of data can be uniquely identified. Relationships can also be defined between tables. The basics of databases and the SQL query language used by most databases could take many articles to explain. If you don’t know how to use a database, this is a subject worth learning more about, since databases are used in almost all applications to store data.To demonstrate how to create a database and interact with it, I created a small sample application, which is available at http://github.com/CindyPotvin/RowCounter. 
The application is a row counter for knitting projects: the user can create a knitting project containing one or many counters used to track the current number of rows done and to show the total amount of rows to reach. The structure of the database is as follows, with a project table in relation with a row_counter table:  First, to be able to create the database, we need a contract class for each table that describes the names of the elements of the table. This class should be used each time the name of an element in the database is required. To describe the name of each column, the contract class also contains a subclass implementing the android.provider.BaseColumns interface, which automatically adds the names of an _ID and a _COUNT column. I also like to put the CREATE TABLE SQL query in the contract class so all the strings used in SQL queries are in the same place. Here is the contract class for the row_counter table in the example : /** * This class represents a contract for a row_counter table containing row * counters for projects. The project must exist before creating row counters * since the counters have a foreign key to the project. */ public final class RowCounterContract {/** * Contains the name of the table to create that contains the row counters. */ public static final String TABLE_NAME = "row_counter";/** * Contains the SQL query to use to create the table containing the row counters. 
*/ public static final String SQL_CREATE_TABLE = "CREATE TABLE " + RowCounterContract.TABLE_NAME + " (" + RowCounterContract.RowCounterEntry._ID + " INTEGER PRIMARY KEY AUTOINCREMENT," + RowCounterContract.RowCounterEntry.COLUMN_NAME_PROJECT_ID + " INTEGER," + RowCounterContract.RowCounterEntry.COLUMN_NAME_CURRENT_AMOUNT + " INTEGER DEFAULT 0," + RowCounterContract.RowCounterEntry.COLUMN_NAME_FINAL_AMOUNT + " INTEGER," + "FOREIGN KEY (" + RowCounterContract.RowCounterEntry.COLUMN_NAME_PROJECT_ID + ") " + "REFERENCES projects(" + ProjectContract.ProjectEntry._ID + "));";/** * This class represents the rows for an entry in the row_counter table. The * primary key is the _id column from the BaseColumns interface. */ public static abstract class RowCounterEntry implements BaseColumns {// Identifier of the project to which the row counter belongs public static final String COLUMN_NAME_PROJECT_ID = "project_id";// Final amount of rows to reach public static final String COLUMN_NAME_FINAL_AMOUNT = "final_amount";// Current amount of rows done public static final String COLUMN_NAME_CURRENT_AMOUNT = "current_amount"; } } To create the tables that store the data described by the contracts, you must implement the android.database.sqlite.SQLiteOpenHelper class that manages the access to the database. The following methods should be implemented as needed:onCreate: this method is called the first time the database is opened by your application. You should set up the database for use in that method by creating the tables and initializing any data you need. onUpgrade: this method is called when your application is upgraded and the version number has changed. You don’t need to do anything for your first version, but in the following versions you must provide queries to modify the database from the old structure to the new structure as needed, so your users don’t lose their data during the upgrade. 
onDowngrade (optional): you may implement this method if you want to handle the case where your application is downgraded to a version requiring an older database version. The default implementation throws a SQLiteException and does not modify the database. onOpen (optional): this method is called after the database has been created, upgraded to a newer version or downgraded to an older version.Here is a basic implementation of the android.database.sqlite.SQLiteOpenHelper for the example that executes a SQL CREATE TABLE query for each table of the database in the onCreate method. There is no method available in the android.database.sqlite.SQLiteDatabase class to create a table, so you must use the execSQL method to execute the query. /** * This class helps open, create, and upgrade the database file containing the * projects and their row counters. */ public class ProjectsDatabaseHelper extends SQLiteOpenHelper { // If you change the database schema, you must increment the database version. public static final int DATABASE_VERSION = 1; // The name of the database file on the file system public static final String DATABASE_NAME = "Projects.db";public ProjectsDatabaseHelper(Context context) { super(context, DATABASE_NAME, null, DATABASE_VERSION); }/** * Creates the underlying database with the SQL_CREATE_TABLE queries from * the contract classes to create the tables and initialize the data. * The onCreate is triggered the first time someone tries to access * the database with the getReadableDatabase or * getWritableDatabase methods. * * @param db the database being accessed and that should be created. 
*/ @Override public void onCreate(SQLiteDatabase db) { // Create the database to contain the data for the projects db.execSQL(ProjectContract.SQL_CREATE_TABLE); db.execSQL(RowCounterContract.SQL_CREATE_TABLE); initializeExampleData(db); }/** * This method must be implemented if your application is upgraded and must * include the SQL query to upgrade the database from your old to your new * schema. * * @param db the database being upgraded. * @param oldVersion the current version of the database before the upgrade. * @param newVersion the version of the database after the upgrade. */ @Override public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { // Logs that the database is being upgraded Log.i(ProjectsDatabaseHelper.class.getSimpleName(), "Upgrading database from version " + oldVersion + " to " + newVersion); } } Once the android.database.sqlite.SQLiteOpenHelper is implemented, you can get an instance of the database object android.database.sqlite.SQLiteDatabase using the getReadableDatabase method of the helper if you only need to read data, or the getWritableDatabase method if you need to read and write data. There are four kinds of basic operations that can be done with the data, and, as in all databases, modifications cannot be undone.Inserting a new row: the insert method of the android.database.sqlite.SQLiteDatabase object inserts a new row of data in a table. Data can be inserted with a SQL INSERT query using the execSQL method, but using insert is recommended to avoid SQL injection: only one database row can be created by the insert method and nothing else, regardless of the input. In the following example, a few test projects are initialized in the database of the application by the onCreate method of the database helper after the creation of the tables: /** * Initialize example data to show when the application is first installed. * * @param db the database being initialized. 
*/ private void initializeExampleData(SQLiteDatabase db) { // A lot of code is repeated here that could be factorized in methods, // but this is clearer for the example // Insert the database row for an example project in the project table in the // database long projectId; ContentValues firstProjectValues = new ContentValues(); firstProjectValues.put(ProjectContract.ProjectEntry.COLUMN_NAME_TITLE, "Flashy Scarf"); projectId = db.insert(ProjectContract.TABLE_NAME, null, firstProjectValues); // Insert the database rows for a row counter linked to the project row // just created in the database (the insert method returns the // identifier of the row) ContentValues firstProjectCounterValues = new ContentValues(); firstProjectCounterValues.put(RowCounterContract .RowCounterEntry.COLUMN_NAME_PROJECT_ID, projectId); firstProjectCounterValues.put(RowCounterContract .RowCounterEntry.COLUMN_NAME_FINAL_AMOUNT, 120); db.insert(RowCounterContract.TABLE_NAME, null, firstProjectCounterValues); // Insert the database row for a second example project in the project // table in the database. ContentValues secondProjectValues = new ContentValues(); secondProjectValues.put(ProjectContract.ProjectEntry.COLUMN_NAME_TITLE, "Simple Socks"); projectId = db.insert(ProjectContract.TABLE_NAME, null, secondProjectValues); // Insert the database rows for two identical row counters for the // project in the database ContentValues secondProjectCounterValues = new ContentValues(); secondProjectCounterValues.put(RowCounterContract .RowCounterEntry.COLUMN_NAME_PROJECT_ID, projectId); secondProjectCounterValues.put(RowCounterContract .RowCounterEntry.COLUMN_NAME_FINAL_AMOUNT, 80); db.insert(RowCounterContract.TABLE_NAME, null, secondProjectCounterValues); db.insert(RowCounterContract.TABLE_NAME, null, secondProjectCounterValues); }Reading existing rows: the query method from the android.database.sqlite.SQLiteDatabase class retrieves the data that was previously inserted in the database. 
This method returns a cursor that points to the collection of rows returned by your request, if any. You can then convert the data fetched from the database table to objects that can be used in your application: in the example, the rows from the project table are converted to Project objects.

```java
/**
 * Gets the list of projects from the database.
 *
 * @return the current projects from the database.
 */
public ArrayList<Project> getProjects() {
    ArrayList<Project> projects = new ArrayList<>();
    // Gets the database in the current database helper in read-only mode
    SQLiteDatabase db = getReadableDatabase();

    // After the query, the cursor points to the first database row
    // returned by the request.
    Cursor projCursor = db.query(ProjectContract.TABLE_NAME, null, null, null, null, null, null);
    while (projCursor.moveToNext()) {
        // Get the value of each column for the database row pointed to by
        // the cursor using the getColumnIndex method of the cursor, and
        // use it to initialize a Project object for that row.
        Project project = new Project();
        int idColIndex = projCursor.getColumnIndex(ProjectContract.ProjectEntry._ID);
        long projectId = projCursor.getLong(idColIndex);
        project.setId(projectId);

        int nameColIndex = projCursor.getColumnIndex(ProjectContract.ProjectEntry.COLUMN_NAME_TITLE);
        project.setName(projCursor.getString(nameColIndex));

        // Get all the row counters for the current project from the
        // database and add them all to the Project object.
        project.setRowCounters(getRowCounters(projectId));
        projects.add(project);
    }
    return projects;
}
```

Updating existing rows: the update method of an instance of the android.database.sqlite.SQLiteDatabase class updates the data in one or more rows of a database table. As with the insert method, you could run a SQL UPDATE query using the execSQL method, but using the update method is safer. In the following example, the current amount of the row counter in the row_counter table is updated with the new value.
With the condition specified, only the row counter with the identifier passed as a parameter is updated, but with another condition you could update many rows, so you should always make sure that the condition selects only the rows you need.

```java
/**
 * Updates the current amount of the row counter in the database to the value
 * in the object passed as a parameter.
 *
 * @param rowCounter the object containing the current amount to set.
 */
public void updateRowCounterCurrentAmount(RowCounter rowCounter) {
    SQLiteDatabase db = getWritableDatabase();
    ContentValues currentAmountValue = new ContentValues();
    currentAmountValue.put(RowCounterContract.RowCounterEntry.COLUMN_NAME_CURRENT_AMOUNT,
            rowCounter.getCurrentAmount());
    db.update(RowCounterContract.TABLE_NAME,
            currentAmountValue,
            RowCounterContract.RowCounterEntry._ID + "=?",
            new String[] { String.valueOf(rowCounter.getId()) });
}
```

Deleting existing rows: the delete method of an instance of the android.database.sqlite.SQLiteDatabase class deletes one or more rows of a database table. As with the insert method, you could run a SQL DELETE query using the execSQL method, but using the delete method is safer. In the following example, a row counter in the row_counter table is deleted. With the condition specified, only the row counter with the identifier passed as a parameter is deleted, but with another condition you could delete many rows, so you should always make sure that the condition selects only the rows you need so you don't delete too much data.

```java
/**
 * Deletes the specified row counter from the database.
 *
 * @param rowCounter the row counter to remove.
 */
public void deleteRowCounter(RowCounter rowCounter) {
    SQLiteDatabase db = getWritableDatabase();
    db.delete(RowCounterContract.TABLE_NAME,
            RowCounterContract.RowCounterEntry._ID + "=?",
            new String[] { String.valueOf(rowCounter.getId()) });
}
```

Finally, if you want to encapsulate access to the data in your database to avoid calling the database helper directly in your activity, you can also implement the android.content.ContentProvider class from the Android SDK. This is only required if your application must share data with other applications: you do not need one to get started, but you should consider using it as your data gets more complex.

Reference: Saving to a SQLite database in your Android application from our JCG partner Cindy Potvin at the Web, Mobile and Android Programming blog....
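A quick aside on the SQL-injection point made in the article above: insert, update and delete are safer than execSQL because the values never travel through the SQL text itself. The following plain-Java sketch (this is not the Android API; the buildUnsafeInsert and buildSafeInsert helpers and the table name are invented for illustration) shows how string concatenation lets a crafted title smuggle extra SQL into the statement, while a placeholder-style statement stays fixed and treats the value as pure data:

```java
public class InjectionDemo {

    // Naive approach: the value is spliced into the SQL text, so a quote in
    // the input can terminate the string literal and append arbitrary SQL.
    static String buildUnsafeInsert(String title) {
        return "INSERT INTO project (title) VALUES ('" + title + "')";
    }

    // Placeholder approach (conceptually what insert/ContentValues do): the
    // SQL text is constant and the value is bound separately as data.
    static String buildSafeInsert() {
        return "INSERT INTO project (title) VALUES (?)";
    }

    public static void main(String[] args) {
        String malicious = "x'); DROP TABLE project; --";
        String unsafe = buildUnsafeInsert(malicious);
        System.out.println(unsafe); // the input has become executable SQL
        System.out.println(unsafe.contains("DROP TABLE")); // true
        System.out.println(buildSafeInsert()); // the statement never changes
    }
}
```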

Understanding JUnit’s Runner architecture

Some weeks ago I started creating a small JUnit Runner (Oleaster) that allows you to use the Jasmine way of writing unit tests in JUnit. I learned that creating custom JUnit Runners is actually quite simple. In this post I want to show you how JUnit Runners work internally and how you can use custom Runners to modify the test execution process of JUnit.

So what is a JUnit Runner? A JUnit Runner is a class that extends JUnit's abstract Runner class. Runners are used for running test classes. The Runner that should be used to run a test can be set using the @RunWith annotation.

```java
@RunWith(MyTestRunner.class)
public class MyTestClass {

  @Test
  public void myTest() {
    ..
  }
}
```

JUnit tests are started using the JUnitCore class. This can either be done by running it from the command line or by using one of its various run() methods (this is what your IDE does for you when you press the run-test button).

```java
JUnitCore.runClasses(MyTestClass.class);
```

JUnitCore then uses reflection to find an appropriate Runner for the passed test classes. One step here is to look for a @RunWith annotation on the test class. If no other Runner is found, the default Runner (BlockJUnit4ClassRunner) is used. The Runner is instantiated and the test class is passed to it. Now it is the job of the Runner to instantiate and run the passed test class.

How do Runners work? Let's look at the class hierarchy of standard JUnit Runners. Runner is a very simple class that implements the Describable interface and has two abstract methods:

```java
public abstract class Runner implements Describable {

  public abstract Description getDescription();

  public abstract void run(RunNotifier notifier);
}
```

The method getDescription() is inherited from Describable and has to return a Description. Descriptions contain the information that is later exported and used by various tools. For example, your IDE might use this information to display the test results.
run() is a very generic method that runs something (e.g. a test class or a test suite). Usually Runner is not the class you want to extend directly (it is just too generic). In ParentRunner things get a bit more specific. ParentRunner is an abstract base class for Runners that have multiple children. It is important to understand here that tests are structured and executed in a hierarchical order (think of a tree): you might run a test suite which contains other test suites, these test suites might contain multiple test classes, and finally each test class can contain multiple test methods. ParentRunner has the following three abstract methods:

```java
public abstract class ParentRunner<T> extends Runner implements Filterable, Sortable {

  protected abstract List<T> getChildren();

  protected abstract Description describeChild(T child);

  protected abstract void runChild(T child, RunNotifier notifier);
}
```

Subclasses need to return a list of the generic type T in getChildren(). ParentRunner then asks the subclass to create a Description for each child (describeChild()) and finally to run each child (runChild()). Now let's look at two standard ParentRunners: BlockJUnit4ClassRunner and Suite. BlockJUnit4ClassRunner is the default Runner that is used if no other Runner is provided, so this is the Runner that is typically used when you run a single test class.
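Before looking at BlockJUnit4ClassRunner itself, the ParentRunner contract described above can be mimicked in a few lines of plain Java. This is a self-contained analogue for illustration only (MiniParentRunner, MiniClassRunner and the StringBuilder log are invented here and are not JUnit types): the base class drives the loop over the children, while the subclass decides what the children are and how to run each one.

```java
import java.util.Arrays;
import java.util.List;

// Stripped-down analogue of ParentRunner<T>: subclasses supply the children,
// describe each child, and run each child; the base class drives the loop.
abstract class MiniParentRunner<T> {
    protected abstract List<T> getChildren();
    protected abstract String describeChild(T child);
    protected abstract void runChild(T child, StringBuilder log);

    // Analogue of run(RunNotifier): visit each child in order.
    void run(StringBuilder log) {
        for (T child : getChildren()) {
            log.append("running ").append(describeChild(child)).append('\n');
            runChild(child, log);
        }
    }
}

// A "class runner" whose children are test-method names.
class MiniClassRunner extends MiniParentRunner<String> {
    protected List<String> getChildren() { return Arrays.asList("testA", "testB"); }
    protected String describeChild(String method) { return method; }
    protected void runChild(String method, StringBuilder log) {
        log.append(method).append(" passed\n");
    }
}

public class MiniRunnerDemo {
    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        new MiniClassRunner().run(log);
        System.out.print(log);
        // Prints:
        // running testA
        // testA passed
        // running testB
        // testB passed
    }
}
```

The same shape scales up the tree: a suite-level runner's children would themselves be runners, which is exactly how Suite works in JUnit.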
If you look at the source of BlockJUnit4ClassRunner you will see something like this:

```java
public class BlockJUnit4ClassRunner extends ParentRunner<FrameworkMethod> {

  @Override
  protected List<FrameworkMethod> getChildren() {
    // scan test class for methods annotated with @Test
  }

  @Override
  protected Description describeChild(FrameworkMethod method) {
    // create Description based on method name
  }

  @Override
  protected void runChild(final FrameworkMethod method, RunNotifier notifier) {
    if (/* method not annotated with @Ignore */) {
      // run methods annotated with @Before
      // run test method
      // run methods annotated with @After
    }
  }
}
```

Of course this is overly simplified, but it shows what is essentially done in BlockJUnit4ClassRunner. The generic type parameter FrameworkMethod is basically a wrapper around java.lang.reflect.Method that provides some convenience methods. In getChildren() the test class is scanned for methods annotated with @Test using reflection. The found methods are wrapped in FrameworkMethod objects and returned. describeChild() creates a Description from the method name, and runChild() finally runs the test method. BlockJUnit4ClassRunner uses a lot of protected methods internally. Depending on what exactly you want to do, it can be a good idea to check BlockJUnit4ClassRunner for methods you can override. You can have a look at the source of BlockJUnit4ClassRunner on GitHub.

The Suite Runner is used to create test suites. Suites are collections of tests (or other suites). A simple suite definition looks like this:

```java
@RunWith(Suite.class)
@Suite.SuiteClasses({
  MyJUnitTestClass1.class,
  MyJUnitTestClass2.class,
  MyOtherTestSuite.class
})
public class MyTestSuite {}
```

A test suite is created by selecting the Suite Runner with the @RunWith annotation. If you look at the implementation of Suite you will see that it is actually very simple.
The only thing Suite does is create Runner instances from the classes defined using the @SuiteClasses annotation. So getChildren() returns a list of Runners, and runChild() delegates the execution to the corresponding Runner.

Examples With the provided information it should not be that hard to create your own JUnit Runner (at least I hope so). If you are looking for some example custom Runner implementations, you can have a look at the following list:

- Fabio Strozzi created a very simple and straightforward GuiceJUnitRunner project. It gives you the option to inject Guice components in JUnit tests. Source on GitHub
- Spring's SpringJUnit4ClassRunner helps you test Spring framework applications. It allows you to use dependency injection in test classes or to create transactional test methods. Source on GitHub
- Mockito provides MockitoJUnitRunner for automatic mock initialization. Source on GitHub
- Oleaster's Java 8 Jasmine runner. Source on GitHub (shameless self-promotion)

Conclusion JUnit Runners are highly customizable and give you the option to change the complete test execution process. The cool thing is that you can change the whole test process and still use all the JUnit integration points of your IDE, build server, etc. If you only want to make minor changes, it is a good idea to have a look at the protected methods of BlockJUnit4ClassRunner. Chances are high that you will find an overridable method at the right location.

Reference: Understanding JUnit's Runner architecture from our JCG partner Michael Scharhag at the mscharhag, Programming and Stuff blog....
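To round off the runner discussion above with something concrete: the reflection scan that getChildren() performs can be sketched without any JUnit dependency. The @MiniTest annotation, the SampleTests class and the scan/runChild helpers below are all made up for this sketch; JUnit's real implementation does the same kind of scan for @Test.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class ReflectionScanDemo {

    // Stand-in for org.junit.Test; must be retained at runtime to be visible
    // to reflection.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface MiniTest {}

    public static class SampleTests {
        @MiniTest public void first()  {}
        @MiniTest public void second() {}
        public void helper() {} // not annotated, must be ignored by the scan
    }

    // Analogue of getChildren(): collect the annotated methods of the class.
    static List<Method> scan(Class<?> testClass) {
        List<Method> children = new ArrayList<>();
        for (Method method : testClass.getDeclaredMethods()) {
            if (method.isAnnotationPresent(MiniTest.class)) {
                children.add(method);
            }
        }
        return children;
    }

    // Analogue of runChild(): instantiate the test class and invoke one method.
    static void runChild(Class<?> testClass, Method method) throws Exception {
        method.invoke(testClass.getDeclaredConstructor().newInstance());
    }

    public static void main(String[] args) throws Exception {
        List<Method> children = scan(SampleTests.class);
        System.out.println(children.size()); // 2 (helper is skipped)
        for (Method method : children) {
            runChild(SampleTests.class, method);
        }
    }
}
```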

Gradle Goodness: Getting More Dependency Insight

In most of our projects we have dependencies on other code, like libraries or other projects. Gradle has a nice DSL to define dependencies. Dependencies are grouped in dependency configurations. These configurations can be created by ourselves or added via a plugin. Once we have defined our dependencies, we get a nice overview of all dependencies in our project with the dependencies task. We can add the optional argument --configuration to only see the dependencies for the given configuration. But we can even check, for a specific dependency, where it is used, any transitive dependencies, and how its version is resolved. In the following sample build we define a compile dependency on Spring Boot and the SLF4J API. The SLF4J API is also a transitive dependency of the Spring Boot dependency, so we can see how the dependencyInsight task shows a version conflict.

```groovy
apply plugin: 'java'

// Set Bintray JCenter as repository.
repositories.jcenter()

dependencies {
    // Set dependency for Spring Boot
    compile "org.springframework.boot:spring-boot-starter-web:1.1.5.RELEASE"
    // Set dependency for SLF4J with conflicting version.
    compile 'org.slf4j:slf4j-api:1.7.1'
}
```

Now let's run the dependencyInsight task for the dependency SLF4J API in the compile configuration:

```
$ gradle -q dependencyInsight --configuration compile --dependency slf4j-api
org.slf4j:slf4j-api:1.7.7 (conflict resolution)
+--- org.slf4j:jcl-over-slf4j:1.7.7
|    \--- org.springframework.boot:spring-boot-starter-logging:1.1.5.RELEASE
|         \--- org.springframework.boot:spring-boot-starter:1.1.5.RELEASE
|              \--- org.springframework.boot:spring-boot-starter-web:1.1.5.RELEASE
|                   \--- compile
+--- org.slf4j:jul-to-slf4j:1.7.7
|    \--- org.springframework.boot:spring-boot-starter-logging:1.1.5.RELEASE (*)
\--- org.slf4j:log4j-over-slf4j:1.7.7
     \--- org.springframework.boot:spring-boot-starter-logging:1.1.5.RELEASE (*)

org.slf4j:slf4j-api:1.7.1 -> 1.7.7
\--- compile

org.slf4j:slf4j-api:1.7.6 -> 1.7.7
\--- ch.qos.logback:logback-classic:1.1.2
     \--- org.springframework.boot:spring-boot-starter-logging:1.1.5.RELEASE
          \--- org.springframework.boot:spring-boot-starter:1.1.5.RELEASE
               \--- org.springframework.boot:spring-boot-starter-web:1.1.5.RELEASE
                    \--- compile

(*) - dependencies omitted (listed previously)
```

In the output we can see that slf4j-api is referenced three times: once as a transitive dependency of jcl-over-slf4j, jul-to-slf4j and log4j-over-slf4j, once as a transitive dependency of logback-classic, and once as a direct dependency of the compile configuration. We also see that the version is bumped to 1.7.7 where necessary, because the transitive dependency of jcl-over-slf4j defines the newest version. The value we use for the --dependency option is used for partial matching on the group, name or version properties of the dependencies. For example, to get insight into all dependencies with logging we can invoke $ gradle dependencyInsight --dependency logging. We can also get an HTML report page with an overview of all dependencies.
To get dependency insight we must click on the desired dependency in the HTML page, and we get output similar to that of the command line. First we must add the project-report plugin to our project. Next we invoke the dependencyReport task. When the task is finished we can open build/reports/project/dependencies/index.html in our web browser. When we navigate to the compile configuration and click on the slf4j-api dependency we get the following output:

Written with Gradle 2.0.

Reference: Gradle Goodness: Getting More Dependency Insight from our JCG partner Hubert A. Klein Ikkink at the JDriven blog....
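The HTML-report steps described above amount to a one-line build-script change plus a task invocation. A minimal fragment might look like this (a sketch against Gradle 2.x as used in the article; plugin and task names are the standard ones):

```groovy
// build.gradle
apply plugin: 'java'
apply plugin: 'project-report' // adds the dependencyReport and htmlDependencyReport tasks

// Run: gradle dependencyReport
// Then open build/reports/project/dependencies/index.html in a browser.
```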
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.