How to run junit tests inside the android project

Hi there! Today I'm going to show you how to create and run JUnit tests inside your Android project without creating a separate test project. With these tests we can quickly automate and test the app's logic and some simple UI behaviors. The example below is very straightforward and much more intuitive than other approaches I have seen out there.

Defining the TestInstrumentation

First of all, define the following entries in your manifest file. IMPORTANT: the test instrumentation is declared outside your application tag, while the test runner library must be declared inside your application tag.

<manifest>
    ...
    <instrumentation
        android:name="android.test.InstrumentationTestRunner"
        android:targetPackage="com.treslines.ponto" />
    <application>
        ...
        <uses-library android:name="android.test.runner" />
        ...
    </application>
</manifest>

Creating the test packages

Android works with some conventions while testing, so it is extremely important to follow them. Otherwise you'll get compile or run errors while trying to run the tests. One convention is that all tests for a specific class must be placed in the same package structure, but with a further sub-package called test, as you can see below. Because I want to test the activities in the package com.treslines.ponto.activity, I must create a test package called com.treslines.ponto.activity.test.

Creating the test itself

That's the cool part of it. Here you can write your JUnit tests as usual. Again, Android gives us some conventions to follow. All test classes must have the same name as the class under test with the suffix Test, and all test methods must start with the prefix test. If you follow those conventions everything will work just fine.
// IMPORTANT: All test cases MUST have the suffix "Test" at the end.
//
// THEN:
// Define this in your manifest outside your application tag:
// <instrumentation
//     android:name="android.test.InstrumentationTestRunner"
//     android:targetPackage="com.treslines.ponto" />
//
// AND:
// Define this inside your application tag:
// <uses-library android:name="android.test.runner" />
//
// The activity you want to test will be the "T" type of ActivityInstrumentationTestCase2
public class AlertaActivityTest extends ActivityInstrumentationTestCase2<AlertaActivity> {

    private AlertaActivity alertaActivity;
    private AlertaController alertaController;

    public AlertaActivityTest() {
        // create a default constructor and pass the activity class
        // you want to test to the super() constructor
        super(AlertaActivity.class);
    }

    @Override
    // here is the place to set up the variables you want to test
    protected void setUp() throws Exception {
        super.setUp();
        // because I want to test the UI in the method testAlertasOff()
        // I must set this attribute to true
        setActivityInitialTouchMode(true);
        // init variables
        alertaActivity = getActivity();
        alertaController = alertaActivity.getAlertaController();
    }

    // usually we test some pre-conditions. This method is provided
    // by the test framework and is called after setUp()
    public void testPreconditions() {
        assertNotNull("alertaActivity is null", alertaActivity);
        assertNotNull("alertaController is null", alertaController);
    }

    // test methods MUST start with the prefix "test"
    public void testVibrarSomAlertas() {
        assertEquals(true, alertaController.getView().getVibrar().isChecked());
        assertEquals(true, alertaController.getView().getSom().isChecked());
        assertEquals(true, alertaController.getView().getAlertas().isChecked());
    }

    // test methods MUST start with the prefix "test"
    public void testAlertasOff() {
        Switch alertas = alertaController.getView().getAlertas();
        // because I want to simulate a click on a view, I must use TouchUtils
        TouchUtils.clickView(this, alertas);
        // wait a little (1.5 sec) because the UI needs its time to change
        // the switch's state, and then check the new state of the switches
        new Handler().postDelayed(new Runnable() {
            @Override
            public void run() {
                assertEquals(false, alertaController.getView().getVibrar().isChecked());
                assertEquals(false, alertaController.getView().getSom().isChecked());
            }
        }, 1500);
    }
}

Running the JUnit tests

The only difference when running JUnit tests in Android is that you call Run As > Android JUnit Test instead of just JUnit Test as you are used to in Java.

That's all! Hope you like it!

Reference: How to run junit tests inside the android project from our JCG partner Ricardo Ferreira at the Clean Code Development – Quality Seal blog....

For All You Know, It’s Just a Java Library

Blast from the past… I wrote this in May, 2008… and I've gotta say, I was pretty spot-on, including Java 8 adopting some of Scala's better features: It's starting to happen… the FUD around Scala. The dangers of Scala. The "operational risks" of using Scala. It started back in November when I started doing lift and Scala projects for hire. The questions were reasonable and rational:

If you build this project in Scala and lift, who else can maintain it? A: the lift community has 500+ (300+ back in November) people in it and at least 3 (now 10) of the people on the lift list are actively seeking Scala and lift related gigs.

What does lift give us? A: a much faster time to market with Web 2.0, collaborative applications than anything else out there.

Why not use Rails, it's got great developer productivity? A: Rails is great for Person to Computer applications… when you're building a chat app or something that's person to person collaborative, lift's Comet support is much more effective than Rails… plus if you have to scale your app to hundreds of users simultaneously using your app, you can do it on a single box with lift because the JVM gives great operational benefits.

Then came the less rational questions:

Why use new technology when we can outsource the coding to India and pay someone 5% of what we're paying you? A: Because I'm more than 20 times better than the coders you buy for 5% of my hourly rate, plus there's a lot less management of my development efforts.

Why not write it in Java to get the operational benefits of Java? A: Prototyping is hard. Until you know what you want, you need to be agile. Java is not agile. Ruby/Rails and Scala/lift are agile. Choose one of those to do prototyping and then port to Java if there's a reason to.

Will Scala be incompatible with our existing Java code? A: No. Read my lips… No. The same guy who wrote the program (javac) that converts Java to JVM byte code wrote the program that converts Scala to JVM byte code.
There's no one on this planet that could make a more compatible language than Martin Odersky.

Okay… flash forward a few months. Buy a Feature has gone through prototyping, revisions, and all the normal growth that software goes through. It works (yeah… it still needs more, but that's the nature of software versions.) There's another developer (he mainly writes ECMAScript) who picked up the lift code and made a dozen modifications in the heart of the code with 2 hours of walk-through from me and 20-30 IM and email questions. He mainly does Flash and Air work and found Scala and lift to be pretty straightforward. We also had an occasion to have 2,000 simultaneous (as in at the same time, pounding on their keyboards) users of Buy a Feature and we were able to, thanks to Jetty Continuations, service all 2,000 users with 2,000 open connections to our server and an average of 700 requests per second on a dual core opteron with a load average of around 0.24… try that with your Rails app. One of the customers of Buy a Feature wanted it integrated into their larger, Java-powered web portal along with 2 other systems. I did the integration. The customer asks "Where's the Scala part?" I answer "It's in this JAR file." He goes "But, your program is written in Scala, but I looked at the byte-code and it's just Java." I answer "It's Scala… but it compiles down to Java byte-code and it runs in a Java debugger and you can't tell the difference." "You're right," he says. So, to this customer's JVM, the Scala and lift code looks, smells and tastes just like Java code. If I renamed the scala-library.jar file to apache-closures.jar, nobody would know the difference… at all. Okay… but from each set of people I talk to, I hear a similar variation about the "operational risks" of using Scala. Let's step back for a minute. There are development and team risks for using Scala.
Some Java programmers can't wrap their heads around the triple concepts of (1) type inference, (2) passing functions/higher order functions and (3) immutability as the default way of writing code. Most Ruby programmers that I've met don't have the above limitations. So, find a Ruby programmer who knows some Java libraries or find a Java programmer who's done some moonlighting with Rails or Python or JavaScript and you've got a developer who can pick up Scala in a week. Yes, the tools in Scala-land are not as rich as the tools in Java-land. But, once again, anyone who can program Ruby can program in Scala. There's a fine Textmate bundle for Scala. I use jEdit. Steve Jenson uses emacs. Thanks to David Bernard's continuous compilation Maven plugin, you save your file and your code is compiled. Oh… and there's the old Eclipse plugin which more or less works and has access to the Eclipse debugger, and the new Eclipse plugin is reported to work quite well. And then there's the NetBeans plugin which is still raw, but getting better every week. Even with the limitation of weak IDE support, head-to-head, people can write Scala code 2 to 10 times faster than they can write Java code, and maintaining Scala code is much easier because of Scala's strong type system and code conciseness. But, getting back to our old friend "you can't tell it's not Java", I wrote a Scala program and compiled it with -g:vars (put all the symbols in the class file), started the program under jdb (the Java Debugger… a little more on this later) and set a breakpoint.
This is what I got:

> Step completed: "thread=main", foo.ScalaDB$$anonfun$main$1.apply(), line=6 bci=0
> 6    args.zipWithIndex.foreach(v => println(v))

main[1] dump v
 v = {
   _2: instance of java.lang.Integer(id=463)
   _1: "Hello"
 }
main[1] where
  [1] foo.ScalaDB$$anonfun$main$1.apply (ScalaDB.scala:6)
  [2] foo.ScalaDB$$anonfun$main$1.apply (ScalaDB.scala:6)
  [3] scala.Iterator$class.foreach (Iterator.scala:387)
  [4] scala.runtime.BoxedArray$$anon$2.foreach (BoxedArray.scala:45)
  [5] scala.Iterable$class.foreach (Iterable.scala:256)
  [6] scala.runtime.BoxedArray.foreach (BoxedArray.scala:24)
  [7] foo.ScalaDB$.main (ScalaDB.scala:6)
  [8] foo.ScalaDB.main (null)
main[1] print v
 v = "(Hello,0)"
main[1]

My code worked without any fancy footwork inside of the standard Java Debugger. The text of the line that I was on and the variables in the local scope were all there… just as if it was a Java program. The stack traces work the same way. The symbols work the same way. Everything works the same way. Scala code looks and smells and tastes to the JVM just like Java code… now, let's explore why. A long time ago, when Java was Oak and it was being designed as a way to distribute untrusted code into set-top boxes (and later browsers), the rules defining how a program executed and what the instruction set (byte codes) meant were super important. Additionally, the semantics of the program had to be such that the Virtual Machine running the code could (1) verify that the code was well behaved and (2) that the source code and the object code had the same meaning. For example, the casting operation in Java compiles down to a byte code that checks that the class can actually be cast to the right thing, and the verifier ensures that there's no code path that could put an unchecked value into a variable. Put another way, there's no way to write verifiable byte code that can put a reference to a non-String into a variable that's defined as a String.
It’s not just at the compiler level, but at the actual Virtual Machine level that object typing is enforced. In Java 1.0 days, there was nearly a 1:1 correspondence between Java language code and Java byte code. Put another way, there was only one thing you could write in Java byte code that you could not write in Java source code (it has to do with calling super in a constructor.) There was one source code file per class file. Java 1.1 introduced inner classes which broke the 1:1 relationship between Java code and byte code. One of the things that inner classes introduced was access to private instance variables by the inner class. This was done without violating the JVM’s enforcement of the privacy of private variables by creating accessor methods that were compiler enforced (but not JVM enforced) ways for the anonymous classes to access private variables. But the horse was out of the barn at this point anyway, because 1.1 brought us reflection and private was no longer private. An interesting thing about the JVM. From 1.0 through 1.6, there has not been a new instruction added to the JVM. Wow. Think about it. Java came out when the 486 was around. How many instructions have been added to Intel machines since 1995? The Microsoft CLR has been around since 2000 and has gone through 3 revisions and new instructions have been added at every revision and source code compiled under an older revision does not work with newer revisions. On the other hand, I have Java 1.1 compiled code that works just fine under Java 1.6. Pretty amazing. Even to this day, Java Generics are implemented using the same JVM byte-codes that were used in 1996. This is why you get the “type erasure” warnings. The compiler knows the type, but the JVM does not… so a List<String> looks to the JVM like a List, even though the compiler will not let you pass a List<String> to something that expects a List<URL>. On the server side, where we trust the code, this is not an issue. 
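The type-erasure behavior described above is easy to observe directly: at runtime, differently parameterized generic types share a single class. A minimal sketch (class and method names are mine):

```java
import java.util.ArrayList;
import java.util.List;

// Demonstrates type erasure: the compiler distinguishes List<String>
// from List<Integer>, but the JVM sees only one runtime class.
public class ErasureDemo {
    public static boolean sameRuntimeClass() {
        List<String> strings = new ArrayList<>();
        List<Integer> numbers = new ArrayList<>();
        // The generic type arguments are erased; both objects are
        // instances of the same java.util.ArrayList class.
        return strings.getClass() == numbers.getClass();
    }

    public static void main(String[] args) {
        System.out.println(sameRuntimeClass()); // prints "true"
    }
}
```

Passing `strings` where a `List<Integer>` is expected fails at compile time, yet the runtime check above shows the distinction is gone in the byte code.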
If we were writing code for an untrusted world, we'd care a lot more about the semantics of the source code being enforced by the execution environment. So, there have been no new JVM instructions since Java was released. The JVM is perhaps the best specified piece of software this side of ADA-based military projects. There are specs and slow-moving JSRs for everything. Turns out, this works to our benefit. The JVM has a clearly defined interface to debugging. The information that a class file needs to provide to the JVM for line numbers, variable names, etc. is very clearly specified. Because the JVM has a limited instruction set, and the type of each item on the stack and of each instance variable in a class is known and verified when the class loads, the debugging information works for anything that compiles down to Java byte code and has the semantics of named local variables and named instance variables. Scala shares these semantics with Java, and that's why the Scala compiler can compile byte-code that has the appropriate debugging information so that it "just works" with jdb. And, just to be clear, jdb uses the standard, well documented interface into the JVM to do debugging, and every other IDE for the JVM uses this same interface. That means that an IDE that compiles Scala can also hook into the JVM and debug Scala. That's why debugging works with the Scala Eclipse plugin. But, let's go back to the statement: nobody knows Scala's operational characteristics. That's just not true. Scala's operational characteristics are the same as Java's. The Scala compiler generates byte code that is nearly identical to what the Java compiler generates. In fact, you can decompile Scala code and wind up with readable Java code, with the exception of certain constructor operations. To the JVM, Scala code and Java code are indistinguishable. The only difference is that there's a single extra library file to support Scala.
Now, in most software projects, you don't have CEOs and board members, and everybody's grandmother asking what libraries you're using. In fact, in every project I've stepped into, there have been at least 2 libraries that the senior developers did not add but somehow got introduced into the mix (I believe in library audits to make sure there are no license violations in the library mix.) So, in the normal course of business, libraries are added to projects all the time. Any moderately complex project depends on dozens of libraries. I can tell you to a 100% degree of certainty that there are libraries in that mix that will not pass the "is the company that supports them going to be around in 5 years?" test. Period. Sure, memcached will be around in 5 years and most of the memcached clients will. Slide on the other hand is "retired". And Mongrel… Making the choice to use Scala should be a deliberate, deliberated, well reasoned choice. It has to do with developer productivity, both to build the initial product and to maintain the product through a 2-5 year lifecycle. It has to do with maintaining existing QA and operations infrastructure (for existing JVM shops) or moving to the most scalable, flexible, predictable, well tested, and well supported web infrastructure around: the JVM. Recruiting team members who can do Scala may be a challenge. Standardizing on a development environment may be a challenge as the Scala IDE support is immature (but there's always emacs, vi, jEdit and Textmate, which work just fine.) Standardizing on a coding style is a challenge. These are all people challenges, all localized to recruiting and development and management thereof. The only rational parts of the debate are the trade-off between recruiting and organizing the team and the benefits to be gained from Scala. But, you say, what if Martin Odersky decides to take Scala in a wrong direction? Then freeze at Scala 2.7 or 2.8 or wherever you feel the break is rational.
It was only last year that Kaiser moved from Java 1.3 to 1.4. Working off of 2 or 3 year old technology is normal. Running against trunk-head is not the way of an organization that's asking the question "where will Martin take Scala in 5 years?" And oh, by the way, if Martin gets off track or Scala for some reason languishes, it's most likely to be the same scenario as GJ (Generics Java… Martin's prior project that turned into Java Generics)… it's because Java 8 or Java 9 has adopted enough of Scala's features to make Scala marginal. In that case, you spend a couple of months porting the Scala code to Java XX and in the process fix some bugs. And not to put too fine a point on it, but Martin's team runs one of the best ISVs I've ever seen. They crank out a new, feature-packed release every six months or so. They respond, sometimes within hours, to bug reports. There is an active support mechanism with some of the best coders around waiting to answer questions from newbies and old hands alike. If we were to measure the Scala team by commercial standards, they've got a longer funding runway than any private software company around and they're more responsive than almost every ISV, public or private. So what if they're academic… maybe that means they're thinking through issues rather than being code-wage slaves. Bottom line… to anyone other than the folks with hands in the code and the folks who have to recruit and manage them, "For all you know, it's just another Java library."

Reference: For All You Know, It's Just a Java Library from our JCG partner David Pollak at the DPP's Blog blog....

ChoiceFormat: Numeric Range Formatting

The Javadoc for the ChoiceFormat class states that ChoiceFormat "allows you to attach a format to a range of numbers" and is "generally used in a MessageFormat for handling plurals." This post describes java.text.ChoiceFormat and provides some examples of applying it in Java code. One of the most noticeable differences between ChoiceFormat and other "format" classes in the java.text package is that ChoiceFormat does not provide static methods for accessing instances of ChoiceFormat. Instead, ChoiceFormat provides two constructors that are used for instantiating ChoiceFormat objects. The Javadoc for ChoiceFormat highlights and explains this:

ChoiceFormat differs from the other Format classes in that you create a ChoiceFormat object with a constructor (not with a getInstance style factory method). The factory methods aren't necessary because ChoiceFormat doesn't require any complex setup for a given locale. In fact, ChoiceFormat doesn't implement any locale specific behavior.

Constructing ChoiceFormat with Two Arrays

The first of two constructors provided by ChoiceFormat accepts two arrays as its arguments. The first array is an array of primitive doubles that represent the smallest (starting) value of each interval. The second array is an array of Strings that represent the names associated with each interval. The two arrays must have the same number of elements because there is an assumed one-to-one mapping between the numeric (double) intervals and the Strings describing those intervals.
If the two arrays do not have the same number of elements, the following exception is encountered:

Exception in thread "main" java.lang.IllegalArgumentException: Array and limit arrays must be of the same length.

The Javadoc for the ChoiceFormat(double[], String[]) constructor states that the first array parameter is named "limits", is of type double[], and is described as "limits in ascending order." The second array parameter is named "formats", is of type String[], and is described as "corresponding format strings." According to the Javadoc, this constructor "constructs with the limits and the corresponding formats." Use of the ChoiceFormat constructor accepting two array arguments is demonstrated in the next code listing (the writeGradeInformation(ChoiceFormat) method and fredsTestScores variable will be shown later).

/**
 * Demonstrate ChoiceFormat instantiated with ChoiceFormat
 * constructor that accepts an array of double and an array
 * of String.
 */
public void demonstrateChoiceFormatWithDoublesAndStringsArrays()
{
   final double[] minimumPercentages = {0, 60, 70, 80, 90};
   final String[] letterGrades = {"F", "D", "C", "B", "A"};
   final ChoiceFormat gradesFormat = new ChoiceFormat(minimumPercentages, letterGrades);
   writeGradeInformation(fredsTestScores, gradesFormat);
}

The example above satisfies the expectations of the illustrated ChoiceFormat constructor. The two arrays have the same number of elements, the first (double[]) array has its elements in ascending order, and the second (String[]) array has its "formats" in the same order as the corresponding interval-starting limits in the first array. The writeGradeInformation(ChoiceFormat) method referenced in the code snippet above demonstrates use of a ChoiceFormat instance based on the two arrays to "format" provided numerical values as Strings. The method's implementation is shown next.

/**
 * Write grade information to standard output
 * using the provided ChoiceFormat instance.
 *
 * @param testScores Test scores to be displayed with formatting.
 * @param gradesFormat ChoiceFormat instance to be used to format output.
 */
public void writeGradeInformation(
   final Collection<Double> testScores,
   final ChoiceFormat gradesFormat)
{
   double sum = 0;
   for (final Double testScore : testScores)
   {
      sum += testScore;
      out.println(testScore + " is a '" + gradesFormat.format(testScore) + "'.");
   }
   double average = sum / testScores.size();
   out.println(
      "The average score (" + average + ") is a '"
      + gradesFormat.format(average) + "'.");
}

The code above uses the provided ChoiceFormat instance to "format" test scores. Instead of printing a numeric value, the "format" prints the String associated with the interval that the numeric value falls within. The next code listing shows the definition of fredsTestScores used in these examples.

private static List<Double> fredsTestScores;
static
{
   final ArrayList<Double> scores = new ArrayList<>();
   scores.add(75.6);
   scores.add(88.8);
   scores.add(97.3);
   scores.add(43.3);
   fredsTestScores = Collections.unmodifiableList(scores);
}

Running these test scores through the ChoiceFormat instance instantiated with two arrays generates the following output:

75.6 is a 'C'.
88.8 is a 'B'.
97.3 is a 'A'.
43.3 is a 'F'.
The average score (76.25) is a 'C'.

Constructing ChoiceFormat with a Pattern String

The ChoiceFormat(String) constructor that accepts a String-based pattern may be more appealing to developers who are comfortable using String-based patterns with similar formatting classes such as DateFormat and DecimalFormat. The next code listing demonstrates use of this constructor. The pattern provided to the constructor leads to an instance of ChoiceFormat that should format the same way as the ChoiceFormat instance created in the earlier example with the constructor that takes two arrays.

/**
 * Demonstrate ChoiceFormat instantiated with ChoiceFormat
 * constructor that accepts a String pattern.
 */
public void demonstrateChoiceFormatWithStringPattern()
{
   final String limitFormatPattern = "0#F | 60#D | 70#C | 80#B | 90#A";
   final ChoiceFormat gradesFormat = new ChoiceFormat(limitFormatPattern);
   writeGradeInformation(fredsTestScores, gradesFormat);
}

The writeGradeInformation method called here is the same as the one called earlier and the output is also the same (not shown here because it is the same).

ChoiceFormat Behavior on the Extremes and Boundaries

The examples so far have worked well with test scores in the expected ranges. Another set of test scores will now be used to demonstrate some other features of ChoiceFormat. This new set of test scores is set up in the next code listing and includes an "impossible" negative score and another "likely impossible" score above 100.

private static List<Double> boundaryTestScores;
static
{
   final ArrayList<Double> boundaryScores = new ArrayList<Double>();
   boundaryScores.add(-25.0);
   boundaryScores.add(0.0);
   boundaryScores.add(20.0);
   boundaryScores.add(60.0);
   boundaryScores.add(70.0);
   boundaryScores.add(80.0);
   boundaryScores.add(90.0);
   boundaryScores.add(100.0);
   boundaryScores.add(115.0);
   boundaryTestScores = boundaryScores;
}

When the set of test scores above is run through either of the ChoiceFormat instances created earlier, the output is as shown next.

-25.0 is a 'F '.
0.0 is a 'F '.
20.0 is a 'F '.
60.0 is a 'D '.
70.0 is a 'C '.
80.0 is a 'B '.
90.0 is a 'A'.
100.0 is a 'A'.
115.0 is a 'A'.
The average score (56.666666666666664) is a 'F '.

The output just shown demonstrates that the "limits" set in the ChoiceFormat constructors are "inclusive," meaning that each limit applies to the specified value and above (until the next limit). In other words, each range is defined as greater than or equal to the specified limit.
The Javadoc documentation for ChoiceFormat describes this with a mathematical description:

X matches j if and only if limit[j] ≤ X < limit[j+1]

The output from the boundaries test scores example also demonstrates another characteristic of ChoiceFormat described in its Javadoc documentation: "If there is no match, then either the first or last index is used, depending on whether the number (X) is too low or too high." Because there is no match for -25.0 in the provided ChoiceFormat instances, the lowest range ('F' for the limit of 0) is applied to that number below the lowest limit. In these test score examples, there is no limit specified higher than the "90" for an "A", so all scores higher than 90 (including those above 100) are an "A". Let's suppose that we wanted to enforce the ranges of scores to be between 0 and 100, or else have the formatted result indicate "Invalid" for scores less than 0 or greater than 100. This can be done as shown in the next code listing.

/**
 * Demonstrating enforcing of lower and upper boundaries
 * with ChoiceFormat instances.
 */
public void demonstrateChoiceFormatBoundariesEnforced()
{
   // Demonstrating boundary enforcement with ChoiceFormat(double[], String[])
   final double[] minimumPercentages =
      {Double.NEGATIVE_INFINITY, 0, 60, 70, 80, 90, 100.000001};
   final String[] letterGrades =
      {"Invalid - Too Low", "F", "D", "C", "B", "A", "Invalid - Too High"};
   final ChoiceFormat gradesFormat = new ChoiceFormat(minimumPercentages, letterGrades);
   writeGradeInformation(boundaryTestScores, gradesFormat);

   // Demonstrating boundary enforcement with ChoiceFormat(String)
   final String limitFormatPattern =
      "-\u221E#Invalid - Too Low | 0#F | 60#D | 70#C | 80#B | 90#A | 100.0<Invalid - Too High";
   final ChoiceFormat gradesFormat2 = new ChoiceFormat(limitFormatPattern);
   writeGradeInformation(boundaryTestScores, gradesFormat2);
}

When the above method is executed, its output shows that both approaches enforce boundary conditions better.
-25.0 is a 'Invalid - Too Low'.
0.0 is a 'F'.
20.0 is a 'F'.
60.0 is a 'D'.
70.0 is a 'C'.
80.0 is a 'B'.
90.0 is a 'A'.
100.0 is a 'A'.
115.0 is a 'Invalid - Too High'.
The average score (56.666666666666664) is a 'F'.

-25.0 is a 'Invalid - Too Low '.
0.0 is a 'F '.
20.0 is a 'F '.
60.0 is a 'D '.
70.0 is a 'C '.
80.0 is a 'B '.
90.0 is a 'A '.
100.0 is a 'A '.
115.0 is a 'Invalid - Too High'.
The average score (56.666666666666664) is a 'F '.

The last code listing demonstrates using Double.NEGATIVE_INFINITY and \u221E (the Unicode INFINITY character) to establish a lowest possible limit boundary in each of the examples. For scores above 100.0 to be formatted as invalid, the arrays-based ChoiceFormat uses a number slightly bigger than 100 as the lower limit of that invalid range. The String/pattern-based ChoiceFormat instance provides greater flexibility and exactness in specifying the lower limit of the "Invalid - Too High" range as any number greater than 100.0 using the less-than symbol (<).

Handling None, Singular, and Plural with ChoiceFormat

I opened this post by quoting the Javadoc stating that ChoiceFormat is "generally used in a MessageFormat for handling plurals," but have not yet demonstrated this common use in this post. I will demonstrate a portion of this (plurals without MessageFormat) very briefly here for completeness, but a much more complete explanation (plurals with MessageFormat) of this common usage of ChoiceFormat is available in the Java Tutorials' Handling Plurals lesson (part of the Internationalization trail). The next code listing demonstrates application of ChoiceFormat to handle none, singular, and plural cases.

/**
 * Demonstrate ChoiceFormat used for differentiation of
 * singular from plural and none.
 */
public void demonstratePluralAndSingular()
{
   final double[] cactiLowerLimits = {0, 1, 2, 3, 4, 10};
   final String[] cactiRangeDescriptions =
      {"no cacti", "a cactus", "a couple cacti", "a few cacti", "many cacti", "a plethora of cacti"};
   final ChoiceFormat cactiFormat = new ChoiceFormat(cactiLowerLimits, cactiRangeDescriptions);
   for (int cactiCount = 0; cactiCount < 11; cactiCount++)
   {
      out.println(cactiCount + ": I own " + cactiFormat.format(cactiCount) + ".");
   }
}

Running the example in the last code listing leads to the output shown next.

0: I own no cacti.
1: I own a cactus.
2: I own a couple cacti.
3: I own a few cacti.
4: I own many cacti.
5: I own many cacti.
6: I own many cacti.
7: I own many cacti.
8: I own many cacti.
9: I own many cacti.
10: I own a plethora of cacti.

One Final Symbol Supported by ChoiceFormat's Pattern

One other symbol that ChoiceFormat pattern parsing recognizes for formatting strings from a generated numeric value is \u2264 (≤). This is demonstrated in the next code listing and the output that follows it. Note that in this example \u2264 works effectively the same as the simpler # sign shown earlier.

/**
 * Demonstrate using \u2264 in a String pattern for ChoiceFormat.
 * It is treated differently than the less-than sign but
 * similarly to #.
 */
public void demonstrateLessThanOrEquals()
{
   final String limitFormatPattern = "0\u2264F | 60\u2264D | 70\u2264C | 80\u2264B | 90\u2264A";
   final ChoiceFormat gradesFormat = new ChoiceFormat(limitFormatPattern);
   writeGradeInformation(fredsTestScores, gradesFormat);
}

75.6 is a 'C '.
88.8 is a 'B '.
97.3 is a 'A'.
43.3 is a 'F '.
The average score (76.25) is a 'C '.
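Returning to the plurals topic above: the "generally used in a MessageFormat" usage the Javadoc mentions can be sketched roughly as follows. This is a minimal sketch, not from the original post; the message text and class name are mine, but the pattern style follows the standard java.text documentation:

```java
import java.text.MessageFormat;

// Sketch of a ChoiceFormat embedded in a MessageFormat pattern to
// select between "none", singular, and plural message forms.
public class FileCountMessage {
    // The {0,choice,...} argument picks a sub-message by numeric range;
    // "1<" means "greater than 1", and the nested {0,number,integer}
    // re-formats the count inside the chosen sub-message.
    private static final MessageFormat FORMAT = new MessageFormat(
        "There {0,choice,0#are no files|1#is one file|1<are {0,number,integer} files}.");

    public static String describe(long count) {
        return FORMAT.format(new Object[] {count});
    }

    public static void main(String[] args) {
        System.out.println(describe(0)); // There are no files.
        System.out.println(describe(1)); // There is one file.
        System.out.println(describe(3)); // There are 3 files.
    }
}
```

The nested choice pattern keeps all three grammatical forms in one localizable message string, which is the main reason ChoiceFormat is usually paired with MessageFormat.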
Observations in Review In this section, I summarize some of the observations regarding ChoiceFormat made during the course of this post and its examples. When using the ChoiceFormat(double[], String[]) constructor, the two passed-in arrays must be of equal size or else an IllegalArgumentException (“Array and limit arrays must be of the same length.”) will be thrown. The “limits” double[] array provided to the ChoiceFormat(double[], String[]) constructor should have the limits listed from left-to-right in ascending numerical order. When this is not the case, no exception is thrown, but the logic is almost certainly not going to be correct, as Strings being formatted against the instance of ChoiceFormat will “match” incorrectly. This same expectation applies to the constructor accepting a pattern. ChoiceFormat allows Double.POSITIVE_INFINITY and Double.NEGATIVE_INFINITY to be used for specifying lower range limits via its two-arrays constructor. ChoiceFormat allows \u221E and -\u221E to be used for specifying lower range limits via its single String (pattern) constructor. The ChoiceFormat constructor accepting a String pattern is a bit more flexible than the two-arrays constructor and allows one to specify lower limit boundaries as everything over a certain amount without including that certain amount exactly. Symbols and characters with special meaning in the String patterns provided to the single String ChoiceFormat constructor include #, <, \u2264 (≤), \u221E (∞), and |. Conclusion ChoiceFormat allows formatting of numeric ranges to be customized so that specific ranges can have different and specific representations. This post has covered several different aspects of numeric range formatting with ChoiceFormat, but parsing numeric ranges from Strings using ChoiceFormat was not covered in this post.
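The first two observations above — equal-length arrays and ascending limits — can be checked with a short sketch (the grade limits are reused from the earlier examples):

```java
import java.text.ChoiceFormat;

public class ChoiceFormatLimits {
    public static void main(String[] args) {
        // Equal-length, ascending limit/format arrays work as expected.
        double[] limits = {0, 60, 70, 80, 90};
        String[] grades = {"F", "D", "C", "B", "A"};
        ChoiceFormat gradesFormat = new ChoiceFormat(limits, grades);
        System.out.println(gradesFormat.format(85.0)); // prints "B"

        // Arrays of different lengths are rejected at construction time.
        try {
            new ChoiceFormat(new double[]{0, 60}, new String[]{"F"});
            System.out.println("no exception");
        } catch (IllegalArgumentException e) {
            System.out.println("IllegalArgumentException thrown");
        }
    }
}
```

Running this prints "B" followed by "IllegalArgumentException thrown".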
Further ReadingChoiceFormat API Documentation Handling Plurals Text: Freedom with Message Format – Part 2: Choice Format Java i18n Pluralisation using ChoiceFormat What’s wrong with ChoiceFormat? (Lost in translation – part IV) More about what’s wrong with ChoiceFormatReference: ChoiceFormat: Numeric Range Formatting from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

Reduce Boilerplate Code in your Java applications with Project Lombok

One of the most frequently voiced criticisms of the Java programming language is the amount of Boilerplate Code it requires. This is especially true for simple classes that should do nothing more than store a few values. You need getters and setters for these values, maybe you also need a constructor, overriding equals() and hashCode() is often required and maybe you want a more useful toString() implementation. In the end you might have 100 lines of code that could be rewritten with 10 lines of Scala or Groovy code. Java IDEs like Eclipse or IntelliJ try to reduce this problem by providing various types of code generation functionality. However, even if you do not have to write the code yourself, you always see it (and get distracted by it) if you open such a file in your IDE. Project Lombok (don’t be frightened by the ugly web page) is a small Java library that can help reduce the amount of Boilerplate Code in Java Applications. Project Lombok provides a set of annotations that are processed at development time to inject code into your Java application. The injected code is immediately available in your development environment. Let’s have a look at the following Eclipse screenshot: The defined class is annotated with Lombok’s @Data annotation and contains nothing more than three private fields. @Data automatically injects getters, setters (for non-final fields), equals(), hashCode(), toString() and a constructor for initializing the final dateOfBirth field. As you can see the generated methods are directly available in Eclipse and shown in the Outline view. Setup To set up Lombok for your application you have to put lombok.jar on your classpath.
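For comparison, this is roughly the boilerplate that a single @Data annotation replaces — a hand-written sketch of a class like the one in the screenshot (field names are assumed from the description above; dateOfBirth is typed as String here only to keep the sketch self-contained, and Lombok’s actual generated code differs in details):

```java
import java.util.Objects;

// Hand-written equivalent of what @Data would generate (sketch).
public class Person {
    private final String dateOfBirth;
    private String firstName;
    private String lastName;

    // Constructor initializing the final field.
    public Person(String dateOfBirth) { this.dateOfBirth = dateOfBirth; }

    // Getters for all fields, setters only for non-final fields.
    public String getDateOfBirth() { return dateOfBirth; }
    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Person)) return false;
        Person p = (Person) o;
        return Objects.equals(dateOfBirth, p.dateOfBirth)
                && Objects.equals(firstName, p.firstName)
                && Objects.equals(lastName, p.lastName);
    }

    @Override
    public int hashCode() { return Objects.hash(dateOfBirth, firstName, lastName); }

    @Override
    public String toString() {
        return "Person(dateOfBirth=" + dateOfBirth + ", firstName=" + firstName
                + ", lastName=" + lastName + ")";
    }
}
```

Roughly 40 lines of noise, collapsed by Lombok into one annotation and three field declarations.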
If you are using Maven you just have to add the following dependency to your pom.xml: <dependency>   <groupId>org.projectlombok</groupId>   <artifactId>lombok</artifactId>   <version>1.14.6</version>   <scope>provided</scope> </dependency> You also need to set up Lombok in the IDE you are using: NetBeans users just have to enable the Enable Annotation Processing in Editor option in their project properties (see: NetBeans instructions). Eclipse users can install Lombok by double clicking lombok.jar and following a quick installation wizard. For IntelliJ a Lombok Plugin is available. Getting started The @Data annotation shown in the introduction is actually a shortcut for various other Lombok annotations. Sometimes @Data does too much. In this case, you can fall back to more specific Lombok annotations that give you more flexibility. Generating only getters and setters can be achieved with @Getter and @Setter: @Getter @Setter public class Person {   private final LocalDate birthday;   private String firstName;   private String lastName;  public Person(LocalDate birthday) {     this.birthday = birthday;   } } Note that getter methods for boolean fields are prefixed with is instead of get (e.g. isFoo() instead of getFoo()). If you only want to generate getters and setters for specific fields you can annotate these fields instead of the class. Generating equals(), hashCode() and toString(): @EqualsAndHashCode @ToString public class Person {   ... } @EqualsAndHashCode and @ToString also have various properties that can be used to customize their behaviour: @EqualsAndHashCode(exclude = {"firstName"}) @ToString(callSuper = true, of = {"firstName", "lastName"}) public class Person {   ... } Here the field firstName will not be considered by equals() and hashCode(). toString() will call super.toString() first and only consider firstName and lastName.
For constructor generation multiple annotations are available:@NoArgsConstructor generates a constructor that takes no arguments (default constructor). @RequiredArgsConstructor generates a constructor with one parameter for all non-initialized final fields. @AllArgsConstructor generates a constructor with one parameter for all fields in the class.The @Data annotation is actually an often used shortcut for @ToString, @EqualsAndHashCode, @Getter, @Setter and @RequiredArgsConstructor. If you prefer immutable classes you can use @Value instead of @Data: @Value public class Person {   LocalDate birthday;   String firstName;   String lastName; } @Value is a shortcut for @ToString, @EqualsAndHashCode, @AllArgsConstructor, @FieldDefaults(makeFinal = true, level = AccessLevel.PRIVATE) and @Getter. So, with @Value you get toString(), equals(), hashCode(), getters and a constructor with one parameter for each field. It also makes all fields private and final by default, so you do not have to add private or final modifiers. Looking into Lombok’s experimental features Besides the well supported annotations shown so far, Lombok has a couple of experimental features that can be found on the Experimental Features page. One of these features I like in particular is the @Builder annotation, which provides an implementation of the Builder Pattern. @Builder public class Person {   private final LocalDate birthday;   private String firstName;   private String lastName; } @Builder generates a static builder() method that returns a builder instance. This builder instance can be used to build an object of the class annotated with @Builder (here Person): Person p = Person.builder()   .birthday(LocalDate.of(1980, 10, 5))   .firstName("John")   .lastName("Smith")   .build(); By the way, if you wonder what this LocalDate class is, you should have a look at my blog post about the Java 8 date and time API! 
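What @Builder generates can be pictured as a hand-written builder along these lines (a sketch, not Lombok’s exact output; birthday is typed as String here only to keep the example self-contained):

```java
// Hand-written equivalent of what @Builder would generate for Person (sketch).
public class Person {
    private final String birthday;
    private final String firstName;
    private final String lastName;

    // The constructor stays private; instances are created via the builder.
    private Person(String birthday, String firstName, String lastName) {
        this.birthday = birthday;
        this.firstName = firstName;
        this.lastName = lastName;
    }

    public static Builder builder() { return new Builder(); }

    public String getLastName() { return lastName; }

    public static class Builder {
        private String birthday;
        private String firstName;
        private String lastName;

        // Each setter returns the builder so calls can be chained.
        public Builder birthday(String birthday) { this.birthday = birthday; return this; }
        public Builder firstName(String firstName) { this.firstName = firstName; return this; }
        public Builder lastName(String lastName) { this.lastName = lastName; return this; }

        public Person build() { return new Person(birthday, firstName, lastName); }
    }
}
```

With this in place, `Person.builder().birthday("1980-10-05").firstName("John").lastName("Smith").build()` works just like the Lombok-generated version.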
Conclusion Project Lombok injects generated methods, like getters and setters, based on annotations. It provides an easy way to significantly reduce the amount of Boilerplate code in Java applications. Be aware that there is a downside: According to reddit comments (including a comment of the project author), Lombok has to rely on various hacks to get the job done. So, there is a chance that future JDK or IDE releases will break the functionality of Project Lombok. On the other hand, these comments were made 5 years ago and Project Lombok is still actively maintained. You can find the source of Project Lombok on GitHub. Reference: Reduce Boilerplate Code in your Java applications with Project Lombok from our JCG partner Michael Scharhag at the mscharhag, Programming and Stuff blog....

3 Essential Ways To Start Your JBoss BPM Process

This episode of tips and tricks will help you to understand the best way to initiate your process instances for your needs. Planning your projects might include process projects, but have you thought about the various ways that you can initiate your process? Maybe you have JBoss BPM Suite running locally in your architecture, maybe you have it running in the Cloud, but wherever it is you will still need to make an informed choice about how to initiate a process. We will cover here three essential ways you can best start a JBoss BPM process: the UI dashboard, the RestAPI, and a client application (API). BPM Suite UI In the interest of completeness we have to mention that the ability to start a process instance exists in the form of a button within the JBoss BPM Suite dashboard tooling. When logged into JBoss BPM Suite and you have finished project development, your BPM project can then be built and deployed as follows. AUTHORING -> PROJECT AUTHORING -> TOOLS -> PROJECT EDITOR -> BUILD&DEPLOY (button) The next step is to start a process instance in the process management perspective in one of two ways. 1. PROCESS MANAGEMENT -> PROCESS DEFINITIONS -> start-icon 2. PROCESS MANAGEMENT -> PROCESS DEFINITIONS -> magnifying-glass-icon -> in DETAILS panel -> NEW INSTANCE (button) Both of these methods will result in a process instance being started, popping up a start form if data is to be submitted to the BPM process. RestAPI Assuming you are going to be calling for a start of your BPM process after deployment from various possible locations, we wanted to show you how these might be easily integrated. It does not matter if you are starting a process from a web application, a mobile application or creating backend services for your enterprise to use as a starting point for processes. The exposed RestAPI provides the perfect way to trigger your BPM process and is shown in the following code example.
This example is a very simple Rest client that, for clarity, will be embedding the various variables one might pass to such a client directly into the example code. There are no variables passed to the process being started, for that we will provide a more complete example in the section covering a client application. It sends a start process command and expects no feedback from the Customer Evaluation BPM process being called, as it is a Straight Through Process (STP). public class RestClientSimple { private static final String BASE_URL = "http://localhost:8080/business-central/rest/"; private static final String AUTH_URL = "http://localhost:8080/business-central/org.kie.workbench.KIEWebapp/j_security_check"; private static final String DEPLOYMENT_ID = "customer:evaluation:1.0"; private static final String PROCESS_DEF_ID = "customer.evaluation"; private static String username = "erics"; private static String password = "bpmsuite"; private static AuthenticationType type = AuthenticationType.FORM_BASED;public static void main(String[] args) throws Exception {System.out.println("Starting process instance: " + DEPLOYMENT_ID); System.out.println(); // start a process instance with no variables. startProcess();System.out.println(); System.out.println("Completed process instance: " + DEPLOYMENT_ID); }/** * Start a process using the rest api start call, no map variables passed. * * @throws Exception */ public static void startProcess() throws Exception { String newInstanceUrl = BASE_URL + "runtime/" + DEPLOYMENT_ID + "/process/" + PROCESS_DEF_ID + "/start"; String dataFromService = getDataFromService(newInstanceUrl, "POST"); System.out.println("newInstanceUrl:["+newInstanceUrl+"]"); System.out.println("--------"); System.out.println(dataFromService); System.out.println("--------"); }<...SNIPPED MORE CODE...> } The basics here are the setup of the business central URL to point to the start RestAPI call. 
In the main method one finds a method call to startProcess() which builds the RestAPI URL and captures the data reply sent from JBoss BPM Suite. To see the details of how that is accomplished, please refer to the class in its entirety within the JBoss BPM Suite and JBoss Fuse Integration Demo project. Intermezzo on testing An easy way to test your process once it has been built and deployed is to use curl to push a request to the process via the RestAPI. Such a request looks like the following, first in generic form and then a real run through the same Customer Evaluation project as used in the previous example. The generic RestAPI call and proper authentication request is done in curl as follows: $ curl -X POST -H 'Accept: application/json' -uerics 'http://localhost:8080/business-central/rest/runtime/customer:evaluation:1.1/process/customer.evaluation/start?map_par1=var1&map_par2=var2' For the Customer Evaluation process a full cycle of using curl to call the start process, authenticating our user and receiving a response from JBoss BPM Suite should provide the following output. $ curl -X POST -H 'Accept: application/json' -uerics 'http://localhost:8080/business-central/rest/runtime/customer:evaluation:1.1/process/customer.evaluation/start?map_employee=erics' Enter host password for user 'erics': bpmsuite1! {"status":"SUCCESS","url":"http://localhost:8080/business-central/rest/runtime/customer:evaluation:1.1/process/customer.evaluation/start?map_employee=erics","index":null,"commandName":null,"processId":"customer.evaluation","id":3,"state":2,"eventTypes":[]} We see the process instances complete in the process instance perspectives as shown. Client application The third and final way to start your JBoss BPM Suite process instances is more in line with injecting a set of pre-defined submissions to populate the reporting history, and could be based on historical data.
The example shown here is available in most demo projects we provide but is taken from the Mortgage Demo project. This demo client is using static lines of data to be injected into the process one at a time. With a few minor adjustments one could pull in historical data from an existing data source and inject as many processes as desired in this format. It also is a nice way to stress test your process projects. We will skip the setup of the session and process details as these have been shown above, but provide instead a link to the entire demo client class and leave these details for the reader to pursue. Here we will just focus on how the individual start process calls will look. public static void populateSamples(String userId, String password, String applicationContext, String deploymentId) {RuntimeEngine runtimeEngine = getRuntimeEngine( applicationContext, deploymentId, userId, password ); KieSession kieSession = runtimeEngine.getKieSession(); Map processVariables;//qualify with very low interest rate, great credit, non-jumbo loan processVariables = getProcessArgs( "Amy", "12301 Wilshire", 333224449, 100000, 500000, 100000, 30 ); kieSession.startProcess( "com.redhat.bpms.examples.mortgage.MortgageApplication", processVariables );} As you can see the last line is where the individual mortgage submission is pushed to JBoss BPM Suite. If you examine the rest of the class you will find multiple entries being started one after another. We hope you now have a good understanding of the ways you can initiate a process and choose the one that best suits your project needs.Reference: 3 Essential Ways To Start Your JBoss BPM Process from our JCG partner Eric Schabell at the Eric Schabell’s blog blog....

Common Mistakes Junior Developers Do When Writing Unit Tests

It’s been 10 years since I wrote my first unit test. Since then, I can’t remember how many thousands of unit tests I’ve written. To be honest I don’t make any distinction between source code and test code. For me it’s the same thing. Test code is part of the source code. The last 3-4 years, I’ve worked with several development teams and I had the chance to review a lot of test code. In this post I’m summarizing the most common mistakes that inexperienced developers usually make when writing unit tests. Let’s take a look at the following simple example of a class that collects registration data, validates them and performs a user registration. Clearly the method is extremely simple and its purpose is to demonstrate the common mistakes of unit tests and not to provide a fully functional registration example: public class RegistrationForm { private String name,email,pwd,pwdVerification; // Setters - Getters are omitted public boolean register(){ validate(); return doRegister(); } private void validate () { check(name, "name"); check(email, "email"); check(pwd, "pwd"); check(pwdVerification, "pwdVerification"); if (!email.contains("@")) { throw new ValidationException(email + " is not a valid email address."); } if ( !pwd.equals(pwdVerification)) throw new ValidationException("Passwords do not match."); } private void check(String value, String name) throws ValidationException { if ( value == null) { throw new ValidationException(name + " cannot be empty."); } if (value.length() == 0) { throw new ValidationException(name + " is too short."); } } private boolean doRegister() { //Do something with the persistent context return true; } } Here’s a corresponding unit test for the register method to intentionally show the most common mistakes in unit testing.
Actually I’ve seen very similar test code many times, so it’s not what I’d call science fiction: @Test public void test_register(){ RegistrationForm form = new RegistrationForm(); form.setEmail("Al.Pacino@example.com"); form.setName("Al Pacino"); form.setPwd("GodFather"); form.setPwdVerification("GodFather"); assertNotNull(form.getEmail()); assertNotNull(form.getName()); assertNotNull(form.getPwd()); assertNotNull(form.getPwdVerification()); form.register(); } Now, this test will obviously pass, the developer will see the green light so thumbs up! Let’s move to the next method. However this test code has several important issues. The first one, which is in my humble opinion the biggest misuse of unit tests, is that the test code is not adequately testing the register method. Actually it tests only one out of many possible paths. Are we sure that the method will correctly handle null arguments? How will the method behave if the email doesn’t contain the @ character or the passwords don’t match? Developers tend to write unit tests only for the successful paths and my experience has shown that most of the bugs discovered in code are not related to the successful paths. A very good rule to remember is that for every method you need N tests, where N equals the cyclomatic complexity of the method plus the cyclomatic complexity of all the private methods it calls. Next is the name of the test method. For this one I partially blame all these modern IDEs that auto-generate stupid names for test methods like the one in the example. The test method should be named in such a way that it explains to the reader what is going to be tested and under which conditions. In other words it should describe the path under testing. In our case a better name could be: should_register_when_all_registration_data_are_valid.
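A failure-path test for the example above could look like the following sketch. Plain assertions and a minimal stand-in for the validation logic are used here so the example is self-contained (in a real project this would be a JUnit test against the actual RegistrationForm):

```java
// Sketch of a failure-path test: mismatched passwords must raise
// a ValidationException, and the test asserts exactly that.
public class RegistrationFormTest {

    static class ValidationException extends RuntimeException {
        ValidationException(String message) { super(message); }
    }

    // Minimal stand-in for the password check inside validate().
    static void validatePasswords(String pwd, String pwdVerification) {
        if (!pwd.equals(pwdVerification)) {
            throw new ValidationException("Passwords do not match.");
        }
    }

    public static void main(String[] args) {
        should_fail_when_passwords_do_not_match();
        System.out.println("OK");
    }

    static void should_fail_when_passwords_do_not_match() {
        boolean thrown = false;
        try {
            validatePasswords("GodFather", "GodFather2");
        } catch (ValidationException expected) {
            thrown = true;
        }
        if (!thrown) throw new AssertionError("Expected ValidationException");
    }
}
```

Note how the method name states both the expected outcome and the condition, and how the test covers a path the happy-path test above never touches.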
In this article you can find several approaches to naming unit tests but for me the ‘should’ pattern is the closest to human language and easier to understand when reading test code. Now let’s see the meat of the code. There are several assertions and this violates the rule that each test method should assert one and only one thing. This one asserts the state of four (4) RegistrationForm attributes. This makes the test harder to maintain and read (oh yes, test code should be maintainable and readable just like the source code. Remember that for me there’s no distinction between them) and makes it difficult to understand which part of the test fails. This test code also asserts setters/getters. Is this really necessary? To answer that I will quote Roy Osherove’s saying from his famous book, “The Art of Unit Testing”: Properties (getters/setters in Java) are good examples of code that usually doesn’t contain any logic, and doesn’t require testing. But watch out: once you add any check inside the property, you’ll want to make sure that logic is being tested. In our case there’s no business logic in our setters/getters so these assertions are completely useless. Moreover, they are wrong because they don’t even test the correctness of the setter. Imagine that an evil developer changes the code of the getEmail method to always return a constant String instead of the email attribute value. The test will still pass because it asserts that the getter result is not null and it doesn’t assert the expected value. So here’s a rule you might want to remember. Always try to be as specific as you can when you assert the return value of a method. In other words try to avoid assertIsNull, assertIsNotNull unless you don’t care about the actual return value. The last but not least problem with the test code we’re looking at is that the actual method under test (register) is never asserted. It’s called inside the test method but we never evaluate its result.
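The "evil developer" scenario above can be made concrete with a short sketch (class and field names are hypothetical; plain conditionals stand in for JUnit assertions to keep it self-contained):

```java
public class SpecificAssertions {
    // "Evil" getter that ignores the field and returns a constant.
    static class BrokenForm {
        private String email;
        void setEmail(String email) { this.email = email; }
        String getEmail() { return "always-the-same@example.com"; }
    }

    public static void main(String[] args) {
        BrokenForm form = new BrokenForm();
        form.setEmail("Al.Pacino@example.com");

        // Weak assertion (assertNotNull-style): still passes despite the bug.
        if (form.getEmail() == null) throw new AssertionError("unexpected null");

        // Specific assertion (assertEquals-style): catches the bug immediately.
        boolean caught = !"Al.Pacino@example.com".equals(form.getEmail());
        System.out.println(caught ? "specific assertion caught the bug"
                                  : "bug went unnoticed");
    }
}
```

The not-null check sails straight past the broken getter; only the value-for-value comparison exposes it.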
A variation of this anti-pattern is even worse. The method under test is not even invoked in the test case. So just keep in mind that you should not only invoke the method under test but you should always assert the expected result, even if it’s just a Boolean value. One might ask: “What about void methods?” Nice question, but this is another discussion – maybe another post. To give you a couple of tips: testing a void method might hide a bad design, or it should be done using a framework that verifies method invocations (such as Mockito.verify()). As a bonus here’s a final rule you should remember. Imagine that doRegister is actually implemented and does some real work with an external database. What will happen if some developer that has no database installed in her local environment tries to run the test? Correct! Everything will fail. Make sure that your test will have the same behavior even if it runs from the most bare-bones terminal that has access only to the code and the JDK. No network, no services, no databases, no file system. Nothing! Reference: Common Mistakes Junior Developers Do When Writing Unit Tests from our JCG partner Patroklos Papapetrou at the Only Software matters blog....

Preventing lost updates in long conversations

Introduction All database statements are executed within the context of a physical transaction, even when we don’t explicitly declare transaction boundaries (BEGIN/COMMIT/ROLLBACK). Data integrity is enforced by the ACID properties of database transactions. Logical vs Physical transactions A logical transaction is an application-level unit of work that may span over multiple physical (database) transactions. Holding the database connection open throughout several user requests, including user think time, is definitely an anti-pattern. A database server can accommodate a limited number of physical connections, and often those are reused by using connection pooling. Holding limited resources for long periods of time hinders scalability. So database transactions must be short to ensure that both database locks and the pooled connections are released as soon as possible. Web applications entail a read-modify-write conversational pattern. A web conversation consists of multiple user requests, all operations being logically connected to the same application-level transaction. A typical use case goes like this: Alice requests that a certain product be displayed The product is fetched from the database and returned to the browser Alice requests a product modification The product must be updated and saved to the database. All these operations should be encapsulated in a single unit-of-work. We therefore need an application-level transaction that’s also ACID-compliant, because other concurrent users might modify the same entities, long after shared locks had been released. In my previous post I introduced the perils of lost updates. The database transaction ACID properties can only prevent this phenomenon within the boundaries of a single physical transaction. Pushing transaction boundaries into the application layer requires application-level ACID guarantees. To prevent lost updates we must have application-level repeatable reads along with a concurrency control mechanism.
Long conversations HTTP is a stateless protocol. Stateless applications are always easier to scale than stateful ones, but conversations can’t be stateless. Hibernate offers two strategies for implementing long conversations:Extended persistence context Detached objectsExtended persistence context After the first database transaction ends the JDBC connection is closed (usually going back to the connection pool) and the Hibernate session becomes disconnected. A new user request will reattach the original Session. Only the last physical transaction must issue DML operations, as otherwise the application-level transaction is not an atomic unit of work. For disabling persistence in the course of the application-level transaction, we have the following options:We can disable automatic flushing, by switching the Session FlushMode to MANUAL. At the end of the last physical transaction, we need to explicitly call Session#flush() to propagate the entity state transitions. All but the last transaction are marked read-only. For read-only transactions Hibernate disables both dirty checking and the default automatic flushing.The read-only flag might propagate to the underlying JDBC Connection, so the driver might enable some database-level read-only optimizations.The last transaction must be writeable so that all changes are flushed and committed.Using an extended persistence context is more convenient since entities remain attached across multiple user requests. The downside is the memory footprint. The persistence context might easily grow with every new fetched entity. Hibernate default dirty checking mechanism uses a deep-comparison strategy, comparing all properties of all managed entities. The larger the persistence context, the slower the dirty checking mechanism will get. This can be mitigated by evicting entities that don’t need to be propagated to the last physical transaction. 
Java Enterprise Edition offers a very convenient programming model through the use of @Stateful Session Beans along with an EXTENDED PersistenceContext. All extended persistence context examples set the default transaction propagation to NOT_SUPPORTED which makes it uncertain if the queries are enrolled in the context of a local transaction or each query is executed in a separate database transaction. Detached objects Another option is to bind the persistence context to the life-cycle of the intermediate physical transaction. Upon persistence context closing all entities become detached. For a detached entity to become managed, we have two options: The entity can be reattached using the Hibernate-specific Session.update() method. If there’s an already attached entity (same entity class and with the same identifier) Hibernate throws an exception, because a Session can have at most one reference of any given entity. There is no such equivalent in the Java Persistence API. Detached entities can also be merged with their persistent object equivalent. If there’s no currently loaded persistence object, Hibernate will load one from the database. The detached entity will not become managed. By now you should know that this pattern smells like trouble: What if the loaded data doesn’t match what we have previously loaded? What if the entity has changed since we first loaded it? Overwriting new data with an older snapshot leads to lost updates. So a concurrency control mechanism is not optional when dealing with long conversations. Both Hibernate and JPA offer entity merging. Detached entities storage The detached entities must be available throughout the lifetime of a given long conversation. For this, we need a stateful context to make sure all conversation requests find the same detached entities. Therefore we can make use of: Stateful Session Beans: Stateful session beans are one of the greatest features offered by Java Enterprise Edition.
It hides all the complexity of saving/loading state between different user requests. Being a built-in feature, it automatically benefits from cluster replication, so the developer can concentrate on business logic instead. Seam is a Java EE application framework that has built-in support for web conversations. HttpSession: We can save the detached objects in the HttpSession. Most web/application servers offer session replication so this option can be used by non-JEE technologies, like Spring framework. Once the conversation is over, we should always discard all associated state, to make sure we don’t bloat the Session with unnecessary storage.You need to be careful to synchronize all HttpSession access (getAttribute/setAttribute), because for a very strange reason, this web storage is not thread-safe. Spring Web Flow is a Spring MVC companion that supports HttpSession web conversations. Hazelcast: Hazelcast is an in-memory clustered cache, so it’s a viable solution for the long conversation storage. We should always set an expiration policy, because in a web application, conversations might be started and abandoned. 
Expiration acts as the Http session invalidation. The stateless conversation anti-pattern Like with database transactions, we need repeatable reads as otherwise we might load an already modified record without realizing it, so: Alice requests a product to be displayed The product is fetched from the database and returned to the browser Alice requests a product modification Because Alice hasn’t kept a copy of the previously displayed object, she has to reload it once again Meanwhile, a batch job modifies the same product The product is updated and saved to the database The batch job update has been lost and Alice will never realize it. The stateful version-less conversation anti-pattern Preserving conversation state is a must if we want to ensure both isolation and consistency, but we can still run into lost update situations: Even if we have application-level repeatable reads, others can still modify the same entities. Within the context of a single database transaction, row-level locks can block concurrent modifications, but this is not feasible for logical transactions. The only option is to allow others to modify any rows, while preventing stale data from being persisted. Optimistic locking to the rescue Optimistic locking is a general-purpose concurrency control technique, and it works for both physical and application-level transactions. Using JPA it is only a matter of adding a @Version field to our domain models. Conclusion Pushing database transaction boundaries into the application layer requires application-level concurrency control. To ensure application-level repeatable reads we need to preserve state across multiple user requests, but in the absence of database locking we need to rely on an application-level concurrency control. Optimistic locking works for both database and application-level transactions, and it doesn’t make use of any additional database locking.
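The check that a @Version field buys you can be sketched in plain Java. This is not the JPA implementation — with JPA you only add the annotated field and the provider does the rest — but an emulation of the versioned UPDATE it issues on flush (class and field names are hypothetical):

```java
// Emulates the versioned UPDATE an optimistic lock issues on flush:
//   UPDATE product SET name = ?, version = version + 1
//   WHERE id = ? AND version = ?
// The write only succeeds if the row still carries the version we read.
public class OptimisticLockDemo {

    static class ProductRow {
        long version;
        String name;
        ProductRow(long version, String name) { this.version = version; this.name = name; }
    }

    static boolean update(ProductRow row, String newName, long expectedVersion) {
        if (row.version != expectedVersion) {
            return false; // stale data: someone else updated the row in between
        }
        row.name = newName;
        row.version++;
        return true;
    }

    public static void main(String[] args) {
        ProductRow row = new ProductRow(1, "TV");

        long aliceReadVersion = row.version;   // Alice loads the product
        update(row, "Smart TV", row.version);  // a batch job updates it first

        // Alice's write is rejected instead of silently overwriting.
        boolean aliceSucceeded = update(row, "TV Set", aliceReadVersion);
        System.out.println(aliceSucceeded ? "lost update!" : "stale write rejected");
    }
}
```

Alice’s stale write fails the version check, so the batch job’s update survives — exactly the lost-update prevention described above.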
Optimistic locking can prevent lost updates and that’s why I always recommend all entities be annotated with the @Version attribute.Reference: Preventing lost updates in long conversations from our JCG partner Vlad Mihalcea at the Vlad Mihalcea’s Blog blog....

Who Needs Side Projects?

The topic of side projects or personal projects for software professionals (commonly in the form of mobile apps, websites, various GitHub repos, and even technical blogs) has been fairly well-documented in the past few years. The concept of side projects as a hiring barometer is still a relatively nascent industry phenomenon that emerged in parallel with the rising popularity and eventual ubiquity of open source software, web-based repository hosting, and usable blogging platforms. I distinctly remember the first time a client of mine indicated a preference (however slight) for candidates who could demonstrate code written outside their employment. It was 2004, still a few years before GitHub.     The growth of SourceForge and then GitHub provided the opportunity to collaborate easily and effectively with friends, colleagues, or even strangers. Some engineers chose to code off-hours and some chose not to. As has been written before, others may not have had the time. Whether engineers did or did not contribute to open source or create side projects, all eventually became aware of the trend, and some likely predicted that the landscape for job searches might be changing. Flash-forward to today, where we commonly see job advertisements requesting a résumé and links to GitHub or personal projects. College seniors scramble to assemble a code portfolio while balancing internships and coursework in their final semesters. Highly experienced engineers with 20 years of stable employment and a formidable list of professional accomplishments still wonder if their work alone is enough to compete for jobs. Even high school students ask questions about starting to build things in preparation for a job search several years down the line. It’s clear that many engineers are concerned with the perceived necessity for personal projects, regardless of experience. Blogs like mine are probably guilty of unintentionally feeding fears, as career advice is not one-size-fits-all.
It’s a useful exercise to clarify the benefit of side projects for different groups. Who benefits most from side projects?

Entry-level candidates, new college graduates – When a group of largely homogeneous candidates compete for positions, side projects are a differentiator. Typical résumés for new grads list GPA and select courses. Without internships or projects, they are virtually identical in every way. Even a link to rather mundane GitHub repos might give employers a bit of insight into ability.

Recent industry entrants without marketable experience – Many professionals are forced to accept undesirable jobs due to personal circumstances, while others make poor career choices early on. Two or three years of professional experience in the wrong shop might not lead to any real accomplishments. Side projects may be the best method to demonstrate skill early in a career.

Stagnant employment history – Candidates who have held the same position in the same company for many years are subject to somewhat unique scrutiny, the most common theme being the question of whether someone has “ten years of experience or only one year of experience ten times”. Showing some skills gained outside work might shed the one-trick pony image and demonstrate an interest in learning.

Dated skill set, limited technical environment – For some engineers, side work might be the only opportunity to play with the new and shiny toys that other employers use. If the day job only allows archaic languages and tools, future marketability may depend upon side projects.

Those looking to change their path – Similar to entry-level candidates, professionals attempting to alter career direction may look to side projects as their most powerful strategy to gain some credibility. A good example of this is the large number of budding engineers who accepted QA positions while intending to move into development roles, or web developers who seek employment in mobile.
Conclusion Side projects are often effectively used in lieu of work experience and tangible accomplishments in order to establish credibility and demonstrate skills. Their importance fades over time for professionals in ideal technical environments, defined as places where engineers are productive, learn continuously, move between projects and groups, and are able to develop a varied set of marketable skills. The value of side projects may vary over the course of a career depending on factors in the workplace, which are usually beyond the employee’s control. The groups outlined above derive the most benefit from side projects. For established and highly accomplished professionals, side projects can be completely unnecessary. Reference: Who Needs Side Projects? from our JCG partner Dave Fecak at the Job Tips For Geeks blog....

Cost, Value & Investment: How Much Will This Project Cost? Part 2

This post is continued from Cost, Value & Investment: How Much Will This Project Cost, Part 1. We’ve established that you need to know how much this project will cost. I’m assuming you have more than a small project. If you have to estimate a project, please read the series starting at Estimating the Unknown: Dates or Budget, Part 1. Or, you could get Essays on Estimation. I’m in the midst of fixing it so it reads like a real book. I have more tips on estimation there. For a program, each team does this for its own ranked backlog:

Take every item on the backlog and roadmap, and use whatever relative sizing approach you use now to estimate. You want to use relative sizing, because you need to estimate everything on the backlog. Tip: If each item on the backlog/roadmap is about a team-day or smaller, this is easy. The farther out you go, the more uncertainty you have and the more difficult the estimation is. That’s why this is a tip.

Walk through the entire backlog, estimating as you proceed. Don’t worry about how large the features are. Decide as a team whether a feature is larger than two or three team-days, and keep a count of those large features. The larger the features, the more uncertainty you have in your estimate.

Add up your estimate of relative points. Add up the number of large features. Now you have a relative estimate, which, based on your previous velocity, means something to you. You also have a number of large features, which will decrease the confidence in that estimate. Do you have 50 features, of which only five are large? Maybe you have 75% confidence in your estimate. On the other hand, maybe all your features are large. I might only have 5-10% confidence in that estimate. Why? Because the team hasn’t completed any work yet and you have no idea how long the work will take.

As a software program team, get together and assess the total estimate. Why the program team?
Because the program team is the cross-functional team whose job is to get the software product to done. It’s not just the software teams; it’s everyone involved in the technical program team. Note: the teams have to trust Sally, Joe, Henry, and Tom to represent them to the software program team. If the teams do not, no one has confidence in any estimate at all, and the estimate is a total SWAG. The delegates to the program team know what their estimates mean individually. Now they “add” them together, whatever that means. Do you see why we will call this a prediction? Do Sally, Joe, Henry, and Tom have feature teams, service teams, or component teams? Do they have to add time for experiments as they transition to agile? Do they need to gain the trust of their management? Or are they already experienced agile feature teams? The more experienced the teams are at agile, the better the estimate is. The more the teams are feature teams, the better the estimate is. If you are new to agile, don’t have feature teams, or have a mixed program (agile and non-agile teams), you know that estimate is way off. Is it time for the software program manager to say, “We have an initial order-of-magnitude prediction. But we haven’t tested this estimate with any work, so we don’t know how accurate our estimates are. Right now our confidence is about 5-10% (or whatever it is) in our estimate. We’ve spent only a day or so estimating, because we would rather spend time delivering than estimating. We need to do a week or two of work, deliver a working skeleton, and then we can tell you more about our prediction. We can refine our prediction as we proceed. Remember, back in the waterfall days, we spent a month estimating and we were still wrong. This way, you’ll get to see the product as we work”? You want to use the word “prediction” as much as possible, because people understand the word prediction. They hear weather predictions all the time. They know about weather predictions.
But when they hear estimates of work, they think you are correct; even if you use confidence numbers, they think you are accurate. Use the word prediction.

Beware of These Program Estimation Traps

There are plenty of potential traps when you estimate programs. Here are some common problems:

The backlog is full of themes. You haven’t even gotten to epics, never mind stories. I don’t see how you can make a prediction. That’s like me saying, “I can go from Boston to China on an airplane. Yes, I can. It will take time.” I need more data: which time of year? Midweek, or the weekend? Otherwise, I can only provide a ballpark, not a real estimate.

Worse, the backlog is full of tasks, so you don’t know the value of a story. “Fix the radio button” does not tell me the value of a story. Maybe we can eliminate the button instead of fixing it.

The people estimating are not the ones who will do the work, so the estimate is full of estimation bias. Just because work looks easy or looks hard does not mean it is.

The estimate becomes a target. This never works, but managers do it all the time. “Sure, my team can do that work by the end of Q1.”

The people on your program multitask, so the estimate is wrong. Have you read about the Cost of Delay due to Multitasking?

Managers think they can predict team size from the estimate. This is the problem of splitting work in the mistaken belief that more people make it easier to do more work. More people make the communications burden heavier.

Estimating a program is more difficult, because bigness makes everything harder. A better way to manage the issues of a program is to decide if it’s worth funding in the project portfolio. Then, work in an agile way. Be ready to change the order of work in the backlog, for teams and among teams. As a program manager, you have two roles when people ask for estimates. You want to ask your sponsors these questions:

How much do you want to invest before we stop? Are you ready to watch the program grow as we build it?
What is the value of this project or program?

You want to ask the teams and product owners for these things:

Please produce walking skeletons (of features in the product) and build on them. Please produce small features, so we can see the product evolve every day.

The more the sponsors see the product take shape, the less interested they will be in an overall estimate. They may ask for more specific estimates (when can you do this specific feature?), which are much easier to answer. Delivering working software builds trust. Trust obviates many needs for estimates. If your managers or customers have never had trust in a project or program team before, they will start by asking for estimates. Your job is to deliver working software every day, so they stop asking. Reference: Cost, Value & Investment: How Much Will This Project Cost? Part 2 from our JCG partner Johanna Rothman at the Managing Product Development blog....

JPA tutorial: Mapping Entities – Part 1

In this article I will discuss the entity mapping procedure in JPA. For my examples I will use the same schema that I used in one of my previous articles. In my two previous articles I explained how to set up JPA in a Java SE environment. I do not intend to write up the setup procedure for a web application, because most of the tutorials on the web do exactly that. So let’s move directly to object-relational mapping, or entity mapping.

Wikipedia defines object-relational mapping as follows: Object-relational mapping (ORM, O/RM, and O/R mapping) in computer science is a programming technique for converting data between incompatible type systems in object-oriented programming languages. This creates, in effect, a “virtual object database” that can be used from within the programming language. There are both free and commercial packages available that perform object-relational mapping, although some programmers opt to create their own ORM tools.

Typically, mapping is the process through which you provide the necessary information about your database to your ORM tool. The tool then uses this information to read/write objects from/to the database. Usually you tell your ORM tool the table name to which an object of a certain type will be saved. You also provide the column names to which an object’s properties will be mapped. Relations between different object types also need to be specified. All of this seems like a lot of work, but fortunately JPA follows a “Convention over Configuration” approach, which means that if you stick to the default values provided by JPA, you will have very little of your application to configure. In order to properly map a type in JPA, you will at a minimum need to do the following:

Mark your class with the @Entity annotation. These classes are called entities.
Mark one of the properties/getter methods of the class with the @Id annotation.

And that’s it.
Your entities are ready to be saved into the database because JPA configures all other aspects of the mapping automatically. This also shows the productivity gain that you can enjoy by using JPA. You do not need to manually populate your objects each time you query the database, saving you from writing lots of boilerplate code. Let’s see an example. Consider the following Address entity, which I have mapped according to the above two rules:

import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Address {
    @Id
    private Integer id;

    private String street;
    private String city;
    private String province;
    private String country;
    private String postcode;

    public Integer getId() {
        return id;
    }

    public Address setId(Integer id) {
        this.id = id;
        return this;
    }

    public String getStreet() {
        return street;
    }

    public Address setStreet(String street) {
        this.street = street;
        return this;
    }

    public String getCity() {
        return city;
    }

    public Address setCity(String city) {
        this.city = city;
        return this;
    }

    public String getProvince() {
        return province;
    }

    public Address setProvince(String province) {
        this.province = province;
        return this;
    }

    public String getCountry() {
        return country;
    }

    public Address setCountry(String country) {
        this.country = country;
        return this;
    }

    public String getPostcode() {
        return postcode;
    }

    public Address setPostcode(String postcode) {
        this.postcode = postcode;
        return this;
    }
}

Now, based on your environment, you may or may not need to add this entity declaration to your persistence.xml file, which I explained in my previous article.
Ok then, let’s save some objects! The following code snippet does exactly that:

import javax.persistence.EntityManager;

import com.keertimaan.javasamples.jpaexample.entity.Address;
import com.keertimaan.javasamples.jpaexample.persistenceutil.PersistenceManager;

public class Main {
    public static void main(String[] args) {
        EntityManager em = PersistenceManager.INSTANCE.getEntityManager();

        Address address = new Address().setId(1)
            .setCity("Dhaka")
            .setCountry("Bangladesh")
            .setPostcode("1000")
            .setStreet("Poribagh");
        em.getTransaction().begin();
        em.persist(address);
        em.getTransaction().commit();
        System.out.println("address is saved! It has id: " + address.getId());

        Address anotherAddress = new Address().setId(2)
            .setCity("Shinagawa-ku, Tokyo")
            .setCountry("Japan")
            .setPostcode("140-0002")
            .setStreet("Shinagawa Seaside Area");
        em.getTransaction().begin();
        em.persist(anotherAddress);
        em.getTransaction().commit();
        em.close();
        System.out.println("anotherAddress is saved! It has id: " + anotherAddress.getId());

        PersistenceManager.INSTANCE.close();
    }
}

Let’s take a step back at this point and think about what we would have needed to do if we had used plain JDBC for persistence. We would have had to write the insert queries by hand and map each of the attributes to the corresponding columns in both cases, which would have required a lot more code. An important point to note about the example is the way I am setting the id of the entities. This approach will only work for short examples like this; for real applications it is not good. You’d typically want to use, say, auto-incremented id columns or database sequences to generate the id values for your entities. For my example, I am using a MySQL database, and all of my id columns are set to auto-increment. To reflect this in my entity model, I can use an additional annotation called @GeneratedValue on the id property.
This tells JPA that the id value for this entity will be automatically generated by the database during the insert, and that it should fetch that id after the insert using a select command. With the above modification, my entity class becomes something like this:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Address {
    @Id
    @GeneratedValue
    private Integer id;

    // Rest of the class code........
}

And the insert procedure becomes this:

Address anotherAddress = new Address()
    .setCity("Shinagawa-ku, Tokyo")
    .setCountry("Japan")
    .setPostcode("140-0002")
    .setStreet("Shinagawa Seaside Area");
em.getTransaction().begin();
em.persist(anotherAddress);
em.getTransaction().commit();

How did JPA figure out which table to use to save Address entities? It turns out to be pretty straightforward:

When no explicit table information is provided with the mapping, JPA tries to find a table whose name matches the entity name.
The name of an entity can be explicitly specified using the “name” attribute of the @Entity annotation. If no name attribute is found, then JPA assumes a default name for the entity. The default name of an entity is the simple name (not the fully qualified name) of the entity class, which in our case is Address. So our entity name is determined to be “Address”.
Since our entity name is “Address”, JPA tries to find a table in the database whose name is “Address” (remember, in most cases database table names are case-insensitive). From our schema, we can see that this is indeed the case.

So how did JPA figure out which columns to use to save the property values of Address entities? At this point I think you will be able to guess that easily. If you cannot, stay tuned for my next post! Until next time. [ Full working code can be found at github.]

Reference: JPA tutorial: Mapping Entities – Part 1 from our JCG partner Sayem Ahmed at the Random Thoughts blog....
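As a footnote to the naming discussion above, a sketch of what overriding those defaults might look like. The table name tbl_address is invented for illustration (the article’s schema actually matches the default), and @Table is the standard JPA annotation for pinning an entity to an explicitly named table:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

// The "name" attribute overrides the entity name used in JPQL and,
// in the absence of @Table, the default table name as well.
@Entity(name = "Address")
// @Table maps the entity to an explicitly named table, which is useful
// when the schema follows a different naming convention than the class.
@Table(name = "tbl_address")
public class Address {
    @Id
    @GeneratedValue
    private Integer id;

    // Rest of the class code........
}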
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.