Efficient Creation of Eclipse Modules with Maven Archetype

Maven Archetype is a project templating toolkit that provides developers with the means to generate parameterized versions of built-in or custom scaffolding artefacts. Recently I applied it to our Xiliary P2 repository to automate the creation of Eclipse module stubs. As it worked out well enough, I thought it might be worthwhile to share my experiences in this post.

Maven Archetype

Maven Archetype allows programmers to set up scaffolding quickly and consistently with respect to conventions and best practices prescribed by a project or organization. It comes with a set of predefined templates to ease generation of commonly used structures. For a list of default archetypes provided by Maven, please refer to Introduction to Archetypes, section Provided Archetypes. Generation of a web-app project stub, for example, can be based on the archetype maven-archetype-webapp:

mvn archetype:generate \
  -DgroupId=com.codeaffine \
  -DartifactId=com.codeaffine.webapp \
  -Dversion=1.0-SNAPSHOT \
  -DarchetypeGroupId=org.apache.maven.archetypes \
  -DarchetypeArtifactId=maven-archetype-webapp \
  -DarchetypeVersion=1.0 \
  -DinteractiveMode=false

The parameters groupId, artifactId and version are used to create a project root folder containing an appropriately configured project object model definition (pom.xml), whereas the archetypeXXX arguments specify which template to employ. Based on the web-app archetype, Maven provides a pom that sets the build lifecycle packaging attribute to war and produces the following directory and file structure:

com.codeaffine.webapp
|-- pom.xml
`-- src
    `-- main
        |-- resources
        `-- webapp
            |-- WEB-INF
            |   `-- web.xml
            `-- index.jsp

If you happen to work with the Maven Integration for Eclipse, you can select the New Project wizard for Maven projects to generate an Eclipse project derived from a particular archetype. The selection shown by the image creates the same structure as in the command line example above.
Additionally, it provides Eclipse project specific files and settings and imports the generated project automatically into the workspace of the IDE.

Custom Archetype Creation

While the predefined templates are good for a quick start, they are obviously not sufficient to employ project or organization specific conventions. The Eclipse IDE, for example, allows you to configure all kinds of settings in files located within the generated scaffolding structure. Hence it would be helpful to include such presets in a custom archetype. Luckily, Maven Archetype facilitates the creation of custom template definitions as explained in the online documentation Guide to Creating Archetypes. However, instead of building up the archetype from scratch, I found it more efficient to use the create-from-project option as described by Dirk Reinemann. I was able to do this because I already had a couple of Eclipse plug-ins, fragments and features I could use as prototypes. I found a tycho-eclipse-plugin-archetype definition at GitHub providing a template for generating Eclipse modules with test fragments and repository site in one step, which seems to provide a good quick start for Eclipse plug-in development.

Create from Project

To create a Maven Archetype from a given project, copy it to a working directory and remove all files that should not go into the archetype package. This project torso now provides all the files and the directory structure that make up the scaffolding. Ensure that the root folder of the torso also contains a simple pom.xml as explained in step one of the Guide to Creating Archetypes. After that, navigate to the folder where the pom resides and execute the following command:

mvn archetype:create-from-project

This generates the archetype development structure stored in the subfolder target/generated-sources/archetype. It contains a pom for the new archetype that is derived from the one which was placed in the root folder of the project torso.
Furthermore, there is a sub path src/main/resources/archetype-resources that contains a copy of the scaffolding structure and resources. The file src/main/resources/META-INF/maven/archetype-metadata.xml is the ArchetypeDescriptor, which lists all the files that will be contained in the newly created template and categorizes them so they can be processed correctly by the generation mechanism. Now it is possible to package the archetype and give it a first try to see if it works as expected so far. To do so, navigate to the folder where the archetype's pom resides and run:

mvn install

This makes the archetype available in your local repository. Using it for the first time is as easy as in the web-app example above and should look somewhat like the following snippet:

mvn archetype:generate \
  -DarchetypeArtifactId=foo.artefactId \
  -DarchetypeGroupId=foo.groupId \
  -DarchetypeVersion=foo.version

If done properly, Maven should now have created a project stub that basically looks the same as the one composed in the project torso.

Adjustments

Unfortunately there is still more work to do. Eclipse plug-ins, fragments and features provide their own meta descriptors containing identifiers, version numbers, names and the like. And of course we expect those values to be reasonably prepopulated by the template processor. Maven Archetype handles this with properties which can be declared in the ArchetypeDescriptor (see above).

<requiredProperties>
  <requiredProperty key="identifier"></requiredProperty>
</requiredProperties>

Now you can refer to this property in arbitrary resources of the archetype using the following syntax:

[...]
Bundle-SymbolicName: ${identifier}
[...]
Initialization of the property can be done by setting it as a system parameter on the command line, for example:

mvn archetype:generate \
  -DarchetypeArtifactId=foo.artefactId \
  -DarchetypeGroupId=foo.groupId \
  -DarchetypeVersion=foo.version \
  -Didentifier=foo.identifier

Another problem for plug-ins and fragments is, for example, the empty or non-existing source folder referred to by the .project definition file. Maven ignores empty directories during template processing. But the following snippet shows how to configure the descriptor to create such folders nevertheless:

<fileSets>
  <fileSet filtered="true" encoding="UTF-8">
    <directory>src</directory>
    <includes>
      <include>**/*.java</include>
    </includes>
  </fileSet>
  [...]

For more details on descriptor configuration, please refer to the online documentation.

Assembling the Pieces

Given this knowledge, I was able to create Maven Archetype artefacts for plug-in, test fragment and feature definition stubs that match the Xiliary development presets. This means each stub comes with the specific settings for code formatting, execution environment, compile error/warning preferences and the like out of the box. For flexibility reasons I decided to go with three individual artefacts instead of one and wired them together using a little script. This is because most of the time I need to create all three stubs in one step. Although this renders the Eclipse New Project wizard unusable, it is not a big deal, as the only benefit would be the automatic workspace import of the stubs. The only manual tasks left are the registration of the new modules in the parent pom of the repository's build definition and the addition of a new feature entry in the P2 related category.xml.

Conclusion

This post gave a short introduction to Maven Archetype and showed how it can be used to automate Eclipse module creation.
With the custom archetypes described above in place, it now takes about a minute to add a new feature definition with plug-in and test fragment to the workspace and build definition. And being development and build ready within a minute isn't that bad compared to the previous manual create, configure, copy and paste litany… Should you want to have a look at the archetype sources yourself, the definitions are located in the com.codeaffine.xiliary.archetype project of the Xiliary repository at GitHub. Reference: Efficient Creation of Eclipse Modules with Maven Archetype from our JCG partner Frank Appel at the Code Affine blog....

When null checking miserably fails

Disclaimer

Before going on, I have to state that the techniques described in this article serve no practical purpose when we program Java. It is like a crossword or puzzle. It may train your brain in logical thinking, develop your Java language knowledge or even your thinking skills. It is like a trick a magician performs. At the end you realize that nothing is what it looks like. Never use the tricks that you may need to apply to solve this mind twister in real-life programming.

The Problem

I recently read an article that described the debugging case when:

if (trouble != null && !trouble.isEmpty()) {
    System.out.println("fine here: " + trouble);
} else {
    System.out.println("not so fine here: " + trouble);
}

was printing out:

fine here: null

The actual bug was that the string contained "null", a.k.a. the characters 'n', 'u', 'l' and 'l'. This may happen in real life, especially when you concatenate strings without checking the nullity of a variable. Then I started to think about other similar strange code and debug situations. Could I make it so that the variable is not only a "null" string with these characters but really null? Seems to be crazy? Have a look at the code:

package com.javax0.blog.nullisnotnull;

public class NullIsNotNull {

    public static void troubled() {
        String trouble = new String("hallo");
        Object z = trouble != null && !trouble.toString().isEmpty() ?
                trouble.toString() : "";
        if (z == null) {
            System.out.println("z is really " + z + "?");
        }
    }
}

Will it ever print out the

z is really null?

question? The fact is that you can create a Java class containing a public static void main() so that starting that class as a Java application the sentence will be printed when main() invokes the method troubled(). In other words: I really invoke the method troubled() and the solution is not that main() prints the sentence. In this case the variable z is not only printed as "null" but it really is null.
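The concatenation pitfall mentioned at the beginning is easy to reproduce. The following snippet is my own illustration, not from the original article; it shows how a null reference silently turns into the four characters of "null" and then slips past the null-and-empty check:

```java
public class NullConcatDemo {
    public static void main(String[] args) {
        String name = null;
        // Concatenating a null reference appends the literal characters 'n', 'u', 'l', 'l'
        String trouble = "fine here: " + name;
        System.out.println(trouble); // prints: fine here: null
        // The resulting string is neither null nor empty, so the guard passes
        System.out.println(trouble != null && !trouble.isEmpty()); // prints: true
    }
}
```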
Hints

The solution should not involve:

- reflection
- byte code manipulation
- calling JNI
- special class loaders
- java agent
- annotation processor

These are too heavy tools. You do not need such armory for the purpose.

Hint #1

If I change the code so that the variable z is String it does not even compile. If it confused you even more, then sorry. Read on!

Hint #2

In the Java language String is an identifier and not a keyword. The Java Language Specification section 3.9 may give more information on the significance of this.

Hint #3

The method toString() in class Object has a return type java.lang.String. You may want to read my article about the difference between the name, simple name and canonical name of a class. It may shed some light and increase the hit count of the article.

Hint #4

To use a class declared in the same package you need not import that package.

Solution

The solution is to create a class named String in the same package. In that case the compiler will use this class instead of java.lang.String. The ternary operator in the code is a simple magician's trick: something to divert your attention from the important point. The major point is that String is not java.lang.String in the code above. If you still can not find out how to create the trick class, click on the collapsed source code block to see it in all its glory:

package com.javax0.blog.nullisnotnull;

class String {
    private java.lang.String jString;
    private boolean first = true;

    public String(java.lang.String s) {
        jString = s;
    }

    public boolean isEmpty() {
        return jString.isEmpty();
    }

    @Override
    public java.lang.String toString() {
        if (first) {
            first = false;
            return jString;
        }
        return null;
    }

    public static void main(java.lang.String[] args) {
        NullIsNotNull.troubled();
    }
}

Reference: When null checking miserably fails from our JCG partner Peter Verhas at the Java Deep blog....

Don’t Waste Time Writing Perfect Code

A system can last for 5 or 10 or even 20 or more years. But the life of specific lines of code, even of designs, is often much shorter: months or days or even minutes when you're iterating through different approaches to a solution.

Some code matters more than other code

Researching how code changes over time, Michael Feathers has identified a power curve in code bases. Every system has code, often a lot of it, that is written once and is never changed. But a small amount of code, including the code that is most important and useful, is changed over and over again, refactored or rewritten from scratch several times. As you get more experience with a system, or with a problem domain or an architectural approach, it should get easier to know and to predict what code will change all the time, and what code will never change: what code matters, and what code doesn't.

Should we try to write Perfect Code?

We know that we should write clean code, code that is consistent, obvious and as simple as possible. Some people take this to extremes, and push themselves to write code that is as beautiful and elegant and as close to perfect as they can get, obsessively refactoring and agonizing over each detail. But if code is only going to be written once and never changed, or at the other extreme if it is changing all the time, isn't writing perfect code as wasteful and unnecessary (and impossible to achieve) as trying to write perfect requirements or trying to come up with a perfect design upfront?

"You Can't Write Perfect Software. Did that hurt? It shouldn't. Accept it as an axiom of life. Embrace it. Celebrate it. Because perfect software doesn't exist. No one in the brief history of computing has ever written a piece of perfect software. It's unlikely that you'll be the first.
And unless you accept this as a fact, you'll end up wasting time and energy chasing an impossible dream." Andrew Hunt, The Pragmatic Programmer: from Journeyman to Master

Code that is written once doesn't need to be beautiful and elegant. It has to be correct. It has to be understandable – because code that is never changed may still be read many times over the life of the system. It doesn't have to be clean and tight – just clean enough. Copy and paste and other short cuts in this code can be allowed, at least up to a point. This is code that never needs to be polished. This is code that doesn't need to be refactored (until and unless you need to change it), even if other code around it is changing. This is code that isn't worth spending extra time on.

What about the code that you are changing all of the time? Agonizing over style and coming up with the most elegant solution is a waste of time, because this code will probably be changed again, maybe even rewritten, in a few days or weeks. And so is obsessively refactoring code each time that you make a change, or refactoring code that you aren't changing because it could be better. Code can always be better. But that's not important. What matters is: Does the code do what it is supposed to do – is it correct and usable and efficient? Can it handle errors and bad data without crashing – or at least fail safely? Is it easy to debug? Is it easy and safe to change? These aren't subjective aspects of beauty. These are practical measures that make the difference between success and failure.

Pragmatic Coding and Refactoring

The core idea of Lean Development is: don't waste time on things that aren't important. This should inform how we write code, and how we refactor it, how we review it, how we test it. Only refactor what you need to, in order to get the job done – what Martin Fowler calls opportunistic refactoring (comprehension, cleanup, Boy Scout rule stuff) and preparatory refactoring.
Enough to make a change easier and safer, and no more. If you're not changing the code, it doesn't really matter what it looks like.

In code reviews, focus only on what is important. Is the code correct? Is it defensive? Is it secure? Can you follow it? Is it safe to change? Forget about style (unless style gets in the way of understandability). Let your IDE take care of formatting. No arguments over whether the code could be "more OO". It doesn't matter if it properly follows this or that pattern as long as it makes sense. It doesn't matter if you like it or not. Whether you could have done it in a nicer way isn't important – unless you're teaching someone who is new to the platform and the language, and you're expected to do some mentoring as part of code review.

Write tests that matter. Tests that cover the main paths and the important exception cases. Tests that give you the most information and the most confidence with the least amount of work. Big fat tests, or small focused tests – it doesn't matter, and it doesn't matter if you write the tests before you write the code or after, as long as they do the job.

It's not (Just) About the Code

The architectural and engineering metaphors have never been valid for software. We aren't designing and building bridges or skyscrapers that will stay essentially the same for years or generations. We're building something much more plastic and abstract, more ephemeral. Code is written to be changed – that is why it's called "software".

"After five years of use and modification, the source for a successful software program is often completely unrecognizable from its original form, while a successful building after five years is virtually untouched." Kevin Tate, Sustainable Software Development

We need to look at code as a temporary artefact of our work: …we're led to fetishize code, sometimes in the face of more important things.
Often we suffer under the illusion that the valuable thing produced in shipping a product is the code, when it might actually be an understanding of the problem domain, progress on design conundrums, or even customer feedback. Dan Grover, Code and Creative Destruction

Iterative development teaches us to experiment and examine the results of our work – did we solve the problem, if we didn't, what did we learn, how can we improve? The software that we are building is never done. Even if the design and the code are right, they may only be right for a while, until circumstances demand that they be changed again or replaced with something else that fits better.

We need to write good code: code that is understandable, correct, safe and secure. We need to refactor and review it, and write good useful tests, all the while knowing that some of this code, or maybe all of it, could be thrown out soon, or that it may never be looked at again, or that it may not get used at all. We need to recognize that some of our work will necessarily be wasted, and optimize for this. Do what needs to be done, and no more. Don't waste time trying to write perfect code. Reference: Don't Waste Time Writing Perfect Code from our JCG partner Jim Bird at the Building Real Software blog....

On Java Generics and Erasure

"Generics are erased during compilation" is common knowledge (well, type parameters and arguments are actually the ones erased). That happens due to "type erasure". But it's wrong that everything specified inside the <..> symbols is erased, as many developers assume. See the code below:

import java.lang.reflect.ParameterizedType;
import java.util.ArrayList;
import java.util.List;

public class ClassTest {
    public static void main(String[] args) throws Exception {
        ParameterizedType type = (ParameterizedType)
                Bar.class.getGenericSuperclass();
        System.out.println(type.getActualTypeArguments()[0]);

        ParameterizedType fieldType = (ParameterizedType)
                Foo.class.getField("children").getGenericType();
        System.out.println(fieldType.getActualTypeArguments()[0]);

        ParameterizedType paramType = (ParameterizedType)
                Foo.class.getMethod("foo", List.class)
                        .getGenericParameterTypes()[0];
        System.out.println(paramType.getActualTypeArguments()[0]);

        System.out.println(Foo.class.getTypeParameters()[0]
                .getBounds()[0]);
    }

    class Foo<E extends CharSequence> {
        public List<Bar> children = new ArrayList<Bar>();
        public List<StringBuilder> foo(List<String> foo) { return null; }
        public void bar(List<? extends String> param) {}
    }

    class Bar extends Foo<String> {}
}

Do you know what that prints?

class java.lang.String
class ClassTest$Bar
class java.lang.String
class java.lang.StringBuilder
interface java.lang.CharSequence

You see that every single type argument is preserved and is accessible via reflection at runtime. But then what is "type erasure"? Something must be erased? Yes. In fact, all of them are, except the structural ones – everything above is related to the structure of the classes, rather than the program flow. In other words, the metadata about the type arguments of a class and its fields and methods is preserved to be accessed via reflection. The rest, however, is erased.
For example, the following code:

List<String> list = new ArrayList<>();
Iterator<String> it = list.iterator();
while (it.hasNext()) {
    String s = it.next();
}

will actually be transformed to this (the bytecode of the two snippets is identical):

List list = new ArrayList();
Iterator it = list.iterator();
while (it.hasNext()) {
    String s = (String) it.next();
}

So, all type arguments you have defined in the bodies of your methods will be removed and casts will be added where needed. Also, if a method is defined to accept List<T>, this T will be transformed to Object (or to its bound, if such a bound is declared). And that's why you can't do new T() (by the way, one open question about this erasure). So far we covered the first two points of the type erasure definition. The third one is about bridge methods. And I've illustrated it with this stackoverflow question (and answer). Two "morals" of all this. First, java generics are complicated. But you can use them without understanding all the complications. Second, do not assume that all type information is erased – the structural type arguments are there, so make use of them, if needed (but don't be over-reliant on reflection). Reference: On Java Generics and Erasure from our JCG partner Bozhidar Bozhanov at the Bozho's tech blog blog....
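As a footnote to the bridge-method point above, here is a small sketch of my own (not from the article): implementing Comparator<String> makes the compiler generate a synthetic compare(Object, Object) bridge method next to the hand-written compare(String, String), which reflection can reveal via Method.isBridge():

```java
import java.lang.reflect.Method;
import java.util.Comparator;

public class BridgeDemo implements Comparator<String> {

    @Override
    public int compare(String a, String b) {
        return Integer.compare(a.length(), b.length());
    }

    public static void main(String[] args) {
        // The class file contains two compare methods: the one written above
        // and a compiler-generated bridge taking Object parameters.
        for (Method m : BridgeDemo.class.getDeclaredMethods()) {
            if (m.getName().equals("compare")) {
                System.out.println(m.getParameterTypes()[0].getSimpleName()
                        + " parameter, bridge method: " + m.isBridge());
            }
        }
    }
}
```

The bridge is what lets erased call sites such as Comparator.compare(Object, Object) dispatch to the type-safe method.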

Java 8 Streams: Micro Katas

A programming kata is an exercise which helps a programmer hone his skills through practice and repetition. This article is part of the series Java Tutorial Through Katas. The article assumes that the reader already has experience with Java, that he is familiar with the basics of unit tests and that he knows how to run them from his favorite IDE (mine is IntelliJ IDEA). The idea behind the exercises displayed below is to learn Java 8 streams using a test-driven development approach (write the implementation for the first test, confirm that it passes and move to the next). Each section will start with an objective in the form of tests that prove that the implementation will be correct once it's written. Each of those tests is followed by one possible implementation in Java 7 (or earlier) and in Java 8 using streams. That way the reader can compare some of the new features of Java 8 with their equivalents in earlier JDKs. Please try to solve the tests without looking at the provided solutions. For more information about TDD best practices, please read Test Driven Development (TDD): Best Practices Using Java Examples.

Java 8 map

Convert elements of a collection to upper case.

Tests

package com.technologyconversations.java8exercises.streams;

import org.junit.Test;

import java.util.List;

import static com.technologyconversations.java8exercises.streams.ToUpperCase.*;
import static java.util.Arrays.asList;
import static org.assertj.core.api.Assertions.assertThat;

/*
Convert elements of a collection to upper case.
*/
public class ToUpperCaseSpec {

    @Test
    public void transformShouldConvertCollectionElementsToUpperCase() {
        List<String> collection = asList("My", "name", "is", "John", "Doe");
        List<String> expected = asList("MY", "NAME", "IS", "JOHN", "DOE");
        assertThat(transform(collection)).hasSameElementsAs(expected);
    }
}

Java 7 (transform7) and Java 8 (transform) Implementations

package com.technologyconversations.java8exercises.streams;

import java.util.ArrayList;
import java.util.List;

import static java.util.stream.Collectors.toList;

public class ToUpperCase {

    public static List<String> transform7(List<String> collection) {
        List<String> coll = new ArrayList<>();
        for (String element : collection) {
            coll.add(element.toUpperCase());
        }
        return coll;
    }

    public static List<String> transform(List<String> collection) {
        return collection.stream()        // Convert collection to Stream
                .map(String::toUpperCase) // Convert each element to upper case
                .collect(toList());       // Collect results to a new list
    }
}

Java 8 filter

Filter collection so that only elements with less than 4 characters are returned.

Tests

package com.technologyconversations.java8exercises.streams;

import org.junit.Test;

import java.util.List;

import static com.technologyconversations.java8exercises.streams.FilterCollection.*;
import static java.util.Arrays.asList;
import static org.assertj.core.api.Assertions.assertThat;

/*
Filter collection so that only elements with less than 4 characters are returned.
*/
public class FilterCollectionSpec {

    @Test
    public void transformShouldFilterCollection() {
        List<String> collection = asList("My", "name", "is", "John", "Doe");
        List<String> expected = asList("My", "is", "Doe");
        assertThat(transform(collection)).hasSameElementsAs(expected);
    }
}

Java 7 (transform7) and Java 8 (transform) Implementations

package com.technologyconversations.java8exercises.streams;

import java.util.ArrayList;
import java.util.List;

import static java.util.stream.Collectors.toList;

public class FilterCollection {

    public static List<String> transform7(List<String> collection) {
        List<String> newCollection = new ArrayList<>();
        for (String element : collection) {
            if (element.length() < 4) {
                newCollection.add(element);
            }
        }
        return newCollection;
    }

    public static List<String> transform(List<String> collection) {
        return collection.stream()                   // Convert collection to Stream
                .filter(value -> value.length() < 4) // Keep elements shorter than 4 characters
                .collect(toList());                  // Collect results to a new list
    }
}

Java 8 flatMap

Flatten multidimensional collection.
Tests

package com.technologyconversations.java8exercises.streams;

import org.junit.Test;

import java.util.List;

import static com.technologyconversations.java8exercises.streams.FlatCollection.*;
import static java.util.Arrays.asList;
import static org.assertj.core.api.Assertions.assertThat;

/*
Flatten multidimensional collection
*/
public class FlatCollectionSpec {

    @Test
    public void transformShouldFlattenCollection() {
        List<List<String>> collection = asList(asList("Viktor", "Farcic"), asList("John", "Doe", "Third"));
        List<String> expected = asList("Viktor", "Farcic", "John", "Doe", "Third");
        assertThat(transform(collection)).hasSameElementsAs(expected);
    }
}

Java 7 (transform7) and Java 8 (transform) Implementations

package com.technologyconversations.java8exercises.streams;

import java.util.ArrayList;
import java.util.List;

import static java.util.stream.Collectors.toList;

public class FlatCollection {

    public static List<String> transform7(List<List<String>> collection) {
        List<String> newCollection = new ArrayList<>();
        for (List<String> subCollection : collection) {
            for (String value : subCollection) {
                newCollection.add(value);
            }
        }
        return newCollection;
    }

    public static List<String> transform(List<List<String>> collection) {
        return collection.stream()                // Convert collection to Stream
                .flatMap(value -> value.stream()) // Replace each list with a stream of its elements
                .collect(toList());               // Collect results to a new list
    }
}

Java 8 max and comparator

Get oldest person from the collection.
Tests

package com.technologyconversations.java8exercises.streams;

import org.junit.Test;

import java.util.List;

import static com.technologyconversations.java8exercises.streams.OldestPerson.*;
import static java.util.Arrays.asList;
import static org.assertj.core.api.Assertions.assertThat;

/*
Get oldest person from the collection
*/
public class OldestPersonSpec {

    @Test
    public void getOldestPersonShouldReturnOldestPerson() {
        Person sara = new Person("Sara", 4);
        Person viktor = new Person("Viktor", 40);
        Person eva = new Person("Eva", 42);
        List<Person> collection = asList(sara, eva, viktor);
        assertThat(getOldestPerson(collection)).isEqualToComparingFieldByField(eva);
    }
}

Java 7 (getOldestPerson7) and Java 8 (getOldestPerson) Implementations

package com.technologyconversations.java8exercises.streams;

import java.util.Comparator;
import java.util.List;

public class OldestPerson {

    public static Person getOldestPerson7(List<Person> people) {
        Person oldestPerson = new Person("", 0);
        for (Person person : people) {
            if (person.getAge() > oldestPerson.getAge()) {
                oldestPerson = person;
            }
        }
        return oldestPerson;
    }

    public static Person getOldestPerson(List<Person> people) {
        return people.stream()                             // Convert collection to Stream
                .max(Comparator.comparing(Person::getAge)) // Compare people by age
                .get();                                    // Get the stream result
    }
}

Java 8 sum and reduce

Sum all elements of a collection.
Tests

package com.technologyconversations.java8exercises.streams;

import org.junit.Test;

import java.util.List;

import static com.technologyconversations.java8exercises.streams.Sum.*;
import static java.util.Arrays.asList;
import static org.assertj.core.api.Assertions.assertThat;

/*
Sum all elements of a collection
*/
public class SumSpec {

    @Test
    public void calculateShouldSumAllElementsOfCollection() {
        List<Integer> numbers = asList(1, 2, 3, 4, 5);
        assertThat(calculate(numbers)).isEqualTo(1 + 2 + 3 + 4 + 5);
    }
}

Java 7 (calculate7) and Java 8 (calculate) Implementations

package com.technologyconversations.java8exercises.streams;

import java.util.List;

public class Sum {

    public static int calculate7(List<Integer> numbers) {
        int total = 0;
        for (int number : numbers) {
            total += number;
        }
        return total;
    }

    public static int calculate(List<Integer> numbers) {
        return numbers.stream()                                // Convert collection to Stream
                .reduce(0, (total, number) -> total + number); // Sum elements with 0 as the starting value
    }
}

Java 8 filter and map

Get names of all kids (under age of 18).
Tests

package com.technologyconversations.java8exercises.streams;

import org.junit.Test;

import java.util.List;

import static com.technologyconversations.java8exercises.streams.Kids.*;
import static java.util.Arrays.asList;
import static org.assertj.core.api.Assertions.assertThat;

/*
Get names of all kids (under age of 18)
*/
public class KidsSpec {

    @Test
    public void getKidNamesShouldReturnNamesOfAllKids() {
        Person sara = new Person("Sara", 4);
        Person viktor = new Person("Viktor", 40);
        Person eva = new Person("Eva", 42);
        Person anna = new Person("Anna", 5);
        List<Person> collection = asList(sara, eva, viktor, anna);
        assertThat(getKidNames(collection))
                .contains("Sara", "Anna")
                .doesNotContain("Viktor", "Eva");
    }
}

Java 7 (getKidNames7) and Java 8 (getKidNames) Implementations

package com.technologyconversations.java8exercises.streams;

import java.util.*;

import static java.util.stream.Collectors.toSet;

public class Kids {

    public static Set<String> getKidNames7(List<Person> people) {
        Set<String> kids = new HashSet<>();
        for (Person person : people) {
            if (person.getAge() < 18) {
                kids.add(person.getName());
            }
        }
        return kids;
    }

    public static Set<String> getKidNames(List<Person> people) {
        return people.stream()
                .filter(person -> person.getAge() < 18) // Filter kids (under age of 18)
                .map(Person::getName)                   // Map Person elements to their names
                .collect(toSet());                      // Collect values to a Set
    }
}

Java 8 summaryStatistics

Get people statistics: average age, count, maximum age, minimum age and sum of all ages.

Tests

package com.technologyconversations.java8exercises.streams;

import org.junit.Test;

import java.util.List;

import static com.technologyconversations.java8exercises.streams.PeopleStats.*;
import static java.util.Arrays.asList;
import static org.assertj.core.api.Assertions.assertThat;

/*
Get people statistics: average age, count, maximum age, minimum age and sum of all ages.
*/
public class PeopleStatsSpec {

    Person sara = new Person("Sara", 4);
    Person viktor = new Person("Viktor", 40);
    Person eva = new Person("Eva", 42);
    List<Person> collection = asList(sara, eva, viktor);

    @Test
    public void getStatsShouldReturnAverageAge() {
        assertThat(getStats(collection).getAverage())
            .isEqualTo((double) (4 + 40 + 42) / 3);
    }

    @Test
    public void getStatsShouldReturnNumberOfPeople() {
        assertThat(getStats(collection).getCount())
            .isEqualTo(3);
    }

    @Test
    public void getStatsShouldReturnMaximumAge() {
        assertThat(getStats(collection).getMax())
            .isEqualTo(42);
    }

    @Test
    public void getStatsShouldReturnMinimumAge() {
        assertThat(getStats(collection).getMin())
            .isEqualTo(4);
    }

    @Test
    public void getStatsShouldReturnSumOfAllAges() {
        assertThat(getStats(collection).getSum())
            .isEqualTo(40 + 42 + 4);
    }
}

Java 7 (getStats7) and Java 8 (getStats) Implementations

package com.technologyconversations.java8exercises.streams;

import java.util.IntSummaryStatistics;
import java.util.List;

public class PeopleStats {

    public static Stats getStats7(List<Person> people) {
        long sum = 0;
        int min = people.get(0).getAge();
        int max = 0;
        for (Person person : people) {
            int age = person.getAge();
            sum += age;
            min = Math.min(min, age);
            max = Math.max(max, age);
        }
        return new Stats(people.size(), sum, min, max);
    }

    public static IntSummaryStatistics getStats(List<Person> people) {
        return people.stream()
            .mapToInt(Person::getAge)
            .summaryStatistics();
    }
}

Java 8 partitioningBy

Partition adults and kids.
Tests

package com.technologyconversations.java8exercises.streams;

import org.junit.Test;

import java.util.List;
import java.util.Map;

import static com.technologyconversations.java8exercises.streams.Partitioning.*;
import static java.util.Arrays.asList;
import static org.assertj.core.api.Assertions.assertThat;

/* Partition adults and kids */
public class PartitioningSpec {

    @Test
    public void partitionAdultsShouldSeparateKidsFromAdults() {
        Person sara = new Person("Sara", 4);
        Person viktor = new Person("Viktor", 40);
        Person eva = new Person("Eva", 42);
        List<Person> collection = asList(sara, eva, viktor);
        Map<Boolean, List<Person>> result = partitionAdults(collection);
        assertThat(result.get(true)).hasSameElementsAs(asList(viktor, eva));
        assertThat(result.get(false)).hasSameElementsAs(asList(sara));
    }
}

Java 7 (partitionAdults7) and Java 8 (partitionAdults) Implementations

package com.technologyconversations.java8exercises.streams;

import java.util.*;

import static java.util.stream.Collectors.*;

public class Partitioning {

    public static Map<Boolean, List<Person>> partitionAdults7(List<Person> people) {
        Map<Boolean, List<Person>> map = new HashMap<>();
        map.put(true, new ArrayList<>());
        map.put(false, new ArrayList<>());
        for (Person person : people) {
            map.get(person.getAge() >= 18).add(person);
        }
        return map;
    }

    public static Map<Boolean, List<Person>> partitionAdults(List<Person> people) {
        return people.stream() // Convert collection to Stream
            .collect(partitioningBy(p -> p.getAge() >= 18)); // Partition stream of people into adults (age >= 18) and kids
    }
}

Java 8 groupingBy

Group people by nationality.
Tests

package com.technologyconversations.java8exercises.streams;

import org.junit.Test;

import java.util.List;
import java.util.Map;

import static com.technologyconversations.java8exercises.streams.Grouping.*;
import static java.util.Arrays.asList;
import static org.assertj.core.api.Assertions.assertThat;

/* Group people by nationality */
public class GroupingSpec {

    @Test
    public void groupByNationalityShouldGroupPeopleByNationality() {
        Person sara = new Person("Sara", 4, "Norwegian");
        Person viktor = new Person("Viktor", 40, "Serbian");
        Person eva = new Person("Eva", 42, "Norwegian");
        List<Person> collection = asList(sara, eva, viktor);
        Map<String, List<Person>> result = groupByNationality(collection);
        assertThat(result.get("Norwegian")).hasSameElementsAs(asList(sara, eva));
        assertThat(result.get("Serbian")).hasSameElementsAs(asList(viktor));
    }
}

Java 7 (groupByNationality7) and Java 8 (groupByNationality) Implementations

package com.technologyconversations.java8exercises.streams;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import static java.util.stream.Collectors.*;

public class Grouping {

    public static Map<String, List<Person>> groupByNationality7(List<Person> people) {
        Map<String, List<Person>> map = new HashMap<>();
        for (Person person : people) {
            if (!map.containsKey(person.getNationality())) {
                map.put(person.getNationality(), new ArrayList<>());
            }
            map.get(person.getNationality()).add(person);
        }
        return map;
    }

    public static Map<String, List<Person>> groupByNationality(List<Person> people) {
        return people.stream() // Convert collection to Stream
            .collect(groupingBy(Person::getNationality)); // Group people by nationality
    }
}

Java 8 joining

Return people's names, separated by commas.
Tests

package com.technologyconversations.java8exercises.streams;

import org.junit.Test;

import java.util.List;

import static com.technologyconversations.java8exercises.streams.Joining.namesToString;
import static java.util.Arrays.asList;
import static org.assertj.core.api.Assertions.assertThat;

/* Return people's names, separated by commas */
public class JoiningSpec {

    @Test
    public void toStringShouldReturnPeopleNamesSeparatedByComma() {
        Person sara = new Person("Sara", 4);
        Person viktor = new Person("Viktor", 40);
        Person eva = new Person("Eva", 42);
        List<Person> collection = asList(sara, viktor, eva);
        assertThat(namesToString(collection))
            .isEqualTo("Names: Sara, Viktor, Eva.");
    }
}

Java 7 (namesToString7) and Java 8 (namesToString) Implementations

package com.technologyconversations.java8exercises.streams;

import java.util.List;

import static java.util.stream.Collectors.joining;

public class Joining {

    public static String namesToString7(List<Person> people) {
        String label = "Names: ";
        StringBuilder sb = new StringBuilder(label);
        for (Person person : people) {
            if (sb.length() > label.length()) {
                sb.append(", ");
            }
            sb.append(person.getName());
        }
        sb.append(".");
        return sb.toString();
    }

    public static String namesToString(List<Person> people) {
        return people.stream() // Convert collection to Stream
            .map(Person::getName) // Map Person to name
            .collect(joining(", ", "Names: ", ".")); // Join names
    }
}

Source

The full source is located in the GitHub repo https://github.com/vfarcic/java-8-exercises. Besides tests and implementations, the repository includes a build.gradle file that can be used, among other things, to download the AssertJ dependencies and run the tests.

Reference: Java 8 Streams: Micro Katas from our JCG partner Viktor Farcic at the Technology conversations blog.

Beyond Thread Pools: Java Concurrency is Not as Bad as You Think

Apache Hadoop, Apache Spark, Akka, Java 8 streams and Quasar: From the classic use cases to the newest concurrency approaches for Java developers

There's a lot of chatter going around about newer concepts in concurrency, yet many developers haven't had a chance to wrap their heads around them yet. In this post we'll go through the things you need to know about Java 8 streams, Hadoop, Apache Spark, Quasar fibers and the Reactive programming approach, and help you stay in the loop, especially if you're not working with them on a regular basis. This is not the future; it's happening right now.

What are we dealing with here?

When talking about concurrency, a good way to characterize the issue at hand is to answer a few questions and get a better feel for it:

Is it a data processing task? If so, can it be broken down into independent pieces of work?
What's the relationship between the OS, the JVM and your code? (Native threads vs. lightweight threads)
How many machines and processors are involved? (Single-core vs. multicore)

Let's go through each of these and figure out the best use cases for each approach.

1. From Thread Pools to Parallel Streams

Data processing on single machines, letting Java take care of thread handling

With Java 8, we've been introduced to the new Stream API that allows applying aggregate operations like filter, sort or map on streams of data. Another thing Streams allow is parallel operations on multicore machines when applying .parallelStream(), splitting the work between threads using the Fork/Join framework introduced in Java 7. This is an evolution from the Java 6 java.util.concurrent library, where we met the ExecutorService, which creates and handles our worker thread pools. Fork/Join is also built on top of the ExecutorService; the main difference from a traditional thread pool is how work is distributed between threads, and thereby the multicore machine support.
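The difference between a plain stream and .parallelStream() can be sketched with a small, self-contained example (class and variable names are mine, not from the article):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ParallelSum {
    public static void main(String[] args) {
        // Build a list of the numbers 1..1000 to aggregate
        List<Integer> numbers = IntStream.rangeClosed(1, 1000)
                .boxed()
                .collect(Collectors.toList());

        // Sequential pipeline: one thread walks every element
        long sequential = numbers.stream().mapToLong(Integer::longValue).sum();

        // Parallel pipeline: the common Fork/Join pool splits the work across cores
        long parallel = numbers.parallelStream().mapToLong(Integer::longValue).sum();

        // Same result either way; only the scheduling differs
        System.out.println(sequential + " " + parallel);
    }
}
```

The one-character difference (`stream()` vs `parallelStream()`) is the whole point: thread management is delegated to the runtime, which is exactly why the caveats below about oversubscribing your cores matter.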
With a simple ExecutorService you're in full control of the workload distribution between worker threads, determining the size of each task for the threads to handle. With Fork/Join, on the other hand, there's a work-stealing algorithm in place that abstracts workload handling between threads. In a nutshell, this allows large tasks to be divided into smaller ones (forked) and processed in different threads, eventually joining the results, balancing the work between threads. However, it's not a silver bullet. Sometimes Parallel Streams may even slow you down, so you'll need to think it through. Adding .parallelStream() to your methods can cause bottlenecks and slowdowns (some 15% slower on this benchmark we ran); the fine line goes through the number of threads. Let's say we're already running multiple threads and we're using .parallelStream() in some of them, adding more and more threads to the pool. This could easily turn into more than our cores can handle, and slow everything down due to increased context switching.

Bottom line: Parallel Streams abstract handling threads on a single machine in a way that distributes the workload between your cores. However, if you want to use them efficiently it's critical to keep the hardware in mind and not spawn more threads than your machine can handle.

2. Apache Hadoop and Apache Spark

Heavy-duty lifting: Big data processing across multiple machines

Moving on to multiple machines, petabytes of data, and tasks like pulling all tweets that mention Java from Twitter, or heavy-duty machine learning algorithms. When speaking of Hadoop, it's important to take another step and think of the wider framework and its components: the Hadoop Distributed File System (HDFS), a resource management platform (YARN), the data processing module (MapReduce) and other libraries and utilities needed for Hadoop (Common).
On top of these come other optional tools like a database that runs on top of HDFS (HBase), a platform for a querying language (Pig), and a data warehouse infrastructure (Hive), to name a few of the popular ones. This is where Apache Spark steps in as a new data processing module, famous for its in-memory performance and its use of fast-performing Resilient Distributed Datasets (RDDs), unlike Hadoop MapReduce, which doesn't employ in-memory (and on-disk) operations as efficiently. The latest benchmark released by Databricks shows that Spark was 3x faster than Hadoop in sorting a petabyte of data, while using 10x fewer nodes. The classic use case for Hadoop would be querying data, while Spark is getting famous for its fast runtimes of machine learning algorithms. But this is only the tip of the iceberg, as stated by Databricks: "Spark enables applications in Hadoop clusters to run up to 100x faster in memory, and 10x faster even when running on disk".

Bottom line: Spark is the new rising star in Hadoop's ecosystem. There's a common misconception that we're talking about something unrelated or competing, but I believe that what we're seeing here is the evolution of the framework.

3. Quasar fibers

Breaking native threads into virtual lightweight threads

We've had the chance to run through Hadoop; now let's get back to single machines. In fact, let's zoom in even further than the standard multithreaded Java application and focus on one single thread. As far as we're concerned, HotSpot JVM threads are the same as native OS threads; holding one thread and running "virtual" threads within it is what fibers are all about. Java doesn't have native fiber support, but no worries: Quasar by Parallel Universe has us covered. Quasar is an open-source JVM library that supports fibers (also known as lightweight threads) and also acts as an Actor framework, which I'll mention later. Context switching is the name of the game here.
As we're limited by the number of cores, once the native thread count grows larger we're subjected to more and more context-switching overhead. One way around this is fibers: using a single thread that supports "multithreading". Looks like a case of threadception. Fibers can also be seen as an evolution of thread pools, dodging the dangers of thread overload we went through with Parallel Streams. They make it easier to scale threads and allow a significantly larger number of concurrent "light" threads. They're not intended to replace threads and should be used for code that blocks relatively often; it's like they're acting as true async threads.

Bottom line: Parallel Universe is offering a fresh approach to concurrency in Java. It hasn't reached v1.0 yet, but it's definitely worth checking out.

4. Actors & Reactive Programming

A different model for handling concurrency in Java

In the Reactive Manifesto, the new movement is described with 4 principles: Responsive, Resilient, Elastic and Message-Driven. This basically means fast, fault-tolerant, scalable and supporting non-blocking communication. Let's see how Akka Actors support that. To simplify things, think of Actors as people that have a state and a certain behavior, communicating by exchanging messages that go to each other's mailbox. An Actor system as a whole should be created per application, with a hierarchy that breaks down tasks into smaller tasks so that each actor has at most one supervising actor. An actor can either take care of a task, break it down even further by delegating to another actor or, in case of failure, escalate it to its supervisor. Either way, messages shouldn't include behavior or share mutable state; each Actor has an isolated state and behavior of its own. It's a paradigm shift from the concurrency models most developers are used to, and a bit of an offshoot from the evolution in the first 3 topics we covered here.
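The "state plus mailbox" idea can be illustrated with a plain-JDK toy actor; this is a conceptual sketch only, not the Akka or Quasar API, and all names here are invented:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A toy "actor": private state plus a mailbox, processing one message at a time.
public class CounterActor implements Runnable {
    private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
    private int count; // isolated state: only this actor's thread ever touches it

    // Other parties communicate solely by dropping messages into the mailbox
    void tell(String msg) { mailbox.add(msg); }

    public void run() {
        try {
            String msg;
            while (!(msg = mailbox.take()).equals("stop")) {
                count++; // behavior: react to a message, mutating only our own state
            }
            System.out.println("processed " + count + " messages");
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CounterActor actor = new CounterActor();
        Thread t = new Thread(actor);
        t.start();
        actor.tell("hello");
        actor.tell("world");
        actor.tell("stop");
        t.join();
    }
}
```

Because no state is shared and messages carry only data, there is nothing to lock; real actor frameworks add supervision hierarchies and scheduling on top of this same idea.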
Although its roots stem back to the 1970s, it stayed under the radar until recent years, with a revival to better fit modern application demands. Parallel Universe's Quasar also supports Actors, based on its lightweight threads. The main difference in implementation lies in the fibers/lightweight threads.

Bottom line: Taking on the Actor model takes managing thread pools off your back, leaving it to the toolkit. The revival of interest comes from the kind of problems applications deal with today: highly concurrent systems with many more cores than we used to work with.

Conclusion

We've run through 4 methods of solving problems using concurrent or parallel algorithms, with the most interesting approaches for tackling today's challenges. Hopefully this helped pique your interest and gave you a better view of the hot topics in concurrency today. Going beyond thread pools, there's a trend of delegating this responsibility to the language and its tools, focusing dev resources on shipping new functionality rather than spending countless hours solving race conditions and locks.

Reference: Beyond Thread Pools: Java Concurrency is Not as Bad as You Think from our JCG partner Alex Zhitnitsky at the Takipi blog.

Exploration of ideas

There are many professions for an individual to choose from, and I believe one should follow the profession one likes most or hates least. The chances of success and quality of life are both much better that way. So, if you ask me why I chose software development as a career, I can assure you that programming is a fun career. Of course, it is not fun because of sitting in front of the monitor and typing endlessly on the keyboard. It is fun because we control what we write and we can let our innovation run wild. In this article, I want to share some ideas that I have tried; please see for yourself if any idea fits your work.

TOOLS

Using the OpenExtern and AnyEdit plugins for Eclipse

Of all the plugins that I have tried, these two are my favorites. They are small, fast and stable. OpenExtern gives you a shortcut to open the file explorer or console for any Eclipse resource. In my early days of using Eclipse, I often found myself opening project properties just to copy the project folder path so I could open a console or file explorer. The OpenExtern plugin turns that tedious 5-to-10-second process into a one-second mouse click. This simple tool actually helps a lot because many of us run Maven or Git commands from the console. The other plugin that I find useful is AnyEdit. It adds a handful of converting, comparing and sorting tools to the Eclipse editor. It eliminates the need for an external editor or comparison tool in Eclipse. I also like to turn on auto-formatting and removal of trailing spaces on save. This works pretty well if all of us have the same formatter configuration (line wrapping, indentation, ...). Before we standardized our code formatter, we had a hard time comparing and merging source code after each check-in. Other than these two, in the past I often installed the Decompiler and Json Editor plugins.
However, as most Maven artifacts nowadays are uploaded with source code, and JSON content can be viewed easily using a Chrome JSON plugin, I no longer find these plugins useful.

Using JArchitect to monitor code quality

In the past, we monitored the code quality of projects by eyeball. That seemed good enough when you had time to take care of all modules. Things get more complicated when the team is growing or multiple teams are working on the same project. Eventually, the code still needs to be reviewed, and we need to be alerted if things go wrong. Fortunately, I got an offer from JArchitect to try out their latest product. First and foremost, this is a standalone product rather than the traditional IDE integration. For me, that is a plus point because you may not want to make your IDE too heavy. The second good thing is that JArchitect can understand Maven, which is a rare feature in the market. The third good thing is that JArchitect creates its own project file in its own workspace, which does no harm to the original Java project. Currently, the product is commercial, but you may want to take a look to see if the benefit justifies the cost.

SETTING UP PROJECTS

As we all know, a Java web project has both unit tests and functional tests. For functional tests, if you use a framework like Spring MVC, it is possible to create tests for controllers without bringing up the server. Otherwise, it is quite normal that developers need to start up the server, run the functional tests, then shut down the server. This process is a bit tedious given that the project may have been created by someone else, with whom we have never communicated before. Therefore, we try to set up projects in such a way that people can just download and run them without much hassle.

Server

In the past, we set up the server manually for each local development box and for the integration server (CruiseControl or Hudson). Gradually, our common practice shifted toward checking the server in to every project.
The motivation behind this move is to save the effort of setting up a new project after checkout. Because the server is embedded inside the project, there is no need to download or set up the server for each box. Moreover, this practice discourages sharing servers among projects, which makes things less error-prone. Other than the server, there are two other elements inside a project that are server-dependent: properties files and the database. For each of these elements, we have slightly different practices, depending on the situation.

Properties file

Checking in a properties template rather than a properties file

Each developer needs to clone the template file and edit it when necessary. For the continuous integration server, it is a bit trickier. Developers can manually create the file in the workspace or simply check the build server's properties file in to the project. The former practice is not used anymore because it is too error-prone: any attempt to clean the workspace deletes the properties file, and we cannot track the modifications made to properties in the file. Moreover, as we now set up Jenkins as a cluster rather than a single node as in the past, it is not applicable anymore. For the second practice, rather than checking in my-project.properties, I would rather check in my-project.properties-template and my-project.properties-jenkins. The first file can be used as guidance for setting up a local properties file, while the second can be renamed and used for Jenkins.

Using host names to define external network connections

This practice may work better when we need to set up similar network connections for various projects. Let's say we need to configure the database for around 10 similar projects pointing to the same database. In this case, we can hard-code the database host name in the properties file and manually set up the hosts file on each build node to point the pre-defined domain to the database.
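To make the template idea concrete, here is a hypothetical my-project.properties-template (every key and value is invented for illustration); developers clone it to my-project.properties, while a ready-made my-project.properties-jenkins is checked in alongside it:

```properties
# my-project.properties-template
# Copy to my-project.properties and fill in the values for your local box.
db.host=db.myproject.internal
db.port=5432
db.user=CHANGE_ME
db.password=CHANGE_ME
server.port=8080
```

Note how db.host uses a pre-defined domain rather than an IP address, matching the hosts-file practice described above.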
For non-essential properties, provide as many default values as possible

There is not much to say about this practice. We only need to remind ourselves to be less lazy so that other people may enjoy working with our projects.

Using a Landlord Service

This is a pretty new practice that we have only applied since last year. Under the regulations in our office, the Web Service team is the only team that manages and has access to the UAT and PRODUCTION servers. That is why we need to guide them, and they need to do the repetitive setup for at least 2 environments, both of which normally require clustering. It was quite tedious for them until the day we consolidated the properties of all environments and all projects into a single landlord service. From that time on, each project starts up and connects to the landlord service with an environment id and an application id, and the landlord happily serves it all the information it needs.

Database

Using DBUnit to set up the database once and let each test case automatically roll back its transaction

This is the traditional practice, and it still works quite well nowadays. However, it still requires developers to create an empty schema for DBUnit to connect to. For it to work, the project must have a transaction management framework that supports automatic rollback in the test environment. It also requires that the database operations happen within the test environment. For example, a functional test may send an HTTP request to the server; in this case, the database operation happens in the server itself rather than in the test environment, and we cannot do anything to roll it back.

Running the database in memory

This is a built-in feature of the Play framework. Developers work with an in-memory database in development mode and an external database in production mode. This is doable as long as developers only work with JPA entities and have no knowledge of the underlying database system.

Database evolutions

This is an idea borrowed from RoR.
Rather than setting up the database from the beginning, we can simply check the current version and sequentially run the delta scripts so that the database reaches the wanted schema. As above, it is expensive to do this yourself unless there is native support from a framework like RoR or Play.

CODING

I have been in the industry for 10 years, and I can tell you that software development is like fashion. There is no clear separation between good and bad practice. Things classified as bad practice may come back another day as new best practice. Let's summarize some of the heated debates we have had.

Access modifiers for class attributes

Most of us were taught that we should hide class attributes from external access. Instead, we are supposed to create a getter and setter for each attribute. I strictly followed this rule in my early days, even before I knew that most IDEs can auto-generate getters and setters. However, later I was introduced to the idea that setters are dangerous: they allow other developers to spoil your object and mess up the system. In that case, you should create immutable objects and not provide attribute setters. The challenge is how we should write our unit tests if we hide the setters for class attributes. Sometimes attributes are injected into the object using an IoC framework like Spring. If there is no framework around, we may need to use reflection utilities to insert mock dependencies into the object under test. I have seen many developers solve the problem this way, but I think it is over-engineering. If we compromise a bit, it is much more convenient to use the package-private modifier for the attribute. As a best practice, test cases are always in the same package as the implementation, so we should have no issue injecting mock dependencies. The package is normally controlled and contributed to by the same individual or team; therefore, the chance of spoiling the object is minimal. Finally, as package-private is the default modifier, it saves a few bytes of your code.
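The package-private compromise described above can be sketched as follows (all class names are illustrative, and the "test" is shown as a main method for brevity):

```java
// Production class: no public setter, the collaborator field is package-private
class GreetingService {
    MessageSource source = new DefaultMessageSource(); // injectable by same-package tests

    String greet() { return source.message() + ", world"; }
}

interface MessageSource { String message(); }

class DefaultMessageSource implements MessageSource {
    public String message() { return "Hello"; }
}

// A test living in the SAME package can swap in a mock directly,
// without a public setter and without any reflection utilities.
public class GreetingServiceDemo {
    public static void main(String[] args) {
        GreetingService service = new GreetingService();
        service.source = () -> "Hi"; // direct package-private access
        System.out.println(service.greet());
    }
}
```

Code outside the package still cannot touch `source`, so the object stays protected from casual spoiling while remaining easy to test.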
Monolithic application versus microservice architecture

When I joined the industry, the word "Enterprise" meant lots of XML, lots of deployment steps, huge applications, many layers and very domain-oriented code. As things evolved, we adopted ORM, learned to split business logic from presentation logic, and learned how to simplify and enrich our applications with AOP. It went well until I was told that the way to move forward is microservice architecture, which makes a Java Enterprise application function similarly to a PHP application. The biggest benefit we have with Java now may be the performance advantage of the Java Virtual Machine. When adopting microservice architecture, it is obvious that our applications will be database-oriented. Removing the layers also minimizes the benefit of AOP. The code base will surely shrink, but it will be harder to write unit tests.

Reference: Exploration of ideas from our JCG partner Nguyen Anh Tuan at the Developers Corner blog.

Beginner’s Guide to Hazelcast Part 2

This article continues the series that I have started featuring Hazelcast, a distributed, in-memory database. If one has not read the first post, please click here.

Distributed Collections

Hazelcast has a number of distributed collections that can be used to store data. Here is a list of them:

IList
ISet
IQueue

IList

IList is a collection that keeps the order of what is put in and can have duplicates. In fact, it implements the java.util.List interface. It is not thread safe, and one must use some sort of mutex or lock to control access by many threads. I suggest Hazelcast's ILock.

ISet

ISet is a collection that does not keep the order of the items placed in it. However, the elements are unique. This collection implements the java.util.Set interface. Like ILists, this collection is not thread safe. I suggest using the ILock again.

IQueue

IQueue is a collection that keeps the order of what comes in and allows duplicates. It implements java.util.concurrent.BlockingQueue, so it is thread safe. This is the most scalable of the collections because its capacity grows as the number of instances goes up. For instance, let's say there is a limit of 10 items for a queue. Once the queue is full, no more can go in there unless another Hazelcast instance comes up; then another 10 spaces are available, and a copy of the queue is also made. IQueues can also be persisted by implementing the QueueStore interface.

What They Have in Common

All three of them implement the ICollection interface. This means one can add an ItemListener to them, which lets one know when an item is added or removed. An example of this is in the Examples section.

Scalability

As scalability goes, ISet and IList don't do that well in Hazelcast 3.x. This is because the implementation changed from being map-based to becoming a collection in the MultiMap. This means they don't partition and don't go beyond a single machine.
Striping the collections can go a long way, and so can making one's own collections based on the mighty IMap. Another way is to implement Hazelcast's SPI.

Examples

Here is an example of an ISet, an IList and an IQueue. All three of them have an ItemListener. The ItemListener is added in the hazelcast.xml configuration file. One can also add an ItemListener programmatically, for those so inclined. A main class and the snippet of the configuration file that configures the collection will be shown.

CollectionItemListener

I implemented the ItemListener interface to show that all three of the collections can have an ItemListener. Here is the implementation:

package hazelcastcollections;

import com.hazelcast.core.ItemEvent;
import com.hazelcast.core.ItemListener;

/**
 * @author Daryl
 */
public class CollectionItemListener implements ItemListener {

    @Override
    public void itemAdded(ItemEvent ie) {
        System.out.println("ItemListener - itemAdded: " + ie.getItem());
    }

    @Override
    public void itemRemoved(ItemEvent ie) {
        System.out.println("ItemListener - itemRemoved: " + ie.getItem());
    }
}

ISet

Code

package hazelcastcollections.iset;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ISet;

/**
 * @author Daryl
 */
public class HazelcastISet {

    public static void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        HazelcastInstance instance2 = Hazelcast.newHazelcastInstance();
        ISet<String> set = instance.getSet("set");
        set.add("Once");
        set.add("upon");
        set.add("a");
        set.add("time");

        ISet<String> set2 = instance2.getSet("set");
        for (String s : set2) {
            System.out.println(s);
        }

        System.exit(0);
    }
}

Configuration

<set name="set">
  <item-listeners>
    <item-listener include-value="true">hazelcastcollections.CollectionItemListener</item-listener>
  </item-listeners>
</set>

IList

Code

package hazelcastcollections.ilist;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IList;

/**
 * @author Daryl
 */
public class HazelcastIlist {

    public static void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        HazelcastInstance instance2 = Hazelcast.newHazelcastInstance();
        IList<String> list = instance.getList("list");
        list.add("Once");
        list.add("upon");
        list.add("a");
        list.add("time");

        IList<String> list2 = instance2.getList("list");
        for (String s : list2) {
            System.out.println(s);
        }
        System.exit(0);
    }
}

Configuration

<list name="list">
  <item-listeners>
    <item-listener include-value="true">hazelcastcollections.CollectionItemListener</item-listener>
  </item-listeners>
</list>

IQueue

Code

I left this one for last because I have also implemented a QueueStore. There is no call on IQueue to add a QueueStore; one has to configure it in the hazelcast.xml file.

package hazelcastcollections.iqueue;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IQueue;

/**
 * @author Daryl
 */
public class HazelcastIQueue {

    public static void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        HazelcastInstance instance2 = Hazelcast.newHazelcastInstance();
        IQueue<String> queue = instance.getQueue("queue");
        queue.add("Once");
        queue.add("upon");
        queue.add("a");
        queue.add("time");

        IQueue<String> queue2 = instance2.getQueue("queue");
        for (String s : queue2) {
            System.out.println(s);
        }

        System.exit(0);
    }
}

QueueStore Code

package hazelcastcollections.iqueue;

import com.hazelcast.core.QueueStore;
import java.util.Collection;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;

/**
 * @author Daryl
 */
public class QueueQStore implements QueueStore<String> {

    @Override
    public void store(Long l, String t) {
        System.out.println("storing " + t + " with " + l);
    }

    @Override
    public void storeAll(Map<Long, String>
map) { System.out.println(“store all”); }@Override public void delete(Long l) { System.out.println(“removing ” + l); }@Override public void deleteAll(Collection<Long> clctn) { System.out.println(“deleteAll”); }@Override public String load(Long l) { System.out.println(“loading ” + l); return “”; }@Override public Map<Long, String> loadAll(Collection<Long> clctn) { System.out.println(“loadAll”); Map<Long, String> retMap = new TreeMap<>(); return retMap; }@Override public Set<Long> loadAllKeys() { System.out.println(“loadAllKeys”); return new TreeSet<>(); }} Configuration Some mention needs to be addressed when it comes to configuring the QueueStore. There are three properties that do not get passed to the implementation. The binary property deals with how Hazelcast will send the data to the store. Normally, Hazelcast stores the data serialized and deserializes it before it is sent to the QueueStore. If the property is true, then the data is sent serialized. The default is false. The memory-limit is how many entries are kept in memory before being put into the QueueStore. A 10000 memory-limit means that the 10001st is being sent to the QueueStore. At initialization of the IQueue, entries are being loaded from the QueueStore. The bulk-load property is how many can be pulled from the QueueStore at a time. <queue name=”queue”> <max-size>10</max-size> <item-listeners> <item-listener include-value=”true”>hazelcastcollections.CollectionItemListener</item-listener> </item-listeners> <queue-store> <class-name>hazelcastcollections.iqueue.QueueQStore</class-name> <properties> <property name=”binary”>false</property> <property name=”memory-limit”>10000</property> <property name=”bulk-load”>500</property> </properties> </queue-store> </queue>  Conclusion I hope one has learned about distributed collections inside Hazelcast. ISet, IList and IQueue were discussed. 
The ISet and IList stay only on the instance on which they were created, while the IQueue has a copy made on other instances, can be persisted, and its capacity increases as the number of instances increases. The code can be seen here.

References

The Book of Hazelcast: www.hazelcast.com
Hazelcast Documentation (comes with the Hazelcast download)

Reference: Beginner's Guide to Hazelcast Part 2 from our JCG partner Daryl Mathison at the Daryl Mathison's Java Blog blog....
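The memory-limit behaviour described in the configuration section above can be modelled in a few lines of plain Java. This is only a simplified sketch of the idea, not Hazelcast's actual implementation: once the in-memory part of the queue reaches the limit, further entries go to the backing store.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Simplified model of the memory-limit behaviour: entries beyond the
// in-memory limit are sent straight to the backing store. This is an
// illustration only, not how Hazelcast is implemented internally.
class MemoryLimitModel {
    final int memoryLimit;
    final Deque<String> inMemory = new ArrayDeque<>();
    final List<String> store = new ArrayList<>(); // stands in for the QueueStore

    MemoryLimitModel(int memoryLimit) {
        this.memoryLimit = memoryLimit;
    }

    void add(String item) {
        if (inMemory.size() < memoryLimit) {
            inMemory.addLast(item);
        } else {
            store.add(item); // the (limit + 1)th entry goes to the store
        }
    }

    public static void main(String[] args) {
        MemoryLimitModel queue = new MemoryLimitModel(3);
        for (int i = 1; i <= 5; i++) {
            queue.add("item-" + i);
        }
        System.out.println("in memory: " + queue.inMemory.size()); // in memory: 3
        System.out.println("in store: " + queue.store.size());     // in store: 2
    }
}
```

With a memory-limit of 3, the fourth and fifth entries bypass memory and land in the store, mirroring the "10001st entry" rule from the configuration description.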

Lightweight Integration Tests for Eclipse Extensions

Recently I introduced a little helper for Eclipse extension point evaluation. The auxiliary strives to reduce boilerplate code for common programming steps, while increasing development guidance and readability at the same time. This post is the promised follow-up that shows how to combine the utility with an AssertJ custom assert to write lightweight integration tests for Eclipse extensions.

Eclipse Extensions

In Eclipse, loose coupling is partially achieved by the mechanism of extension-points and extensions, whereby an extension serves as a contribution to a particular extension-point. However, the declarative nature of extensions and extension-points sometimes leads to problems that can be difficult to trace. This may be the case if the extension declaration has been removed by accident, the default constructor of an executable extension has been expanded with parameters, the plugin.xml has not been added to the build.properties, or the like. Depending on the PDE Error/Warning settings, one should be informed about a lot of these problems by markers, but somehow it happens again and again that contributions are not recognized and valuable time gets lost in error tracking. Because of this, it might be helpful to have lightweight integration tests in place to verify that a certain contribution actually is available. For general information on how to extend Eclipse using the extension point mechanism, you might refer to the Plug-in Development Environment Guide of the online documentation.
Integration tests with JUnit Plug-in Tests

Given the extension-point definition of the last post… an extension contribution could look like this:

<extension point="com.codeaffine.post.contribution">
  <contribution id="myContribution" class="com.codeaffine.post.MyContribution">
  </contribution>
</extension>

Assuming that we have a test-fragment as described in Testing Plug-ins with Fragments, we could introduce a PDETest to verify that the extension above exists with the given id and is instantiable by a default constructor. This test makes use of the RegistryAdapter introduced in the previous post and a specific custom assert called ExtensionAssert:

public class MyContributionPDETest {

  @Test
  public void testExtension() {
    Extension actual = new RegistryAdapter()
      .readExtension( "com.codeaffine.post.contribution" )
      .thatMatches( attribute( "id", "myContribution" ) )
      .process();

    assertThat( actual )
      .hasAttributeValue( "class", MyContribution.class.getName() )
      .isInstantiable( Runnable.class );
  }
}

As described in the previous post, RegistryAdapter#readExtension(String) reads exactly one extension for the given 'id' attribute. In case it detects more than one contribution with this attribute, an exception would be thrown. ExtensionAssert#assertThat(Extension) (used via static import) supplies an AssertJ custom assert with some common checks for extension contributions. The example verifies that the value of the 'class' attribute matches the fully qualified name of the contribution's implementation type, that the executable extension is actually instantiable using the default constructor, and that the instance is assignable to Runnable.

Where to get it?

For those who want to check it out, there is a P2 repository that contains the features com.codeaffine.eclipse.core.runtime and com.codeaffine.eclipse.core.runtime.test.util, providing the RegistryAdapter and the ExtensionAssert.
The repository is located at http://fappel.github.io/xiliary/ and the source code and issue tracker are hosted at https://github.com/fappel/xiliary. Although documentation is completely missing at the moment, it should be quite easy to get started with the explanations given in this and the previous post. But keep in mind that the features are in a very early state and will probably undergo some API changes. In particular, the assertions for nested extensions seem a bit too weak at the moment. In case you have ideas for improvement or find some bugs, the issue tracker is probably the best place to handle this; for everything else, feel free to use the comment section below.

Reference: Lightweight Integration Tests for Eclipse Extensions from our JCG partner Frank Appel at the Code Affine blog....
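The ExtensionAssert used above follows AssertJ's general custom-assert pattern: a static assertThat factory returns an assert object whose check methods throw AssertionError on failure and return this, so calls can be chained. A minimal plain-Java sketch of that pattern, with no AssertJ dependency — the StringAssert class and its check methods are made up here for illustration only:

```java
// Minimal sketch of the AssertJ custom-assert pattern: a static factory plus
// chainable check methods that throw AssertionError on failure.
// The class name and checks are invented for illustration.
class StringAssert {
    private final String actual;

    private StringAssert(String actual) {
        this.actual = actual;
    }

    static StringAssert assertThat(String actual) {
        return new StringAssert(actual);
    }

    StringAssert isNotEmpty() {
        if (actual == null || actual.isEmpty()) {
            throw new AssertionError("Expected a non-empty string but got: " + actual);
        }
        return this; // returning this is what makes the calls chainable
    }

    StringAssert startsWith(String prefix) {
        if (actual == null || !actual.startsWith(prefix)) {
            throw new AssertionError("Expected <" + actual + "> to start with <" + prefix + ">");
        }
        return this;
    }
}
```

With such a class in place, a test reads much like the ExtensionAssert example: StringAssert.assertThat("com.codeaffine.post").isNotEmpty().startsWith("com.codeaffine").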

Storing the state of an activity of your Android application

This is the last post in my series about saving data in your Android application. The previous posts went over the various ways to save data in your application:

Introduction: How to save data in your Android application
Saving data to a file in your Android application
Saving preferences in your Android application
Saving to a SQLite database in your Android application

This final post will explain when you should save the current state of your application so your users do not lose their data. There are two types of state that can be saved: you can save the current state of the UI if the user is interrupted while entering data, so they can resume input when the application is started again, or you can save the data to a more permanent store of data to access it at any time.

Saving the current UI state

When another application is launched and hides your application, the data your user entered may not be ready to be saved to a permanent store of data yet, since the input is not done. On the other hand, you should still save the current state of the activity so the user doesn't lose all their work if, for example, a phone call comes in. A configuration change of the Android device, like rotating the screen, will also have the same effect, so this is another good reason to save the state. When one of those events, or another event that requires saving the state of the activity, occurs, the Android SDK calls the onSaveInstanceState method of the current activity, which receives an android.os.Bundle object as a parameter. If you use standard views from the Android SDK and those views have unique identifiers, the state of those controls will be automatically saved to the bundle. But if you use multiple instances of a view that have the same identifier, for example by repeating a view using a ListView, the values entered in your controls will not be saved, since the identifier is duplicated.
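The reason duplicated identifiers lose state is easy to see if you think of the saved state as a key/value map keyed by view identifier: two views sharing an id write to the same key, and the second write overwrites the first. A rough illustration in plain Java, with a HashMap standing in for the Bundle (an assumption for illustration only, not the actual Bundle implementation):

```java
import java.util.HashMap;
import java.util.Map;

// A HashMap stands in for the saved-state Bundle: the view id is the key,
// so two views sharing an id clobber each other's saved value.
class DuplicateIdDemo {
    public static void main(String[] args) {
        Map<String, Integer> savedState = new HashMap<>();
        savedState.put("row_counter", 5); // first list row saves its value
        savedState.put("row_counter", 9); // second row with the same id overwrites it
        System.out.println(savedState.size());             // 1
        System.out.println(savedState.get("row_counter")); // 9
    }
}
```

Only one of the two values survives, which is exactly why repeated views in a ListView cannot rely on the automatic state saving.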
Also, if you create your own custom controls, you will have to save the state of those controls yourself. If you need to manually save the state, you must override the onSaveInstanceState method and add your own information to the android.os.Bundle received as a parameter as key/value pairs. This information will then be available later on, when the activity needs to be restored, in its onCreate and onRestoreInstanceState methods. All primitive types, and arrays of values of those types, can be saved to a bundle. If you want to save objects or an array of objects to the bundle, they must implement the java.io.Serializable or android.os.Parcelable interfaces. To demonstrate saving the state, I will use an upgraded version of the application used in the article about saving to a database, which is available on GitHub at http://github.com/CindyPotvin/RowCounter. The application manages row counters used for knitting projects, but it had no way to create a project. In the new version, the user can now create a new project, and the state of the project being created needs to be saved if the user is interrupted while creating it. For demonstration purposes, numerical values are entered using a custom CounterView control that does not handle saving the state, so we must save the state of each counter manually to the bundle.
@Override
public void onSaveInstanceState(Bundle savedInstanceState) {
    CounterView rowCounterAmountView = (CounterView)this.findViewById(R.id.row_counters_amount);
    savedInstanceState.putInt(ROW_COUNTERS_AMOUNT_STATE, rowCounterAmountView.getValue());

    CounterView rowAmountView = (CounterView)this.findViewById(R.id.rows_amount);
    savedInstanceState.putInt(ROWS_AMOUNT_STATE, rowAmountView.getValue());

    // Call the superclass to save the state of all the other controls in the view hierarchy
    super.onSaveInstanceState(savedInstanceState);
}

When the user navigates back to the application, the activity is recreated automatically by the Android SDK from the information that was saved in the bundle. At that point, you must also restore the UI state for your custom controls. You can restore the UI state using the data you saved to the bundle from two methods of your activity: the onCreate method, which is called first when the activity is recreated, or the onRestoreInstanceState method, which is called after the onStart method. You can restore the state in one method or the other, and in most cases it won't matter, but both are available in case some initialization needs to be done after the onCreate and onStart methods. Here are the two possible ways to restore the state of the activity using the bundle saved in the previous example:

@Override
protected void onCreate(Bundle savedInstanceState) {
    [...Normal initialization of the activity...]

    // Check if a previously destroyed activity is being recreated.
    // If a new activity is created, the savedInstanceState will be empty
    if (savedInstanceState != null) {
        // Restore the values of the counters from the saved state
        CounterView rowCounterAmountView;
        rowCounterAmountView = (CounterView)this.findViewById(R.id.row_counters_amount);
        rowCounterAmountView.setValue(savedInstanceState.getInt(ROW_COUNTERS_AMOUNT_STATE));

        CounterView rowAmountView = (CounterView)this.findViewById(R.id.rows_amount);
        rowAmountView.setValue(savedInstanceState.getInt(ROWS_AMOUNT_STATE));
    }
}

@Override
protected void onRestoreInstanceState(Bundle savedInstanceState) {
    // Call the superclass to restore the state of all the other controls in the view hierarchy
    super.onRestoreInstanceState(savedInstanceState);

    // Restore the values of the counters from the saved state
    CounterView rowCounterAmountView = (CounterView)this.findViewById(R.id.row_counters_amount);
    rowCounterAmountView.setValue(savedInstanceState.getInt(ROW_COUNTERS_AMOUNT_STATE));

    CounterView rowAmountView = (CounterView)this.findViewById(R.id.rows_amount);
    rowAmountView.setValue(savedInstanceState.getInt(ROWS_AMOUNT_STATE));
}

Remember, saving data in a bundle is not meant to be a permanent store of data, since it only stores the current state of the view: it is not part of the activity lifecycle and is only used when the activity needs to be recreated or is sent to the background. This means that the onSaveInstanceState method is not called when the application is destroyed, since the activity state could never be restored. To save data that should never be lost, you should save the data to one of the permanent data stores described earlier in this series. But when should this data be stored?

Saving your data to a permanent data store

If you need to save data to a permanent data store when the activity is sent to the background or destroyed for any reason, you must save your data in the onPause method of the activity.
The onStop method is called only if the UI is completely hidden, so you cannot rely on it being called all the time. All the essential data must be saved at this point, because you have no control over what happens afterwards: the user may kill the application, for example, and the data would be lost. In the previous version of the application, when the user incremented the counter for a row in a project, the application saved the current value of the counter to the database every time. Now we'll save the data only when the user leaves the activity, so saving at each counter press is no longer required:

@Override
public void onPause() {
    super.onPause();
    ProjectsDatabaseHelper database = new ProjectsDatabaseHelper(this);
    // Update the value of all the counters of the project in the database, since
    // the activity is destroyed or sent to the background
    for (RowCounter rowCounter : mRowCounters) {
        database.updateRowCounterCurrentAmount(rowCounter);
    }
}

Later on, if your application was not destroyed and the user accesses the activity again, one of two possible processes can occur, depending on how the Android OS handled your activity. If the activity was still in memory, for example if the user opened another application and came back immediately, the onRestart method is called first, followed by a call to the onStart method, and finally the onResume method is called and the activity is shown to the user. But if the activity was recycled and is being recreated, for example if the user rotated the device so the layout is recreated, the process is the same as for a new activity: the onCreate method is called first, followed by a call to the onStart method, and finally the onResume method is called and the activity is shown to the user.
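The two call sequences just described differ only in their first callback; both converge on onStart and then onResume. A tiny plain-Java model of the two paths — this is an illustration of the ordering only, not the Android SDK itself:

```java
import java.util.Arrays;
import java.util.List;

// Plain-Java model of the two callback sequences described above: an activity
// still in memory restarts, a recycled one is recreated, and both paths end
// with onStart and onResume. Illustration only, not the Android SDK.
class LifecyclePaths {
    static List<String> callbacks(boolean wasRecycled) {
        String first = wasRecycled ? "onCreate" : "onRestart";
        return Arrays.asList(first, "onStart", "onResume");
    }

    public static void main(String[] args) {
        System.out.println(callbacks(false)); // [onRestart, onStart, onResume]
        System.out.println(callbacks(true));  // [onCreate, onStart, onResume]
    }
}
```

Since both paths end in onResume, that method is the one callback guaranteed to run before the activity is shown, whichever path was taken.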
So, if you want to use the data that was saved to a permanent data store to initialize controls in your activity that lose their state, you should put your code in the onResume method, since it is always called, regardless of whether the activity was recreated or not. In the previous example, it is not necessary to explicitly restore the data, since no custom controls were used: if the activity was recycled, it is recreated from scratch and the onCreate method initializes the controls from the data in the database. If the activity is still in memory, there is nothing else to do: the Android SDK handles showing the values as they were first displayed, as explained earlier in the section about saving UI states. Here is a reminder of what happens in the onCreate method:

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.project_activity);
    Intent intent = getIntent();
    long projectId = intent.getLongExtra("project_id", -1);
    // Gets the database helper to access the database for the application
    ProjectsDatabaseHelper database = new ProjectsDatabaseHelper(this);
    // Use the helper to get the current project
    Project currentProject = database.getProject(projectId);

    TextView projectNameView = (TextView)findViewById(R.id.project_name);
    projectNameView.setText(currentProject.getName());

    // Initialize the listview to show the row counters for the project from the database
    ListView rowCounterList = (ListView)findViewById(R.id.row_counter_list);
    mRowCounters = currentProject.getRowCounters();
    ListAdapter rowCounterListAdapter = new RowCounterAdapter(this,
            R.layout.rowcounter_row, currentProject.getRowCounters());
    rowCounterList.setAdapter(rowCounterListAdapter);
}

This concludes the series about saving data in your Android application.
You now know about the various types of data storage that are available for Android applications and when they should be used, so your users never lose their data and have the best user experience possible.

Reference: Storing the state of an activity of your Android application from our JCG partner Cindy Potvin at the Web, Mobile and Android Programming blog....