Hazelcast’s MapLoader pitfalls

One of the core data structures provided by Hazelcast is IMap&lt;K, V&gt;, extending java.util.concurrent.ConcurrentMap – which is basically a distributed map, often used as a cache. You can configure such a map to use a custom MapLoader&lt;K, V&gt; – a piece of Java code that will be asked every time you try to .get() something from that map (by key) which is not yet there. This is especially useful when you use IMap as a distributed in-memory cache – if client code asks for something that wasn’t cached yet, Hazelcast will transparently execute your MapLoader.load(key):

public interface MapLoader<K, V> {
    V load(K key);
    Map<K, V> loadAll(Collection<K> keys);
    Set<K> loadAllKeys();
}

The remaining two methods are used during startup to optionally warm up the cache by loading a pre-defined set of keys. Your custom MapLoader can reach out to a (No)SQL database, a web service, the file system, you name it. Working with such a cache is much more convenient because you don’t have to implement the tedious “if not in cache, load and put in cache” cycle. Moreover, MapLoader has a fantastic feature – if many clients ask for the same key at the same time (from different threads, or even different cluster members – thus machines), MapLoader is executed only once. This significantly decreases load on external dependencies, without introducing any complexity. In essence, IMap with MapLoader is similar to the LoadingCache found in Guava – but distributed. However, with great power comes great frustration, especially when you don’t understand the peculiarities of the API and the inherent complexity of a distributed system. First let’s see how to configure a custom MapLoader. You can use hazelcast.xml for that (<map-store/> element), but you then have no control over the life-cycle of your loader (e.g. you can’t use a Spring bean).
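For comparison, here is what that tedious cycle looks like by hand, and how a single-JVM analogue (computeIfAbsent on a plain ConcurrentMap – my own sketch, not Hazelcast code) collapses it into one call:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class LoaderCycle {

    // Pretend this is an expensive call to a database or web service
    static String expensiveLoad(int key) {
        return "value-" + key;
    }

    public static void main(String[] args) {
        ConcurrentMap<Integer, String> cache = new ConcurrentHashMap<>();

        // The manual "if not in cache, load and put in cache" cycle:
        String value = cache.get(42);
        if (value == null) {
            value = expensiveLoad(42);
            cache.putIfAbsent(42, value);
        }

        // The same logic in one call; this is roughly the convenience
        // IMap + MapLoader gives you, except distributed and with the
        // guarantee that concurrent callers trigger only one load:
        String value2 = cache.computeIfAbsent(43, LoaderCycle::expensiveLoad);

        System.out.println(value);   // value-42
        System.out.println(value2);  // value-43
    }
}
```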
A better idea is to configure Hazelcast directly from code and pass an instance of MapLoader:

class HazelcastTest extends Specification {
    public static final int ANY_KEY = 42
    public static final String ANY_VALUE = "Forty two"

    def 'should use custom loader'() {
        given:
        MapLoader loaderMock = Mock()
        loaderMock.load(ANY_KEY) >> ANY_VALUE
        def hz = build(loaderMock)
        IMap<Integer, String> emptyCache = hz.getMap("cache")

        when:
        def value = emptyCache.get(ANY_KEY)

        then:
        value == ANY_VALUE

        cleanup:
        hz?.shutdown()
    }

Notice how we obtain an empty map, but when asked for ANY_KEY, we get ANY_VALUE in return. This is no surprise – this is what our loaderMock was told to do. I left out the Hazelcast configuration above; here it is:

def HazelcastInstance build(MapLoader<Integer, String> loader) {
    final Config config = new Config("Cluster")
    final MapConfig mapConfig = config.getMapConfig("default")
    final MapStoreConfig mapStoreConfig = new MapStoreConfig()
    mapStoreConfig.factoryImplementation = {name, props -> loader } as MapStoreFactory
    mapConfig.mapStoreConfig = mapStoreConfig
    return Hazelcast.getOrCreateHazelcastInstance(config)
}

Any IMap (identified by name) can have a different configuration. However, the special "default" map specifies the default configuration for all maps.
Let’s play a bit with custom loaders and see how they behave when MapLoader returns null or throws an exception:

def 'should return null when custom loader returns it'() {
    given:
    MapLoader loaderMock = Mock()
    def hz = build(loaderMock)
    IMap<Integer, String> cache = hz.getMap("cache")

    when:
    def value = cache.get(ANY_KEY)

    then:
    value == null
    !cache.containsKey(ANY_KEY)

    cleanup:
    hz?.shutdown()
}

public static final String SOME_ERR_MSG = "Don't panic!"

def 'should propagate exceptions from loader'() {
    given:
    MapLoader loaderMock = Mock()
    loaderMock.load(ANY_KEY) >> {throw new UnsupportedOperationException(SOME_ERR_MSG)}
    def hz = build(loaderMock)
    IMap<Integer, String> cache = hz.getMap("cache")

    when:
    cache.get(ANY_KEY)

    then:
    UnsupportedOperationException e = thrown()
    e.message.contains(SOME_ERR_MSG)

    cleanup:
    hz?.shutdown()
}

MapLoader is executed in a separate thread

So far nothing surprising. The first trap you might encounter is how threads interact here. MapLoader is never executed from the client thread, always from a separate thread pool:

def 'loader works in a different thread'() {
    given:
    MapLoader loader = Mock()
    loader.load(ANY_KEY) >> {key -> "$key: ${Thread.currentThread().name}"}
    def hz = build(loader)
    IMap<Integer, String> cache = hz.getMap("cache")

    when:
    def value = cache.get(ANY_KEY)

    then:
    value != "$ANY_KEY: ${Thread.currentThread().name}"

    cleanup:
    hz?.shutdown()
}

This test passes because the current thread is "main" while loading occurs from within something like "hz.Cluster.partition-operation.thread-10". This is an important observation, and it is actually quite obvious if you remember that when many threads try to access the same absent key, the loader is called only once. But more needs to be explained here. Almost every operation on IMap is encapsulated into one of the operation objects (see also: Command pattern). This operation is later dispatched to one or all cluster members and executed remotely in a separate thread pool, or even on a different machine.
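Which member an operation (and thus a load) is dispatched to is determined by the key’s partition. A plain-Java sketch of the idea – heavily simplified, since Hazelcast actually hashes the serialized key into one of 271 partitions by default and maintains a partition table rather than the naive striping shown here:

```java
import java.util.Arrays;
import java.util.List;

public class PartitionSketch {
    static final int PARTITION_COUNT = 271; // Hazelcast's default

    // Map a key to a partition; the member owning that partition
    // is the one that will run MapLoader.load() for it
    static int partitionFor(Object key) {
        return Math.abs(key.hashCode() % PARTITION_COUNT);
    }

    public static void main(String[] args) {
        List<String> members = Arrays.asList("A", "B", "C", "D");
        int partition = partitionFor(42);
        // Illustrative owner assignment: partitions striped across members
        String owner = members.get(partition % members.size());
        // No matter which member calls get(42), this owner loads the value
        System.out.println("key 42 -> partition " + partition + " owned by " + owner);
    }
}
```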
Thus, don’t expect loading to occur in the same thread, or even the same JVM/server (!). This leads to an interesting situation where you request a given key on one machine, but the actual loading happens on another. Or even more epic – machines A, B and C request a given key whereas machine D physically loads the value for that key. The decision which machine is responsible for loading is made based on a consistent hashing algorithm. One final remark – of course you can customize the size of the thread pools running these operations, see Advanced Configuration Properties.

IMap.remove() calls MapLoader

This one is totally surprising at first, yet to be expected once you think about it:

def 'IMap.remove() on non-existing key still calls loader (!)'() {
    given:
    MapLoader loaderMock = Mock()
    def hz = build(loaderMock)
    IMap<Integer, String> emptyCache = hz.getMap("cache")

    when:
    emptyCache.remove(ANY_KEY)

    then:
    1 * loaderMock.load(ANY_KEY)

    cleanup:
    hz?.shutdown()
}

Look carefully! All we do is remove an absent key from a map. Nothing else. Yet loaderMock.load() was executed. This is a problem, especially when your custom loader is particularly slow or expensive. Why was it executed here? Look up the API of java.util.Map#remove():

V remove(Object key) [...] Returns the value to which this map previously associated the key, or null if the map contained no mapping for the key.

Maybe it’s controversial, but one might argue that Hazelcast is doing the right thing. If you consider our map with MapLoader attached as sort of a view onto external storage, it makes sense. When removing an absent key, Hazelcast actually asks our MapLoader: what could have been the previous value? It pretends as if the map contained every single value returned from MapLoader, but loaded lazily.
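The java.util.Map contract that Hazelcast is honoring here is easy to verify on a plain in-process map – remove() (and, as we’ll see, put()) always hands back the previous value:

```java
import java.util.HashMap;
import java.util.Map;

public class PreviousValueContract {
    public static void main(String[] args) {
        Map<Integer, String> map = new HashMap<>();

        // put() on an absent key returns the "previous" value: null
        System.out.println(map.put(42, "Forty two"));   // null

        // put() on a present key returns what was there before
        System.out.println(map.put(42, "Forty three")); // Forty two

        // remove() also returns the previous value...
        System.out.println(map.remove(42));             // Forty three

        // ...or null when the key was absent. A HashMap knows the answer
        // is null; an IMap with a MapLoader has to ask the loader first.
        System.out.println(map.remove(42));             // null
    }
}
```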
This is not a bug, since there is a special method IMap.delete() that works just like remove() but doesn’t load the “previous” value:

@Issue("https://github.com/hazelcast/hazelcast/issues/3178")
def "IMap.delete() doesn't call loader"() {
    given:
    MapLoader loaderMock = Mock()
    def hz = build(loaderMock)
    IMap<Integer, String> cache = hz.getMap("cache")

    when:
    cache.delete(ANY_KEY)

    then:
    0 * loaderMock.load(ANY_KEY)

    cleanup:
    hz?.shutdown()
}

Actually, there was a bug: IMap.delete() should not call MapLoader.load(), fixed in 3.2.6 and 3.3. If you haven’t upgraded yet, even IMap.delete() will go to MapLoader. If you think IMap.remove() is surprising, check out how put() works!

IMap.put() calls MapLoader

If you thought remove() loading a value first is suspicious, what about an explicit put() loading a value for a given key first? After all, we are explicitly putting something into the map by key – why does Hazelcast load this value first via MapLoader?

def 'IMap.put() on non-existing key still calls loader (!)'() {
    given:
    MapLoader loaderMock = Mock()
    def hz = build(loaderMock)
    IMap<Integer, String> emptyCache = hz.getMap("cache")

    when:
    emptyCache.put(ANY_KEY, ANY_VALUE)

    then:
    1 * loaderMock.load(ANY_KEY)

    cleanup:
    hz?.shutdown()
}

Again, let’s refer to the java.util.Map.put() JavaDoc:

V put(K key, V value) [...] Returns: the previous value associated with key, or null if there was no mapping for key.

Hazelcast pretends that IMap is just a lazy view over some external source, so when we put() something into an IMap that wasn’t there before, it first loads the “previous” value so that it can return it. Again, this is a big issue when MapLoader is slow or expensive – if we can explicitly put something into the map, why load it first?
Luckily there is a straightforward workaround, putTransient():

def "IMap.putTransient() doesn't call loader"() {
    given:
    MapLoader loaderMock = Mock()
    def hz = build(loaderMock)
    IMap<Integer, String> cache = hz.getMap("cache")

    when:
    cache.putTransient(ANY_KEY, ANY_VALUE, 1, TimeUnit.HOURS)

    then:
    0 * loaderMock.load(ANY_KEY)

    cleanup:
    hz?.shutdown()
}

One caveat is that you have to provide the TTL explicitly, rather than relying on the configured IMap defaults. But this also means you can assign an arbitrary TTL to every map entry, not only globally to the whole map – useful.

IMap.containsKey() involves MapLoader, can be slow or block

Remember our analogy: IMap with a backing MapLoader behaves like a view over an external source of data. That’s why it shouldn’t be a surprise that containsKey() on an empty map will call MapLoader:

def "IMap.containsKey() calls loader"() {
    given:
    MapLoader loaderMock = Mock()
    def hz = build(loaderMock)
    IMap<Integer, String> emptyMap = hz.getMap("cache")

    when:
    emptyMap.containsKey(ANY_KEY)

    then:
    1 * loaderMock.load(ANY_KEY)

    cleanup:
    hz?.shutdown()
}

Every time we ask for a key that’s not present in the map, Hazelcast will ask MapLoader. Again, this is not an issue as long as your loader is fast, side-effect free and reliable. If this is not the case, this will kill you:

def "IMap.get() after IMap.containsKey() calls loader twice"() {
    given:
    MapLoader loaderMock = Mock()
    def hz = build(loaderMock)
    IMap<Integer, String> cache = hz.getMap("cache")

    when:
    cache.containsKey(ANY_KEY)
    cache.get(ANY_KEY)

    then:
    2 * loaderMock.load(ANY_KEY)

    cleanup:
    hz?.shutdown()
}

Despite containsKey() calling MapLoader, it doesn’t “cache” the loaded value for later use. That’s why containsKey() followed by get() calls MapLoader two times – quite wasteful. Luckily, if you call containsKey() on an existing key it runs almost immediately, although it will most likely require a network hop.
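The double-load pitfall is easy to reproduce with a toy, single-JVM “map as a view” class – entirely my own sketch, not Hazelcast code – that consults a loader on every miss but caches nothing afterwards:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// A toy "view over external storage": every miss consults the loader,
// but (like IMap.containsKey) the loaded value is not kept around.
public class ViewOverLoader {
    private final Map<Integer, String> inMemory = new HashMap<>();
    private final Function<Integer, String> loader;
    int loaderCalls = 0;

    ViewOverLoader(Function<Integer, String> loader) {
        this.loader = loader;
    }

    private String load(Integer key) {
        loaderCalls++;
        return loader.apply(key);
    }

    boolean containsKey(Integer key) {
        return inMemory.containsKey(key) || load(key) != null;
    }

    String get(Integer key) {
        return inMemory.containsKey(key) ? inMemory.get(key) : load(key);
    }

    public static void main(String[] args) {
        ViewOverLoader view = new ViewOverLoader(key -> "value-" + key);
        // containsKey() consults the loader, then get() consults it again
        view.containsKey(42);
        view.get(42);
        System.out.println(view.loaderCalls); // 2 – one expensive call wasted
    }
}
```

Calling get() once and checking the result for null avoids the wasted call.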
What is not so fortunate is the behaviour of keySet(), values(), entrySet() and a few other methods before version 3.3 of Hazelcast. These would all block while any key was being loaded. So if you have a map with thousands of keys and you ask for keySet(), one slow MapLoader.load() invocation will block the whole cluster. This was fortunately fixed in 3.3, so that IMap.keySet(), IMap.values(), etc. do not block, even when some keys are being computed at the moment.

As you can see, the IMap + MapLoader combo is powerful, but also filled with traps. Some of them are dictated by the API, some by the distributed nature of Hazelcast, and finally some are implementation specific. Be sure you understand them before implementing a loading cache feature.

Reference: Hazelcast’s MapLoader pitfalls from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog....

Log your miles and community runs: Java EE 7 Real World Experience

miles2run.org is an easy way to track your running activities and share them with friends and family. Day-based or distance-based goals can be created and then tracked. It also allows you to create community run goals in which multiple runners participate and track their activities toward that goal. You can also find local runners and connect with them. The project was started to help track running activities for #JavaOneStreak. The goal was to run at least a mile every day all the way until JavaOne and use this website to track the runs. There are tons of sophisticated applications and websites that allow you to track running activity. Most of them provide integration with your GPS watch, phone GPS, and other fancy features. Some of them even allow creating a group, but none of them is based on Java! The application is hosted as a website and built using HTML5 and Java EE 7. The landing page provides a summary of the total runners, their cities/countries, and the miles and hours logged so far.

The website can be viewed on a desktop, tablet, or cell phone. Runners can log in to the website using common social brokers such as Facebook, Google, and Twitter. Anybody can click on “Community Runs” in the top-right corner to see which group runs have been created so far; these can only be created by an administrator. The group run page for JavaOne shows how many runners have joined this run and other statistics.

Each runner is presented with a dashboard showing how much distance they’ve run so far and total/completed/remaining/missed days. A visual representation of the progress and a heat map of the activity calendar are shown, along with a line chart of mileage over the days and a summary of activities over the past months. Runners also have the opportunity to follow other runners and track their activities.
Here is a conceptual view of the application, and a technology view of it, followed by a brief description of the technology stack:

Presentation
- Thymeleaf template engine; views rendered by JAX-RS
- Social brokering using native APIs for Facebook, Google, Twitter

Middle tier
- @Stateless EJBs for all transactional JPA interactions, @Asynchronous for posting status to social networks
- JAX-RS for exposing REST endpoints; ContainerRequestFilter and ContainerResponseFilter for cross-cutting concerns like authentication, injecting the profile, and CORS
- Bean Validation constraints in JAX-RS resources
- Bean discovery mode="all"

Backend
- CDI producers for creating EntityManagers and other configuration objects, like Redis connection pool objects or the MongoDB configuration object; all NoSQL services are @ApplicationScoped
- Native drivers for Redis and MongoDB: Jedis for Redis and the MongoDB Java driver for MongoDB; CDI services wrap these driver APIs and expose business functionality the other layers can use
- JPA + Bean Validation; database scripts are generated from the JPA model, added to source control, and manually applied to the database; using @Index and entity graphs
- Servlets for the callback URLs of the social brokers; error pages configured using <error-page>
- MySQL for all business entities like Activity, Goal, User Profile, etc.
- Redis for storing counters and timeline data
- MongoDB for location-based user recommendations and follower/following recommendations

Technologies from outside the platform (JavaScript and others):
- D3.js and C3.js for visually appealing graphs
- AngularJS for views
- Cal-Heatmap for the calendar heat map
- jQuery
- Google Geocoding API to convert location text to latitude and longitude
- Jadira Usertype for storing dates in UTC
- Joda-Time for working with dates

Thymeleaf was used instead of JavaServer Faces because:
- It allows JAX-RS to be used as an MVC framework, rendering server-side HTML pages while exposing REST services.
- This application is a single-page application built using AngularJS, so we needed a lightweight approach to render server-side pages. JAX-RS along with Thymeleaf renders the main HTML5 page, and AngularJS then renders the different partials/views on that page. For example, the main home page is rendered by JAX-RS and Thymeleaf; as you work with different sections of that page, they are all part of an SPA managed by AngularJS.
- Thymeleaf documents are valid HTML5 documents, so you can work with them offline for static prototyping.

Redis is used for storing all the counters, like the number of runners and cities, and counters specific to a goal, like the total distance covered in a goal. To avoid lots of reads and writes hitting the database, an in-memory database is used, so all read and write operations are very performant. Redis counters are atomic, which means there are no concurrency issues associated with them; the INCR and INCRBY Redis operations are used to update counters. MongoDB is used for location data.

Toolset
- JDK 8
- IntelliJ 13.1 with Maven
- WildFly 8.1.0.Final – development was done against a local WildFly instance and then pushed to scalable WildFly instances on OpenShift for deployment; HA Proxy is used as the load balancer

The advantage of working with OpenShift is that there is no OpenShift-specific code in your application, so the same application that works locally is deployed to the test and production environments.
You just have to use environment variables to abstract out environment-specific configuration.

Github

Planned updates:
- Use Jenkins for continuous integration and to manage deployments
- JPA 2.1 converters instead of Jadira
- Keycloak instead of the native social brokers
- Open-source the application

Wish list for Java EE 8:
- Integration with OAuth providers
- A real MVC framework with support for pluggable template engines
- Seamless work with NoSQL databases

Download WildFly 8.1 today, learn the technology by reading/trying the Java EE 7 samples, and browse through the Java EE 7 resources. Or if you want to be on the bleeding edge, check out WildFly 9.0. Many thanks to Shekhar Gulati (@shekhargulati) for authoring the application and providing all the answers!

Reference: Log your miles and community runs: Java EE 7 Real World Experience from our JCG partner Arun Gupta at the Miles to go 2.0 … blog....

Asynchronous SQL Execution with jOOQ and Java 8’s CompletableFuture

Reactive programming is the new buzzword, which essentially just means asynchronous programming or messaging. The fact is that functional syntax greatly helps with structuring asynchronous execution chains, and today we’ll see how we can do this in Java 8 using jOOQ and the new CompletableFuture API. In fact, things are quite simple:

// Initiate an asynchronous call chain
CompletableFuture

    // This lambda will supply an int value
    // indicating the number of inserted rows
    .supplyAsync(() -> DSL
        .using(configuration)
        .insertInto(AUTHOR, AUTHOR.ID, AUTHOR.LAST_NAME)
        .values(3, "Hitchcock")
        .execute()
    )

    // This will supply an AuthorRecord value
    // for the newly inserted author
    .handleAsync((rows, throwable) -> DSL
        .using(configuration)
        .fetchOne(AUTHOR, AUTHOR.ID.eq(3))
    )

    // This should supply an int value indicating
    // the number of rows, but in fact it'll throw
    // a constraint violation exception
    .handleAsync((record, throwable) -> {
        record.changed(true);
        return record.insert();
    })

    // This will supply an int value indicating
    // the number of deleted rows
    .handleAsync((rows, throwable) -> DSL
        .using(configuration)
        .delete(AUTHOR)
        .where(AUTHOR.ID.eq(3))
        .execute()
    )

    // This tells the calling thread to wait for all
    // chained execution units to be executed
    .join();

What really happened here? Nothing out of the ordinary. There are four execution blocks:
- One that inserts a new AUTHOR
- One that fetches that same AUTHOR again
- One that re-inserts the newly fetched AUTHOR (throwing an exception)
- One that ignores the thrown exception and deletes the AUTHOR again

Finally, when the execution chain is established, the calling thread joins the whole chain using the CompletableFuture.join() method, which is essentially the same as the Future.get() method, except that it doesn’t throw any checked exception.

Comparing this to other APIs

Other APIs, like Scala’s Slick, have implemented similar things via a “standard API”, such as calls to flatMap().
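The same handleAsync semantics – each stage receives either the previous result or its exception, so a failing stage doesn’t abort the chain – can be observed without jOOQ or a database, in plain JDK code (my own stand-in values, no SQL involved):

```java
import java.util.concurrent.CompletableFuture;

public class HandleAsyncDemo {

    static String runChain() {
        return CompletableFuture

            // Stage 1: supplies a value (stands in for the INSERT)
            .supplyAsync(() -> 3)

            // Stage 2: throws, simulating the constraint violation
            .<Integer>handleAsync((id, throwable) -> {
                throw new IllegalStateException("constraint violated for id " + id);
            })

            // Stage 3: receives the previous stage's exception (wrapped in a
            // CompletionException) and recovers, just like the delete step
            // runs after the failed re-insert in the jOOQ chain
            .handleAsync((rows, throwable) ->
                throwable == null
                    ? "ok"
                    : "recovered: " + throwable.getCause().getMessage())

            // Wait for the whole chain; join() throws no checked exceptions
            .join();
    }

    public static void main(String[] args) {
        System.out.println(runChain()); // recovered: constraint violated for id 3
    }
}
```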
We’re currently not going to mimic such APIs, as we believe that the new Java 8 APIs will become much more idiomatic to native Java speakers. Specifically, when executing SQL, getting connection pooling and transactions right is of the essence. The semantics of asynchronously chained execution blocks, and how they relate to transactions, are very subtle. If you want a transaction to span more than one such block, you will have to encode this yourself via jOOQ’s Configuration and its contained ConnectionProvider.

Blocking JDBC

Obviously, there will always be one blocking barrier in such solutions, and that is JDBC itself – which is very hard to turn into an asynchronous API. In fact, few databases really support asynchronous query execution and cursors, as most often a single database session can only be used by a single thread for a single query at a time.

Reference: Asynchronous SQL Execution with jOOQ and Java 8’s CompletableFuture from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....

How to run junit tests inside the android project

Hi there! Today I’m going to show you how to create and run JUnit tests inside your Android project without creating a separate test project. With these tests we will quickly be able to automate and test the app’s logic and some simple UI behaviors. The example below is very straightforward and much more intuitive than other approaches I’ve seen out there.

Defining the TestInstrumentation

First of all, define the following entries in your manifest file. IMPORTANT: while the definition of the test instrumentation is placed outside your application tag, the test runner must be defined inside your application tag.

<manifest>
    ...
    <instrumentation
        android:name="android.test.InstrumentationTestRunner"
        android:targetPackage="com.treslines.ponto" />
</manifest>

<application>
    ...
    <uses-library android:name="android.test.runner" />
    ...
</application>

Creating the test packages

Android works with some conventions while testing, so it is extremely important to follow those conventions; otherwise you’ll get compile or run errors when trying to run the tests. One convention is that all tests for a specific class must be placed in the same package structure, but within a further sub-folder called test, as you can see below. Because I want to test the activities in the package com.treslines.ponto.activity, I must create a test package called com.treslines.ponto.activity.test.

Creating the test itself

That’s the cool part of it. Here you can write your JUnit tests as usual. Again, Android gives us some conventions to follow. All test classes must have the same name as the class under test, with the suffix Test on it, and all test methods must start with the prefix test. If you follow those conventions, everything will work just fine.
// IMPORTANT: All test cases MUST have the suffix "Test" at the end.
//
// THEN:
// Define this in your manifest outside your application tag:
// <instrumentation
//     android:name="android.test.InstrumentationTestRunner"
//     android:targetPackage="com.treslines.ponto" />
//
// AND:
// Define this inside your application tag:
// <uses-library android:name="android.test.runner" />
//
// The activity you want to test is the "T" type of ActivityInstrumentationTestCase2
public class AlertaActivityTest extends ActivityInstrumentationTestCase2<AlertaActivity> {

    private AlertaActivity alertaActivity;
    private AlertaController alertaController;

    public AlertaActivityTest() {
        // create a default constructor and pass the activity class
        // you want to test to the super() constructor
        super(AlertaActivity.class);
    }

    @Override // here is the place to set up the variables you want to test
    protected void setUp() throws Exception {
        super.setUp();
        // because I want to test the UI in the method testAlertasOff()
        // I must set this attribute to true
        setActivityInitialTouchMode(true);

        // init variables
        alertaActivity = getActivity();
        alertaController = alertaActivity.getAlertaController();
    }

    // usually we test some pre-conditions. This method is provided
    // by the test framework and is called after setUp()
    public void testPreconditions() {
        assertNotNull("alertaActivity is null", alertaActivity);
        assertNotNull("alertaController is null", alertaController);
    }

    // test methods MUST start with the prefix "test"
    public void testVibrarSomAlertas() {
        assertEquals(true, alertaController.getView().getVibrar().isChecked());
        assertEquals(true, alertaController.getView().getSom().isChecked());
        assertEquals(true, alertaController.getView().getAlertas().isChecked());
    }

    // test methods MUST start with the prefix "test"
    public void testAlertasOff() {
        Switch alertas = alertaController.getView().getAlertas();
        // because I want to simulate a click on a view, I must use TouchUtils
        TouchUtils.clickView(this, alertas);
        // wait a little (1.5 sec) because the UI needs its time
        // to change the switch's state, then check the new state of the switches
        new Handler().postDelayed(new Runnable() {
            @Override
            public void run() {
                assertEquals(false, alertaController.getView().getVibrar().isChecked());
                assertEquals(false, alertaController.getView().getSom().isChecked());
            }
        }, 1500);
    }
}

Running the JUnit tests

The only difference when running JUnit tests in Android is that you’ll be calling Run As > Android JUnit Test instead of just JUnit Test as you are used to in Java. That’s all! Hope you like it!

Reference: How to run junit tests inside the android project from our JCG partner Ricardo Ferreira at the Clean Code Development – Quality Seal blog....

For All You Know, It’s Just a Java Library

Blast from the past… I wrote this in May 2008… and I’ve got to say, I was pretty spot-on, including Java 8 adopting some of Scala’s better features: It’s starting to happen… the FUD around Scala. The dangers of Scala. The “operational risks” of using Scala. It started back in November when I started doing lift and Scala projects for hire. The questions were reasonable and rational:

Q: If you build this project in Scala and lift, who else can maintain it?
A: The lift community has 500+ members (300+ back in November), and at least 3 (now 10) of the people on the lift list are actively seeking Scala and lift related gigs.

Q: What does lift give us?
A: A much faster time to market with Web 2.0, collaborative applications than anything else out there.

Q: Why not use Rails? It’s got great developer productivity.
A: Rails is great for person-to-computer applications… when you’re building a chat app or something that’s person-to-person collaborative, lift’s Comet support is much more effective than Rails… plus if you have to scale your app to hundreds of users simultaneously using your app, you can do it on a single box with lift, because the JVM gives great operational benefits.

Then came the less rational questions:

Q: Why use new technology when we can outsource the coding to India and pay someone 5% of what we’re paying you?
A: Because I’m more than 20 times better than the coders you buy for 5% of my hourly rate, plus there’s a lot less management of my development efforts.

Q: Why not write it in Java to get the operational benefits of Java?
A: Prototyping is hard. Until you know what you want, you need to be agile. Java is not agile. Ruby/Rails and Scala/lift are agile. Choose one of those for prototyping and then port to Java if there’s a reason to.

Q: Will Scala be incompatible with our existing Java code?
A: No. Read my lips… no. The same guy who wrote the program (javac) that converts Java to JVM byte code wrote the program that converts Scala to JVM byte code.
There’s no one on this planet who could make a more compatible language than Martin Odersky.

Okay… flash forward a few months. Buy a Feature has gone through prototyping, revisions, and all the normal growth that software goes through. It works (yeah… it still needs more, but that’s the nature of software versions). There’s another developer (he mainly writes ECMAScript) who picked up the lift code and made a dozen modifications in the heart of the code with 2 hours of walk-through from me and 20-30 IM and email questions. He mainly does Flash and Air work and found Scala and lift pretty straightforward. We also had an occasion to have 2,000 simultaneous (as in at the same time, pounding on their keyboards) users of Buy a Feature, and we were able to, thanks to Jetty Continuations, service all 2,000 users with 2,000 open connections to our server and an average of 700 requests per second on a dual-core Opteron with a load average of around 0.24… try that with your Rails app.

One of the customers of Buy a Feature wanted it integrated into their larger, Java-powered web portal along with 2 other systems. I did the integration. The customer asks, “Where’s the Scala part?” I answer, “It’s in this JAR file.” He goes, “But your program is written in Scala; I looked at the byte-code and it’s just Java.” I answer, “It’s Scala… but it compiles down to Java byte-code, it runs in a Java debugger, and you can’t tell the difference.” “You’re right,” he says. So, to this customer’s JVM, the Scala and lift code looks, smells and tastes just like Java code. If I renamed the scala-library.jar file to apache-closures.jar, nobody would know the difference… at all. Okay… but from each set of people I talk to, I hear a similar variation on the “operational risks” of using Scala. Let’s step back for a minute. There are development and team risks to using Scala.
Some Java programmers can’t wrap their heads around the triple concepts of (1) type inference, (2) passing functions/higher-order functions and (3) immutability as the default way of writing code. Most Ruby programmers that I’ve met don’t have the above limitations. So, find a Ruby programmer who knows some Java libraries, or find a Java programmer who’s done some moonlighting with Rails or Python or JavaScript, and you’ve got a developer who can pick up Scala in a week. Yes, the tools in Scala-land are not as rich as the tools in Java-land. But, once again, anyone who can program Ruby can program in Scala. There’s a fine TextMate bundle for Scala. I use jEdit. Steve Jenson uses Emacs. Thanks to David Bernard’s continuous compilation Maven plugin, you save your file and your code is compiled. Oh… and there’s the old Eclipse plugin, which more or less works and has access to the Eclipse debugger, and the new Eclipse plugin is reported to work quite well. And then there’s the NetBeans plugin, which is still raw but getting better every week. Even with the limitation of weak IDE support, head-to-head, people can write Scala code 2 to 10 times faster than they can write Java code, and maintaining Scala code is much easier because of Scala’s strong type system and code conciseness. But, getting back to our old friend “you can’t tell it’s not Java”, I wrote a Scala program, compiled it with -g:vars (put all the symbols in the class file), started the program under jdb (the Java debugger… a little more on this later) and set a breakpoint.
This is what I got:

> Step completed: "thread=main", foo.ScalaDB$$anonfun$main$1.apply(), line=6 bci=0
> 6    args.zipWithIndex.foreach(v => println(v))

main[1] dump v
 v = {
   _2: instance of java.lang.Integer(id=463)
   _1: "Hello"
 }
main[1] where
  [1] foo.ScalaDB$$anonfun$main$1.apply (ScalaDB.scala:6)
  [2] foo.ScalaDB$$anonfun$main$1.apply (ScalaDB.scala:6)
  [3] scala.Iterator$class.foreach (Iterator.scala:387)
  [4] scala.runtime.BoxedArray$$anon$2.foreach (BoxedArray.scala:45)
  [5] scala.Iterable$class.foreach (Iterable.scala:256)
  [6] scala.runtime.BoxedArray.foreach (BoxedArray.scala:24)
  [7] foo.ScalaDB$.main (ScalaDB.scala:6)
  [8] foo.ScalaDB.main (null)
main[1] print v
 v = "(Hello,0)"
main[1]

My code worked without any fancy footwork inside the standard Java debugger. The text of the line that I was on and the variables in the local scope were all there… just as if it were a Java program. The stack traces work the same way. The symbols work the same way. Everything works the same way. Scala code looks and smells and tastes to the JVM just like Java code… now, let’s explore why. A long time ago, when Java was Oak and it was being designed as a way to distribute untrusted code to set-top boxes (and later browsers), the rules defining how a program executed, and what the meaning of the instruction set (byte codes) was, were super important. Additionally, the semantics of the program had to be such that the virtual machine running the code could (1) verify that the code was well behaved and (2) ensure that the source code and the object code had the same meaning. For example, the casting operation in Java compiles down to a byte code that checks that the value can actually be cast to the right thing, and the verifier ensures that there’s no code path that could put an unchecked value into a variable. Put another way, there’s no way to write verifiable byte code that can put a reference to a non-String into a variable that’s defined as a String.
It’s not just at the compiler level, but at the actual Virtual Machine level that object typing is enforced. In Java 1.0 days, there was nearly a 1:1 correspondence between Java language code and Java byte code. Put another way, there was only one thing you could write in Java byte code that you could not write in Java source code (it has to do with calling super in a constructor.) There was one source code file per class file. Java 1.1 introduced inner classes which broke the 1:1 relationship between Java code and byte code. One of the things that inner classes introduced was access to private instance variables by the inner class. This was done without violating the JVM’s enforcement of the privacy of private variables by creating accessor methods that were compiler enforced (but not JVM enforced) ways for the anonymous classes to access private variables. But the horse was out of the barn at this point anyway, because 1.1 brought us reflection and private was no longer private. An interesting thing about the JVM. From 1.0 through 1.6, there has not been a new instruction added to the JVM. Wow. Think about it. Java came out when the 486 was around. How many instructions have been added to Intel machines since 1995? The Microsoft CLR has been around since 2000 and has gone through 3 revisions and new instructions have been added at every revision and source code compiled under an older revision does not work with newer revisions. On the other hand, I have Java 1.1 compiled code that works just fine under Java 1.6. Pretty amazing. Even to this day, Java Generics are implemented using the same JVM byte-codes that were used in 1996. This is why you get the “type erasure” warnings. The compiler knows the type, but the JVM does not… so a List<String> looks to the JVM like a List, even though the compiler will not let you pass a List<String> to something that expects a List<URL>. On the server side, where we trust the code, this is not an issue. 
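The erasure behavior described above is easy to observe from plain Java. A quick sketch (the class and variable names here are mine, not from the original post):

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> integers = new ArrayList<>();

        // At runtime both lists are plain ArrayLists: the type
        // parameters were erased by the compiler, so the JVM sees
        // exactly the same class for both.
        System.out.println(strings.getClass() == integers.getClass()); // prints "true"
        System.out.println(strings.getClass().getName());              // prints "java.util.ArrayList"
    }
}
```

The compiler still rejects passing a `List<String>` where a `List<URL>` is expected, but that check exists only at compile time; the byte code the JVM executes is the same 1996-era `List` handling.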
If we were writing code for an untrusted world, we’d care a lot more about the semantics of the source code being enforced by the execution environment. So, there have been no new JVM instructions since Java was released. The JVM is perhaps the best specified piece of software this side of ADA-based military projects. There are specs and slow-moving JSRs for everything. Turns out, this works to our benefit. The JVM has a clearly defined interface to debugging. The information that a class file needs to provide to the JVM for line numbers, variable names, etc. is very clearly specified. Because the JVM has a limited instruction set, and the type of each item on the stack and of each instance variable in a class is known and verified when the class loads, the debugging information works for anything that compiles down to Java byte code and has the semantics of named local variables and named instance variables. Scala shares these semantics with Java and that’s why the Scala compiler can emit byte code with the appropriate debugging information so that it “just works” with jdb. And, just to be clear, jdb uses the standard, well documented interface into the JVM to do debugging, and every other IDE for the JVM uses this same interface. That means that an IDE that compiles Scala can also hook into the JVM and debug Scala. That’s why debugging works with the Scala Eclipse plugin. But, let’s go back to the statement: nobody knows Scala’s operational characteristics. That’s just not true. Scala’s operational characteristics are the same as Java’s. The Scala compiler generates byte code that is nearly identical to what the Java compiler generates. In fact, you can decompile Scala code and wind up with readable Java code, with the exception of certain constructor operations. To the JVM, Scala code and Java code are indistinguishable. The only difference is that there’s a single extra library file to support Scala. 
Now, in most software projects, you don’t have CEOs and board members and everybody’s grandmother asking what libraries you’re using. In fact, in every project I’ve stepped into, there have been at least 2 libraries that the senior developers did not add but somehow got introduced into the mix (I believe in library audits to make sure there are no license violations in the library mix.) So, in the normal course of business, libraries are added to projects all the time. Any moderately complex project depends on dozens of libraries. I can tell you to a 100% degree of certainty that there are libraries in that mix that will not pass the “is the company that supports them going to be around in 5 years?” test. Period. Sure, memcached will be around in 5 years and most of the memcached clients will. Slide on the other hand is “retired”. And Mongrel… Making the choice to use Scala should be a deliberate, deliberated, well reasoned choice. It has to do with developer productivity, both to build the initial product and to maintain the product through a 2-5 year lifecycle. It has to do with maintaining existing QA and operations infrastructure (for existing JVM shops) or moving to the most scalable, flexible, predictable, well tested, and well supported web infrastructure around: the JVM. Recruiting team members who can do Scala may be a challenge. Standardizing on a development environment may be a challenge as the Scala IDE support is immature (but there’s always emacs, vi, jEdit and Textmate, which work just fine.) Standardizing on a coding style is a challenge. These are all people challenges, all localized to recruiting and development and management thereof. The only rational parts of the debate are the trade-off between recruiting and organizing the team and the benefits to be gained from Scala. But, you say, what if Martin Odersky decides to take Scala in a wrong direction? Then freeze at Scala 2.7 or 2.8 or wherever you feel the break is rational. 
It was only last year that Kaiser moved from Java 1.3 to 1.4. Working off of 2 or 3 year old technology is normal. Running against trunk-head is not the way of an organization that’s asking the question “where will Martin take Scala in 5 years?” And oh, by the way, if Martin gets off track or Scala for some reason languishes, it’s most likely to be the same scenario as GJ (Generics Java… Martin’s prior project that turned into Java Generics)… it’s because Java 8 or Java 9 has adopted enough of Scala’s features to make Scala marginal. In that case, you spend a couple of months porting the Scala code to Java XX and in the process fix some bugs. And not to put too fine a point on it, but Martin’s team runs one of the best ISVs I’ve ever seen. They crank out a new, feature-packed release every six months or so. They respond, sometimes within hours, to bug reports. There is an active support mechanism with some of the best coders around waiting to answer questions from newbies and old hands alike. If we were to measure the Scala team on commercial standards, they’ve got a longer funding runway than any private software company around and they’re more responsive than almost every ISV, public or private. So what if they’re academic… maybe that means they’re thinking through issues rather than being code-wage slaves. Bottom line… to anyone other than the folks with hands in the code and the folks who have to recruit and manage them, “For all you know, it’s just another Java library.”

Reference: For All You Know, It’s Just a Java Library from our JCG partner David Pollak at The DPP’s Blog.

ChoiceFormat: Numeric Range Formatting

The Javadoc for the ChoiceFormat class states that ChoiceFormat “allows you to attach a format to a range of numbers” and is “generally used in a MessageFormat for handling plurals.” This post describes java.text.ChoiceFormat and provides some examples of applying it in Java code. One of the most noticeable differences between ChoiceFormat and other “format” classes in the java.text package is that ChoiceFormat does not provide static methods for accessing instances of ChoiceFormat. Instead, ChoiceFormat provides two constructors that are used for instantiating ChoiceFormat objects. The Javadoc for ChoiceFormat highlights and explains this:  ChoiceFormat differs from the other Format classes in that you create a ChoiceFormat object with a constructor (not with a getInstance style factory method). The factory methods aren’t necessary because ChoiceFormat doesn’t require any complex setup for a given locale. In fact, ChoiceFormat doesn’t implement any locale specific behavior.Constructing ChoiceFormat with Two Arrays The first of two constructors provided by ChoiceFormat accepts two arrays as its arguments. The first array is an array of primitive doubles that represent the smallest value (starting value) of each interval. The second array is an array of Strings that represent the names associated with each interval. The two arrays must have the same number of elements because there is an assumed one-to-one mapping between the numeric (double) intervals and the Strings describing those intervals. 
If the two arrays do not have the same number of elements, the following exception is encountered:

Exception in thread "main" java.lang.IllegalArgumentException: Array and limit arrays must be of the same length.

The Javadoc for the ChoiceFormat(double[], String[]) constructor states that the first array parameter is named “limits”, is of type double[], and is described as “limits in ascending order.” The second array parameter is named “formats”, is of type String[], and is described as “corresponding format strings.” According to the Javadoc, this constructor “constructs with the limits and the corresponding formats.” Use of the ChoiceFormat constructor accepting two array arguments is demonstrated in the next code listing (the writeGradeInformation method and fredsTestScores variable will be shown later).

/**
 * Demonstrate ChoiceFormat instantiated with ChoiceFormat
 * constructor that accepts an array of double and an array
 * of String.
 */
public void demonstrateChoiceFormatWithDoublesAndStringsArrays()
{
   final double[] minimumPercentages = {0, 60, 70, 80, 90};
   final String[] letterGrades = {"F", "D", "C", "B", "A"};
   final ChoiceFormat gradesFormat = new ChoiceFormat(minimumPercentages, letterGrades);
   writeGradeInformation(fredsTestScores, gradesFormat);
}

The example above satisfies the expectations of the illustrated ChoiceFormat constructor. The two arrays have the same number of elements, the first (double[]) array has its elements in ascending order, and the second (String[]) array has its “formats” in the same order as the corresponding interval-starting limits in the first array. The writeGradeInformation method referenced in the code snippet above demonstrates use of a ChoiceFormat instance based on the two arrays to “format” provided numerical values as Strings. The method’s implementation is shown next.

/**
 * Write grade information to standard output
 * using the provided ChoiceFormat instance.
 *
 * @param testScores Test Scores to be displayed with formatting.
 * @param gradesFormat ChoiceFormat instance to be used to format output.
 */
public void writeGradeInformation(
   final Collection<Double> testScores,
   final ChoiceFormat gradesFormat)
{
   double sum = 0;
   for (final Double testScore : testScores)
   {
      sum += testScore;
      out.println(testScore + " is a '" + gradesFormat.format(testScore) + "'.");
   }
   double average = sum / testScores.size();
   out.println(
        "The average score (" + average + ") is a '"
      + gradesFormat.format(average) + "'.");
}

The code above uses the ChoiceFormat instance provided to “format” test scores. Instead of printing a numeric value, the “format” prints the String associated with the interval that numeric value falls within. The next code listing shows the definition of fredsTestScores used in these examples.

private static List<Double> fredsTestScores;
static
{
   final ArrayList<Double> scores = new ArrayList<>();
   scores.add(75.6);
   scores.add(88.8);
   scores.add(97.3);
   scores.add(43.3);
   fredsTestScores = Collections.unmodifiableList(scores);
}

Running these test scores through the ChoiceFormat instance instantiated with two arrays generates the following output:

75.6 is a 'C'.
88.8 is a 'B'.
97.3 is a 'A'.
43.3 is a 'F'.
The average score (76.25) is a 'C'.

Constructing ChoiceFormat with a Pattern String

The ChoiceFormat(String) constructor that accepts a String-based pattern may be more appealing to developers who are comfortable using String-based patterns with similar formatting classes such as DateFormat and DecimalFormat. The next code listing demonstrates use of this constructor. The pattern provided to the constructor leads to an instance of ChoiceFormat that should format the same way as the ChoiceFormat instance created in the earlier example with the constructor that takes two arrays.

/**
 * Demonstrate ChoiceFormat instantiated with ChoiceFormat
 * constructor that accepts a String pattern.
 */
public void demonstrateChoiceFormatWithStringPattern()
{
   final String limitFormatPattern = "0#F | 60#D | 70#C | 80#B | 90#A";
   final ChoiceFormat gradesFormat = new ChoiceFormat(limitFormatPattern);
   writeGradeInformation(fredsTestScores, gradesFormat);
}

The writeGradeInformation method called here is the same as the one called earlier and the output is also the same (not shown here because it is the same).

ChoiceFormat Behavior on the Extremes and Boundaries

The examples so far have worked well with test scores in the expected ranges. Another set of test scores will now be used to demonstrate some other features of ChoiceFormat. This new set of test scores is set up in the next code listing and includes an “impossible” negative score and another “likely impossible” score above 100.

private static List<Double> boundaryTestScores;
static
{
   final ArrayList<Double> boundaryScores = new ArrayList<Double>();
   boundaryScores.add(-25.0);
   boundaryScores.add(0.0);
   boundaryScores.add(20.0);
   boundaryScores.add(60.0);
   boundaryScores.add(70.0);
   boundaryScores.add(80.0);
   boundaryScores.add(90.0);
   boundaryScores.add(100.0);
   boundaryScores.add(115.0);
   boundaryTestScores = boundaryScores;
}

When the set of test scores above is run through either of the ChoiceFormat instances created earlier, the output is as shown next.

-25.0 is a 'F '.
0.0 is a 'F '.
20.0 is a 'F '.
60.0 is a 'D '.
70.0 is a 'C '.
80.0 is a 'B '.
90.0 is a 'A'.
100.0 is a 'A'.
115.0 is a 'A'.
The average score (56.666666666666664) is a 'F '.

The output just shown demonstrates that the “limits” set in the ChoiceFormat constructors are “inclusive,” meaning that each limit applies to values at the specified limit and above (until the next limit). In other words, each range is defined as greater than or equal to the specified limit. 
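The inclusive-limit rule can be verified directly with values sitting right on and just below a limit; a minimal sketch reusing the grade limits from the listings above:

```java
import java.text.ChoiceFormat;

public class InclusiveLimitsDemo {
    public static void main(String[] args) {
        ChoiceFormat grades = new ChoiceFormat(
            new double[] {0, 60, 70, 80, 90},
            new String[] {"F", "D", "C", "B", "A"});

        // A limit matches values greater than or equal to itself,
        // up to (but not including) the next limit.
        System.out.println(grades.format(60.0));  // prints "D" (60 itself is in the D range)
        System.out.println(grades.format(59.99)); // prints "F" (just below the D limit)
        System.out.println(grades.format(69.99)); // prints "D" (just below the C limit)
    }
}
```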
The Javadoc documentation for ChoiceFormat describes this with a mathematical description:

X matches j if and only if limit[j] ≤ X < limit[j+1]

The output from the boundary test scores example also demonstrates another characteristic of ChoiceFormat described in its Javadoc documentation: “If there is no match, then either the first or last index is used, depending on whether the number (X) is too low or too high.” Because there is no match for -25.0 in the provided ChoiceFormat instances, the lowest range (‘F’ for limit of 0) is applied to that number below the lowest limit. In these test score examples, there is no limit specified higher than the “90” for an “A”, so all scores higher than 90 (including those above 100) are an “A”. Let’s suppose that we wanted to enforce the ranges of scores to be between 0 and 100 or else have the formatted result indicate “Invalid” for scores less than 0 or greater than 100. This can be done as shown in the next code listing.

/**
 * Demonstrating enforcing of lower and upper boundaries
 * with ChoiceFormat instances.
 */
public void demonstrateChoiceFormatBoundariesEnforced()
{
   // Demonstrating boundary enforcement with ChoiceFormat(double[], String[])
   final double[] minimumPercentages =
      {Double.NEGATIVE_INFINITY, 0, 60, 70, 80, 90, 100.000001};
   final String[] letterGrades =
      {"Invalid - Too Low", "F", "D", "C", "B", "A", "Invalid - Too High"};
   final ChoiceFormat gradesFormat = new ChoiceFormat(minimumPercentages, letterGrades);
   writeGradeInformation(boundaryTestScores, gradesFormat);

   // Demonstrating boundary enforcement with ChoiceFormat(String)
   final String limitFormatPattern =
      "-\u221E#Invalid - Too Low | 0#F | 60#D | 70#C | 80#B | 90#A | 100.0<Invalid - Too High";
   final ChoiceFormat gradesFormat2 = new ChoiceFormat(limitFormatPattern);
   writeGradeInformation(boundaryTestScores, gradesFormat2);
}

When the above method is executed, its output shows that both approaches enforce boundary conditions better. 
-25.0 is a 'Invalid - Too Low'.
0.0 is a 'F'.
20.0 is a 'F'.
60.0 is a 'D'.
70.0 is a 'C'.
80.0 is a 'B'.
90.0 is a 'A'.
100.0 is a 'A'.
115.0 is a 'Invalid - Too High'.
The average score (56.666666666666664) is a 'F'.

-25.0 is a 'Invalid - Too Low '.
0.0 is a 'F '.
20.0 is a 'F '.
60.0 is a 'D '.
70.0 is a 'C '.
80.0 is a 'B '.
90.0 is a 'A '.
100.0 is a 'A '.
115.0 is a 'Invalid - Too High'.
The average score (56.666666666666664) is a 'F '.

The last code listing demonstrates using Double.NEGATIVE_INFINITY and \u221E (Unicode INFINITY character) to establish a lowest possible limit boundary in each of the examples. For scores above 100.0 to be formatted as invalid, the arrays-based ChoiceFormat uses a number slightly bigger than 100 as the lower limit of that invalid range. The String/pattern-based ChoiceFormat instance provides greater flexibility and exactness in specifying the lower limit of the “Invalid – Too High” range as any number greater than 100.0 using the less-than symbol (<).

Handling None, Singular, and Plural with ChoiceFormat

I opened this post by quoting the Javadoc stating that ChoiceFormat is “generally used in a MessageFormat for handling plurals,” but have not yet demonstrated this common use in this post. I will demonstrate a portion of this (plurals without MessageFormat) very briefly here for completeness, but a much more complete explanation (plurals with MessageFormat) of this common usage of ChoiceFormat is available in the Java Tutorials‘ Handling Plurals lesson (part of the Internationalization trail). The next code listing demonstrates application of ChoiceFormat to handle singular and plural cases.

/**
 * Demonstrate ChoiceFormat used for differentiation of
 * singular from plural and none.
 */
public void demonstratePluralAndSingular()
{
   final double[] cactiLowerLimits = {0, 1, 2, 3, 4, 10};
   final String[] cactiRangeDescriptions =
      {"no cacti", "a cactus", "a couple cacti", "a few cacti", "many cacti", "a plethora of cacti"};
   final ChoiceFormat cactiFormat = new ChoiceFormat(cactiLowerLimits, cactiRangeDescriptions);
   for (int cactiCount = 0; cactiCount < 11; cactiCount++)
   {
      out.println(cactiCount + ": I own " + cactiFormat.format(cactiCount) + ".");
   }
}

Running the example in the last code listing leads to the output shown next.

0: I own no cacti.
1: I own a cactus.
2: I own a couple cacti.
3: I own a few cacti.
4: I own many cacti.
5: I own many cacti.
6: I own many cacti.
7: I own many cacti.
8: I own many cacti.
9: I own many cacti.
10: I own a plethora of cacti.

One Final Symbol Supported by ChoiceFormat’s Pattern

One other symbol that ChoiceFormat pattern parsing recognizes for formatting strings from a generated numeric value is \u2264 (≤). This is demonstrated in the next code listing, with the output for that code following the listing. Note that in this example \u2264 works effectively the same as using the simpler # sign shown earlier.

/**
 * Demonstrate using \u2264 in String pattern for ChoiceFormat
 * to represent >= sign. Treated differently than less-than
 * sign but similarly to #.
 */
public void demonstrateLessThanOrEquals()
{
   final String limitFormatPattern = "0\u2264F | 60\u2264D | 70\u2264C | 80\u2264B | 90\u2264A";
   final ChoiceFormat gradesFormat = new ChoiceFormat(limitFormatPattern);
   writeGradeInformation(fredsTestScores, gradesFormat);
}

75.6 is a 'C '.
88.8 is a 'B '.
97.3 is a 'A'.
43.3 is a 'F '.
The average score (76.25) is a 'C '. 
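The “generally used in a MessageFormat” pairing quoted at the start of the post can be sketched briefly as well. This is a minimal sketch in the spirit of the MessageFormat Javadoc (the message text and counts here are my own, not from the original post):

```java
import java.text.ChoiceFormat;
import java.text.MessageFormat;

public class PluralMessageDemo {
    public static void main(String[] args) {
        MessageFormat message = new MessageFormat("There {0} on the disk.");
        // The third format string contains a nested pattern, which
        // MessageFormat recursively applies to the same argument.
        ChoiceFormat fileCountFormat = new ChoiceFormat(
            new double[] {0, 1, 2},
            new String[] {"are no files", "is one file", "are {0,number,integer} files"});
        // Attach the ChoiceFormat to argument {0} of the MessageFormat.
        message.setFormatByArgumentIndex(0, fileCountFormat);

        for (int count : new int[] {0, 1, 3}) {
            System.out.println(message.format(new Object[] {count}));
        }
    }
}
```

This prints "There are no files on the disk.", "There is one file on the disk." and "There are 3 files on the disk." in turn; the Handling Plurals lesson linked below covers this combination in depth.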
Observations in Review

In this section, I summarize some of the observations regarding ChoiceFormat made during the course of this post and its examples.

- When using the ChoiceFormat(double[], String[]) constructor, the two passed-in arrays must be of equal size or else an IllegalArgumentException (“Array and limit arrays must be of the same length.”) will be thrown.
- The “limits” double[] array provided to the ChoiceFormat(double[], String[]) constructor should have the limits listed from left-to-right in ascending numerical order. When this is not the case, no exception is thrown, but the logic is almost certainly not going to be correct as Strings being formatted against the instance of ChoiceFormat will “match” incorrectly. This same expectation applies to the constructor accepting a pattern.
- ChoiceFormat allows Double.POSITIVE_INFINITY and Double.NEGATIVE_INFINITY to be used for specifying lower range limits via its two-arrays constructor.
- ChoiceFormat allows \u221E and -\u221E to be used for specifying lower range limits via its single String (pattern) constructor.
- The ChoiceFormat constructor accepting a String pattern is a bit more flexible than the two-arrays constructor and allows one to specify lower limit boundaries as everything over a certain amount without including that certain amount exactly.
- Symbols and characters with special meaning in the String patterns provided to the single String ChoiceFormat constructor include #, <, \u2264 (≤), \u221E (∞), and |.

Conclusion

ChoiceFormat allows formatting of numeric ranges to be customized so that specific ranges can have different and specific representations. This post has covered several different aspects of numeric range formatting with ChoiceFormat, but parsing numeric ranges from Strings using ChoiceFormat was not covered in this post. 
Further Reading

- ChoiceFormat API Documentation
- Handling Plurals
- Text: Freedom with Message Format – Part 2: Choice Format
- Java i18n Pluralisation using ChoiceFormat
- What’s wrong with ChoiceFormat? (Lost in translation – part IV)
- More about what’s wrong with ChoiceFormat

Reference: ChoiceFormat: Numeric Range Formatting from our JCG partner Dustin Marx at the Inspired by Actual Events blog.

Reduce Boilerplate Code in your Java applications with Project Lombok

One of the most frequently voiced criticisms of the Java programming language is the amount of Boilerplate Code it requires. This is especially true for simple classes that should do nothing more than store a few values. You need getters and setters for these values, maybe you also need a constructor, overriding equals() and hashCode() is often required and maybe you want a more useful toString() implementation. In the end you might have 100 lines of code that could be rewritten with 10 lines of Scala or Groovy code. Java IDEs like Eclipse or IntelliJ try to reduce this problem by providing various types of code generation functionality. However, even if you do not have to write the code yourself, you always see it (and get distracted by it) if you open such a file in your IDE. Project Lombok (don’t be frightened by the ugly web page) is a small Java library that can help reduce the amount of Boilerplate Code in Java applications. Project Lombok provides a set of annotations that are processed at development time to inject code into your Java application. The injected code is immediately available in your development environment. Let’s have a look at the following Eclipse screenshot:

The defined class is annotated with Lombok’s @Data annotation and does not contain anything more than three private fields. @Data automatically injects getters, setters (for non-final fields), equals(), hashCode(), toString() and a constructor for initializing the final dateOfBirth field. As you can see, the generated methods are directly available in Eclipse and shown in the Outline view.

Setup

To set up Lombok for your application you have to put lombok.jar on your classpath. 
If you are using Maven you just have to add the following dependency to your pom.xml:

<dependency>
  <groupId>org.projectlombok</groupId>
  <artifactId>lombok</artifactId>
  <version>1.14.6</version>
  <scope>provided</scope>
</dependency>

You also need to set up Lombok in the IDE you are using:

- NetBeans users just have to enable the Enable Annotation Processing in Editor option in their project properties (see: NetBeans instructions).
- Eclipse users can install Lombok by double clicking lombok.jar and following a quick installation wizard.
- For IntelliJ a Lombok Plugin is available.

Getting started

The @Data annotation shown in the introduction is actually a shortcut for various other Lombok annotations. Sometimes @Data does too much. In this case, you can fall back to more specific Lombok annotations that give you more flexibility. Generating only getters and setters can be achieved with @Getter and @Setter:

@Getter
@Setter
public class Person {
  private final LocalDate birthday;
  private String firstName;
  private String lastName;

  public Person(LocalDate birthday) {
    this.birthday = birthday;
  }
}

Note that getter methods for boolean fields are prefixed with is instead of get (e.g. isFoo() instead of getFoo()). If you only want to generate getters and setters for specific fields you can annotate these fields instead of the class. Generating equals(), hashCode() and toString():

@EqualsAndHashCode
@ToString
public class Person {
  ...
}

@EqualsAndHashCode and @ToString also have various properties that can be used to customize their behaviour:

@EqualsAndHashCode(exclude = {"firstName"})
@ToString(callSuper = true, of = {"firstName", "lastName"})
public class Person {
  ...
}

Here the field firstName will not be considered by equals() and hashCode(). toString() will call super.toString() first and only consider firstName and lastName. 
For constructor generation multiple annotations are available:

- @NoArgsConstructor generates a constructor that takes no arguments (default constructor).
- @RequiredArgsConstructor generates a constructor with one parameter for all non-initialized final fields.
- @AllArgsConstructor generates a constructor with one parameter for all fields in the class.

The @Data annotation is actually an often used shortcut for @ToString, @EqualsAndHashCode, @Getter, @Setter and @RequiredArgsConstructor. If you prefer immutable classes you can use @Value instead of @Data:

@Value
public class Person {
  LocalDate birthday;
  String firstName;
  String lastName;
}

@Value is a shortcut for @ToString, @EqualsAndHashCode, @AllArgsConstructor, @FieldDefaults(makeFinal = true, level = AccessLevel.PRIVATE) and @Getter. So, with @Value you get toString(), equals(), hashCode(), getters and a constructor with one parameter for each field. It also makes all fields private and final by default, so you do not have to add private or final modifiers.

Looking into Lombok’s experimental features

Besides the well supported annotations shown so far, Lombok has a couple of experimental features that can be found on the Experimental Features page. One of these features I like in particular is the @Builder annotation, which provides an implementation of the Builder Pattern.

@Builder
public class Person {
  private final LocalDate birthday;
  private String firstName;
  private String lastName;
}

@Builder generates a static builder() method that returns a builder instance. This builder instance can be used to build an object of the class annotated with @Builder (here Person):

Person p = Person.builder()
  .birthday(LocalDate.of(1980, 10, 5))
  .firstName("John")
  .lastName("Smith")
  .build();

By the way, if you wonder what this LocalDate class is, you should have a look at my blog post about the Java 8 date and time API! 
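For a sense of what these annotations save, here is roughly the hand-written code a class needs without Lombok (a plain-Java sketch of the kind of members Lombok generates, simplified to two String fields so it stands alone; the class name is mine):

```java
import java.util.Objects;

// Hand-written equivalents of the getters, setters, equals(),
// hashCode() and toString() that Lombok would inject.
public class PersonBoilerplate {
    private String firstName;
    private String lastName;

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof PersonBoilerplate)) return false;
        PersonBoilerplate other = (PersonBoilerplate) o;
        return Objects.equals(firstName, other.firstName)
            && Objects.equals(lastName, other.lastName);
    }

    @Override
    public int hashCode() {
        return Objects.hash(firstName, lastName);
    }

    @Override
    public String toString() {
        return "PersonBoilerplate(firstName=" + firstName + ", lastName=" + lastName + ")";
    }
}
```

With Lombok, all of this collapses to a single @Data annotation (or the more targeted @Getter/@Setter plus @EqualsAndHashCode/@ToString combination shown above).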
Conclusion

Project Lombok injects generated methods, like getters and setters, based on annotations. It provides an easy way to significantly reduce the amount of Boilerplate Code in Java applications. Be aware that there is a downside: according to reddit comments (including a comment from the project author), Lombok has to rely on various hacks to get the job done. So, there is a chance that future JDK or IDE releases will break the functionality of Project Lombok. On the other hand, those comments were made 5 years ago and Project Lombok is still actively maintained.

You can find the source of Project Lombok on GitHub.

Reference: Reduce Boilerplate Code in your Java applications with Project Lombok from our JCG partner Michael Scharhag at the mscharhag, Programming and Stuff blog.

3 Essential Ways To Start Your JBoss BPM Process

This episode of tips and tricks will help you to understand the best way to initiate your process instances for your needs. Planning your projects might include process projects, but have you thought about the various ways that you can initiate your process? Maybe you have JBoss BPM Suite running locally in your architecture, maybe you have it running in the Cloud, but wherever it is you will still need to make an informed choice about how to initiate a process. We will cover here three essential ways you can best start a JBoss BPM process:

- UI dashboard
- RestAPI
- client application (API)

BPM Suite UI

In the interest of completeness we have to mention that the ability to start a process instance exists in the form of a button within the JBoss BPM Suite dashboard tooling. When logged into JBoss BPM Suite and you have finished project development, your BPM project can then be built and deployed as follows.

AUTHORING -> PROJECT AUTHORING -> TOOLS -> PROJECT EDITOR -> BUILD&DEPLOY (button)

The next step is to start a process instance in the process management perspective in one of two ways.

1. PROCESS MANAGEMENT -> PROCESS DEFINITIONS -> start-icon
2. PROCESS MANAGEMENT -> PROCESS DEFINITIONS -> magnifying-glass-icon -> in DETAILS panel -> NEW INSTANCE (button)

Both of these methods will result in a process instance being started, popping up a start form if data is to be submitted to the BPM process.

RestAPI

Assuming you are going to be calling for a start of your BPM process after deployment from various possible locations, we wanted to show you how these might be easily integrated. It does not matter if you are starting a process from a web application, a mobile application or creating backend services for your enterprise to use as a starting point for processes. The exposed RestAPI provides the perfect way to trigger your BPM process, as shown in the following code example. 
This example is a very simple Rest client that, for clarity, embeds the various variables one might pass to such a client directly into the example code. There are no variables passed to the process being started; for that we will provide a more complete example in the section covering a client application. It sends a start process command and expects no feedback from the Customer Evaluation BPM process being called, as it is a Straight Through Process (STP).

public class RestClientSimple {
    private static final String BASE_URL = "http://localhost:8080/business-central/rest/";
    private static final String AUTH_URL = "http://localhost:8080/business-central/org.kie.workbench.KIEWebapp/j_security_check";
    private static final String DEPLOYMENT_ID = "customer:evaluation:1.0";
    private static final String PROCESS_DEF_ID = "customer.evaluation";
    private static String username = "erics";
    private static String password = "bpmsuite";
    private static AuthenticationType type = AuthenticationType.FORM_BASED;

    public static void main(String[] args) throws Exception {
        System.out.println("Starting process instance: " + DEPLOYMENT_ID);
        System.out.println();

        // start a process instance with no variables.
        startProcess();

        System.out.println();
        System.out.println("Completed process instance: " + DEPLOYMENT_ID);
    }

    /**
     * Start a process using the rest api start call, no map variables passed.
     *
     * @throws Exception
     */
    public static void startProcess() throws Exception {
        String newInstanceUrl = BASE_URL + "runtime/" + DEPLOYMENT_ID + "/process/" + PROCESS_DEF_ID + "/start";
        String dataFromService = getDataFromService(newInstanceUrl, "POST");
        System.out.println("newInstanceUrl:[" + newInstanceUrl + "]");
        System.out.println("--------");
        System.out.println(dataFromService);
        System.out.println("--------");
    }

    <...SNIPPED MORE CODE...>
}

The basics here are the setup of the business central URL to point to the start RestAPI call. 
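The snipped getDataFromService(...) helper is not shown in the article. A minimal standalone sketch of the two pieces such a helper needs (building the start URL and an HTTP Basic authentication header) might look like the following; the class and method names are mine, and a real deployment may instead use form-based authentication as the AUTH_URL above suggests:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class RestClientSketch {
    // Build the RestAPI URL that starts a process instance.
    public static String buildStartUrl(String baseUrl, String deploymentId, String processDefId) {
        return baseUrl + "runtime/" + deploymentId + "/process/" + processDefId + "/start";
    }

    // Build an HTTP Basic authentication header value from user credentials.
    public static String basicAuthHeader(String username, String password) {
        String credentials = username + ":" + password;
        return "Basic " + Base64.getEncoder()
            .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        String url = buildStartUrl("http://localhost:8080/business-central/rest/",
                "customer:evaluation:1.0", "customer.evaluation");
        System.out.println("POST " + url);
        System.out.println("Authorization: " + basicAuthHeader("erics", "bpmsuite"));
        // An HttpURLConnection POST to 'url' carrying this header would then
        // issue the start request; response handling is omitted in this sketch.
    }
}
```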
In the main method one finds a method call to startProcess(), which builds the RestAPI URL and captures the data reply sent from JBoss BPM Suite. To see the details of how that is accomplished, please refer to the class in its entirety within the JBoss BPM Suite and JBoss Fuse Integration Demo project.

Intermezzo on testing

An easy way to test your process once it has been built and deployed is to use curl to push a request to the process via the RestAPI. Such a request looks like the following, first in generic form and then a real run through the same Customer Evaluation project as used in the previous example. The generic RestAPI call and proper authentication request is done in curl as follows:

```shell
$ curl -X POST -H 'Accept: application/json' -uerics 'http://localhost:8080/business-central/rest/runtime/customer:evaluation:1.1/process/customer.evaluation/start?map_par1=var1&map_par2=var2'
```

For the Customer Evaluation process a full cycle of using curl to call the start process, authenticating our user and receiving a response from JBoss BPM Suite should provide the following output:

```shell
$ curl -X POST -H 'Accept: application/json' -uerics 'http://localhost:8080/business-central/rest/runtime/customer:evaluation:1.1/process/customer.evaluation/start?map_employee=erics'
Enter host password for user 'erics': bpmsuite1!

{"status":"SUCCESS","url":"http://localhost:8080/business-central/rest/runtime/customer:evaluation:1.1/process/customer.evaluation/start?map_employee=erics","index":null,"commandName":null,"processId":"customer.evaluation","id":3,"state":2,"eventTypes":[]}
```

We see the process instances complete in the process instance perspectives as shown.

Client application

The third and final way to start your JBoss BPM Suite process instances is more geared towards injecting a batch of pre-defined submissions, which populate the reporting history and could be based on historical data.
The example shown here is available in most demo projects we provide, but is taken from the Mortgage Demo project. This demo client uses static lines of data that are injected into the process one at a time. With a few minor adjustments one could pull in historical data from an existing data source and inject as many processes as desired in this format. It is also a nice way to stress test your process projects. We will skip the setup of the session and process details, as these have been shown above, and instead provide a link to the entire demo client class, leaving those details for the reader to pursue. Here we will just focus on how the individual start process calls look:

```java
public static void populateSamples(String userId, String password, String applicationContext, String deploymentId) {

    RuntimeEngine runtimeEngine = getRuntimeEngine(applicationContext, deploymentId, userId, password);
    KieSession kieSession = runtimeEngine.getKieSession();
    Map processVariables;

    // qualify with very low interest rate, great credit, non-jumbo loan
    processVariables = getProcessArgs("Amy", "12301 Wilshire", 333224449, 100000, 500000, 100000, 30);
    kieSession.startProcess("com.redhat.bpms.examples.mortgage.MortgageApplication", processVariables);
}
```

As you can see, the last line is where the individual mortgage submission is pushed to JBoss BPM Suite. If you examine the rest of the class you will find multiple entries being started one after another. We hope you now have a good understanding of the ways you can initiate a process, so you can choose the one that best suits your project needs.

Reference: 3 Essential Ways To Start Your JBoss BPM Process from our JCG partner Eric Schabell at the Eric Schabell's blog....
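The `getProcessArgs` helper is not shown in the snippet above. A plausible sketch of it simply packs the positional sample values into the variable map the process expects. Note that the map keys below are my assumptions for illustration, not necessarily the Mortgage Demo's actual variable names:

```java
import java.util.HashMap;
import java.util.Map;

public class MortgageArgsSketch {

    // Hypothetical reconstruction of the demo's getProcessArgs(...) helper:
    // it packs the sample values into the process variable map.
    static Map<String, Object> getProcessArgs(String name, String address, int ssn,
            int annualIncome, int amount, int downPayment, int amortizationYears) {
        Map<String, Object> vars = new HashMap<>();
        vars.put("name", name);
        vars.put("address", address);
        vars.put("ssn", ssn);
        vars.put("annualIncome", annualIncome);
        vars.put("amount", amount);
        vars.put("downPayment", downPayment);
        vars.put("amortization", amortizationYears);
        return vars;
    }
}
```

Called as `getProcessArgs("Amy", "12301 Wilshire", 333224449, 100000, 500000, 100000, 30)`, this yields the map handed to `kieSession.startProcess(...)`, which is exactly where a loop over rows of historical data would plug in.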

Common Mistakes Junior Developers Do When Writing Unit Tests

It's been 10 years since I wrote my first unit test. Since then, I can't remember how many thousands of unit tests I've written. To be honest I don't make any distinction between source code and test code. For me it's the same thing: test code is part of the source code. The last 3-4 years, I've worked with several development teams and I had the chance to review a lot of test code. In this post I'm summarizing the most common mistakes that inexperienced developers usually make when writing unit tests.

Let's take a look at the following simple example of a class that collects registration data, validates them and performs a user registration. Clearly the method is extremely simple and its purpose is to demonstrate the common mistakes of unit tests and not to provide a fully functional registration example:

```java
public class RegistrationForm {

    private String name, email, pwd, pwdVerification;

    // Setters - Getters are omitted

    public boolean register() {
        validate();
        return doRegister();
    }

    private void validate() {
        check(name, "name");
        check(email, "email");
        check(pwd, "password");
        check(pwdVerification, "password verification");
        if (!email.contains("@")) {
            throw new ValidationException("email is not valid.");
        }
        if (!pwd.equals(pwdVerification)) {
            throw new ValidationException("Passwords do not match.");
        }
    }

    private void check(String value, String name) throws ValidationException {
        if (value == null) {
            throw new ValidationException(name + " cannot be empty.");
        }
        if (value.length() == 0) {
            throw new ValidationException(name + " is too short.");
        }
    }

    private boolean doRegister() {
        // Do something with the persistent context
        return true;
    }
}
```

Here's a corresponding unit test for the register method that intentionally shows the most common mistakes in unit testing.
Actually I've seen very similar test code many times, so it's not what I'd call science fiction:

```java
@Test
public void test_register() {
    RegistrationForm form = new RegistrationForm();
    form.setEmail("Al.Pacino@example.com");
    form.setName("Al Pacino");
    form.setPwd("GodFather");
    form.setPwdVerification("GodFather");
    assertNotNull(form.getEmail());
    assertNotNull(form.getName());
    assertNotNull(form.getPwd());
    assertNotNull(form.getPwdVerification());
    form.register();
}
```

Now, this test will obviously pass, the developer will see the green light, so thumbs up! Let's move to the next method. However, this test code has several important issues.

The first one, which is in my humble opinion the biggest misuse of unit tests, is that the test code is not adequately testing the register method. Actually it tests only one out of many possible paths. Are we sure that the method will correctly handle null arguments? How will the method behave if the email doesn't contain the @ character, or if the passwords don't match? Developers tend to write unit tests only for the successful paths, and my experience has shown that most of the bugs discovered in code are not related to the successful paths. A very good rule to remember is that for every method you need N tests, where N equals the cyclomatic complexity of the method plus the cyclomatic complexity of all private method calls.

Next is the name of the test method. For this one I partially blame all these modern IDEs that auto-generate stupid names for test methods like the one in the example. The test method should be named in such a way that it explains to the reader what is going to be tested and under which conditions. In other words, it should describe the path under testing. In our case a better name could be: should_register_when_all_registration_data_are_valid.
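To make the path-coverage rule concrete, here is a sketch of two of the missing failure-path tests, each named after the condition it exercises. A trimmed copy of the form's validation logic is inlined so the sketch is self-contained; in the real code base you would of course test RegistrationForm itself:

```java
public class RegistrationFormPathTests {

    static class ValidationException extends RuntimeException {
        ValidationException(String msg) { super(msg); }
    }

    // Trimmed-down copy of the validation logic under test.
    static class RegistrationForm {
        String email, pwd, pwdVerification;

        boolean register() {
            if (email == null) throw new ValidationException("email cannot be empty.");
            if (!email.contains("@")) throw new ValidationException("email is not valid.");
            if (!pwd.equals(pwdVerification)) throw new ValidationException("Passwords do not match.");
            return true;
        }
    }

    // should_throw_validation_exception_when_email_is_missing
    static boolean emailMissingPathIsRejected() {
        RegistrationForm form = new RegistrationForm();
        form.pwd = "GodFather";
        form.pwdVerification = "GodFather";
        try {
            form.register();
            return false; // a missing email slipped through: this path is broken
        } catch (ValidationException expected) {
            return true;
        }
    }

    // should_throw_validation_exception_when_passwords_do_not_match
    static boolean passwordMismatchPathIsRejected() {
        RegistrationForm form = new RegistrationForm();
        form.email = "Al.Pacino@example.com";
        form.pwd = "GodFather";
        form.pwdVerification = "GodFather2";
        try {
            form.register();
            return false;
        } catch (ValidationException expected) {
            return true;
        }
    }
}
```

In a JUnit test class, each of these try/catch pairs would become an @Test(expected = ValidationException.class) method carrying the corresponding "should" name.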
In this article you can find several approaches to naming unit tests, but for me the 'should' pattern is the closest to human language and the easiest to understand when reading test code.

Now let's see the meat of the code. There are several assertions, and this violates the rule that each test method should assert one and only one thing. This one asserts the state of four (4) RegistrationForm attributes. That makes the test harder to maintain and read (oh yes, test code should be maintainable and readable just like the source code; remember that for me there's no distinction between them) and it makes it difficult to understand which part of the test fails.

This test code also asserts setters/getters. Is this really necessary? To answer that I will quote Roy Osherove from his famous book, "The Art of Unit Testing":

Properties (getters/setters in Java) are good examples of code that usually doesn't contain any logic, and doesn't require testing. But watch out: once you add any check inside the property, you'll want to make sure that logic is being tested.

In our case there's no business logic in our setters/getters, so these assertions are completely useless. Moreover, they are wrong, because they don't even test the correctness of the setter. Imagine that an evil developer changes the code of the getEmail method to always return a constant String instead of the email attribute value. The test will still pass, because it only asserts that the value is not null; it never asserts the expected value. So here's a rule you might want to remember: always try to be as specific as you can when you assert the return value of a method. In other words, try to avoid assertIsNull and assertIsNotNull unless you don't care about the actual return value.

The last but not least problem with the test code we're looking at is that the actual method under test (register) is never asserted. It's called inside the test method, but we never evaluate its result.
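The "evil developer" scenario above is easy to demonstrate. In this plain-Java sketch (JUnit's assertEquals would express the specific check more idiomatically), the getter has been sabotaged to return a constant; the weak non-null check still passes, while the specific check catches the regression:

```java
public class SpecificAssertionSketch {

    // Sabotaged getter: an "evil developer" made it return a constant
    // instead of the email attribute that was set.
    static String getEmail() {
        return "hardcoded@example.com";
    }

    public static void main(String[] args) {
        // Weak: passes for ANY non-null value, so it cannot catch the regression.
        boolean weakCheckPasses = getEmail() != null;

        // Specific: fails as soon as the getter stops returning what was set.
        boolean specificCheckPasses = "Al.Pacino@example.com".equals(getEmail());

        System.out.println("weak assertion passes: " + weakCheckPasses);         // true
        System.out.println("specific assertion passes: " + specificCheckPasses); // false
    }
}
```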
A variation of this anti-pattern is even worse: the method under test is not even invoked in the test case. So just keep in mind that you should not only invoke the method under test but also always assert the expected result, even if it's just a Boolean value. One might ask: "What about void methods?" Nice question, but this is another discussion, maybe another post. To give you a couple of tips: testing of a void method might hide a bad design, or it should be done using a framework that verifies method invocations (such as Mockito.verify).

As a bonus, here's a final rule you should remember. Imagine that doRegister is actually implemented and does some real work with an external database. What will happen if a developer who has no database installed in her local environment tries to run the test? Correct! Everything will fail. Make sure that your test will behave the same even if it runs from the dumbest terminal that has access only to the code and the JDK. No network, no services, no databases, no file system. Nothing!

Reference: Common Mistakes Junior Developers Do When Writing Unit Tests from our JCG partner Patroklos Papapetrou at the Only Software matters blog....

Preventing lost updates in long conversations

Introduction

All database statements are executed within the context of a physical transaction, even when we don't explicitly declare transaction boundaries (BEGIN/COMMIT/ROLLBACK). Data integrity is enforced by the ACID properties of database transactions.

Logical vs Physical transactions

A logical transaction is an application-level unit of work that may span multiple physical (database) transactions. Holding the database connection open throughout several user requests, including user think time, is definitely an anti-pattern. A database server can accommodate a limited number of physical connections, and often those are reused through connection pooling. Holding limited resources for long periods of time hinders scalability. So database transactions must be short, to ensure that both database locks and the pooled connections are released as soon as possible.

Web applications entail a read-modify-write conversational pattern. A web conversation consists of multiple user requests, all operations being logically connected to the same application-level transaction. A typical use case goes like this:

1. Alice requests a certain product to be displayed
2. The product is fetched from the database and returned to the browser
3. Alice requests a product modification
4. The product is updated and saved to the database

All these operations should be encapsulated in a single unit of work. We therefore need an application-level transaction that's also ACID compliant, because other concurrent users might modify the same entities long after shared locks have been released.

In my previous post I introduced the perils of lost updates. The database transaction ACID properties can only prevent this phenomenon within the boundaries of a single physical transaction. Pushing transaction boundaries into the application layer requires application-level ACID guarantees. To prevent lost updates we must have application-level repeatable reads along with an application-level concurrency control mechanism.
Long conversations

HTTP is a stateless protocol. Stateless applications are always easier to scale than stateful ones, but conversations can't be stateless. Hibernate offers two strategies for implementing long conversations:

- Extended persistence context
- Detached objects

Extended persistence context

After the first database transaction ends, the JDBC connection is closed (usually going back to the connection pool) and the Hibernate session becomes disconnected. A new user request will reattach the original Session. Only the last physical transaction must issue DML operations, as otherwise the application-level transaction would not be an atomic unit of work. To disable persistence in the course of the application-level transaction, we have the following options:

- We can disable automatic flushing by switching the Session FlushMode to MANUAL. At the end of the last physical transaction, we need to explicitly call Session#flush() to propagate the entity state transitions.
- All but the last transaction are marked read-only. For read-only transactions Hibernate disables both dirty checking and the default automatic flushing. The read-only flag might propagate to the underlying JDBC Connection, so the driver might enable some database-level read-only optimizations. The last transaction must be writeable, so that all changes are flushed and committed.

Using an extended persistence context is more convenient, since entities remain attached across multiple user requests. The downside is the memory footprint: the persistence context might easily grow with every newly fetched entity. Hibernate's default dirty checking mechanism uses a deep-comparison strategy, comparing all properties of all managed entities. The larger the persistence context, the slower the dirty checking mechanism gets. This can be mitigated by evicting entities that don't need to be propagated to the last physical transaction.
Java Enterprise Edition offers a very convenient programming model through the use of @Stateful Session Beans along with an EXTENDED PersistenceContext. All extended persistence context examples set the default transaction propagation to NOT_SUPPORTED, which makes it uncertain whether the queries are enrolled in the context of a local transaction or each query is executed in a separate database transaction.

Detached objects

Another option is to bind the persistence context to the life-cycle of the intermediate physical transaction. Upon persistence context closing, all entities become detached. For a detached entity to become managed again, we have two options:

- The entity can be reattached using the Hibernate-specific Session.update() method. If there's an already attached entity (same entity class and same identifier), Hibernate throws an exception, because a Session can hold at most one reference to any given entity. There is no such equivalent in the Java Persistence API.
- Detached entities can also be merged with their persistent object equivalent. If there's no currently loaded persistent object, Hibernate will load one from the database. The detached entity itself will not become managed. By now you should know that this pattern smells like trouble: what if the loaded data doesn't match what we have previously loaded? What if the entity has changed since we first loaded it? Overwriting new data with an older snapshot leads to lost updates. So skipping the concurrency control mechanism is not an option when dealing with long conversations. Both Hibernate and JPA offer entity merging.

Detached entities storage

The detached entities must be available throughout the lifetime of a given long conversation. For this, we need a stateful context to make sure all conversation requests find the same detached entities. Therefore we can make use of:

- Stateful Session Beans: Stateful session beans are one of the greatest features offered by Java Enterprise Edition.
They hide all the complexity of saving/loading state between different user requests. Being a built-in feature, they automatically benefit from cluster replication, so the developer can concentrate on business logic instead. Seam is a Java EE application framework that has built-in support for web conversations.

- HttpSession: We can save the detached objects in the HttpSession. Most web/application servers offer session replication, so this option can be used by non-JEE technologies, like the Spring framework. Once the conversation is over, we should always discard all associated state, to make sure we don't bloat the Session with unnecessary storage. You need to be careful to synchronize all HttpSession access (getAttribute/setAttribute), because, for a very strange reason, this web storage is not thread-safe. Spring Web Flow is a Spring MVC companion that supports HttpSession web conversations.
- Hazelcast: Hazelcast is an in-memory clustered cache, so it's a viable solution for long conversation storage. We should always set an expiration policy, because in a web application conversations might be started and abandoned.
Expiration acts like Http session invalidation.

The stateless conversation anti-pattern

Like with database transactions, we need repeatable reads, as otherwise we might load an already modified record without realizing it:

1. Alice requests a product to be displayed
2. The product is fetched from the database and returned to the browser
3. Alice requests a product modification
4. Because Alice hasn't kept a copy of the previously displayed object, she has to reload it once again
5. The product is updated and saved to the database
6. Any concurrent update (such as a batch job's) made in the meantime has been lost, and Alice will never realize it

The stateful version-less conversation anti-pattern

Preserving conversation state is a must if we want to ensure both isolation and consistency, but we can still run into lost update situations. Even if we have application-level repeatable reads, others can still modify the same entities. Within the context of a single database transaction, row-level locks can block concurrent modifications, but this is not feasible for logical transactions. The only option is to allow others to modify any rows, while preventing stale data from being persisted.

Optimistic locking to the rescue

Optimistic locking is a general-purpose concurrency control technique, and it works for both physical and application-level transactions. Using JPA is only a matter of adding a @Version field to our domain models.

Conclusion

Pushing database transaction boundaries into the application layer requires an application-level concurrency control. To ensure application-level repeatable reads we need to preserve state across multiple user requests, but in the absence of database locking we need to rely on an application-level concurrency control. Optimistic locking works for both database and application-level transactions, and it doesn't make use of any additional database locking.
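The version check that a @Version field triggers can be sketched in plain Java. This mimics the UPDATE ... WHERE version = ? statement that optimistic locking relies on; it is a sketch of the mechanism, not Hibernate's actual implementation:

```java
public class OptimisticLockSketch {

    static class ProductRow {
        String description = "initial";
        int version = 0;
    }

    // Mimics: UPDATE product SET description = ?, version = version + 1
    //         WHERE id = ? AND version = ?
    // Returns false when no row matched, i.e. the caller's snapshot was stale.
    static boolean update(ProductRow stored, String newDescription, int expectedVersion) {
        if (stored.version != expectedVersion) {
            return false; // JPA would throw an OptimisticLockException here
        }
        stored.description = newDescription;
        stored.version++;
        return true;
    }

    public static void main(String[] args) {
        ProductRow product = new ProductRow();

        int aliceVersion = product.version;       // Alice reads version 0
        update(product, "batch job update", 0);   // a batch job commits first -> version 1

        // Alice's stale write is rejected instead of silently overwriting the batch update
        System.out.println(update(product, "alice update", aliceVersion)); // false
        System.out.println(product.description);                           // batch job update
    }
}
```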
Optimistic locking can prevent lost updates, and that's why I always recommend all entities be annotated with the @Version attribute.

Reference: Preventing lost updates in long conversations from our JCG partner Vlad Mihalcea at the Vlad Mihalcea's Blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.