Java SE 8 New Features Tour: The Big Change in the Java Development World

I am proud to be one of the Adopt OpenJDK members, alongside other professional team members, although I only joined eight months ago. We went through all the stages of Java SE 8 development, compilation, coding, discussions and so on, until we brought it to life; it was released on March 18th, 2014 and is now available to you. I am happy to announce this series, "Java SE 8 New Features Tour", which I am going to write with examples to streamline your Java SE 8 knowledge and development experience. It covers the new features and APIs that will leverage your knowledge, enhance the way you code, and increase your productivity. I hope you enjoy it as much as I have enjoyed writing it.

We will take a tour of the major and important new features in Java SE 8 (projects and APIs), a platform designed to support faster and easier Java development. We will learn about Project Lambda, a new syntax to support lambda expressions in Java code; the new Stream API for processing collections and managing parallel processing; and the Date and Time API for representing, managing and calculating date and time values. In addition, we will look at Nashorn, a new engine to better support the use of JavaScript code with the Java Virtual Machine. Finally, I will also cover some lesser-known features, such as new methods for joining strings into lists, and other features that will help you in daily tasks. For more about Java SE 8 features and tutorials, I advise you to consult the official Java Tutorials site and the Java SE 8 API documentation.

The topics we are going to cover during this series include:

- Installing Java SE 8: notes and advice
- Introducing Java SE 8 main features: the big change
- Working with lambda expressions and method references
- Traversing collections with streams
- Calculating timespans with the new Date and Time API
- Running JavaScript from Java with Nashorn
- Miscellaneous new features and API changes

Installing Java SE 8: notes and advice

Installing Java SE 8 on Windows

In order to run Java SE 8 on Microsoft Windows, first check which version of Windows you have. Java SE 8 is supported on Windows 8, 7, Vista, and XP. Specifically: for Windows 8 or 8.1, you'll need the desktop version of Windows (Windows RT is not supported); you can run Java SE 8 on any version of Windows 7, and on the most recent versions of Windows Vista and Windows XP. On server-based versions of Windows, you can run 2008 and the 64-bit version of 2012. If you want to work with Java applets, you'll need a 64-bit browser; these include Internet Explorer 7.0 and above, Firefox 3.6 and above, and Google Chrome, which is supported on Windows but not on Mac. You can download the Java Development Kit for Java SE 8 from java.oracle.com. That will take you to the current Java home page; click Java SE under Top Downloads, then click the Download link for Java 8.

Installing Java SE 8 on Mac

In order to work with Java SE 8 on Mac OS X, you must have an Intel-based Mac running Mac OS X 10.7.3 (Lion) or later. If you have an older version of Mac OS, you won't be able to program or run Java 8 applications. To install Java SE 8 you'll need administrative privileges on your Mac, and to run Java applets within a browser you'll need to use a 64-bit browser, such as Safari or Firefox (Google Chrome is a 32-bit browser and won't work for this purpose). As described for the Windows installation, the same website has the Mac OS .dmg version to download and install; in fact, the download page contains versions for all operating systems, though our focus here is on Windows and Mac.
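As a quick sanity check after installing (my own addition, not part of the original series), you can print the version string the runtime reports; the class name here is illustrative only:

public class VersionCheck {

    public static void main(String[] args) {
        // Should print a version string starting with "1.8"
        // when running on a Java SE 8 runtime.
        System.out.println(System.getProperty("java.version"));
    }
}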
Now you're ready to start programming with Java SE 8 on both the Windows and Mac OS X platforms. After we have installed Java SE 8 properly, let's dive into the first topic and have a look at the main Java SE 8 features in a nutshell, to begin our coding tour in our favorite IDE.

Introducing Java SE 8 main features: the big change

An overview of JSR 337: Java SE 8 Release Contents

Java SE 8 is a major release for both the Java programming language and the Java virtual machine, and it includes many changes. Some have gotten more coverage than others, like lambda expressions, but I'm going to talk about both the major changes and a few of the minor ones.

JSR 335: Lambda Expressions

Probably the most attention has gone to Project Lambda, a set of new syntactical capabilities that let Java developers work as functional programmers. This includes lambda expressions, method references and a few other capabilities.

JSR 310: Date and Time API

There is a new API for managing dates and times, replacing the older classes. Those older classes are still in the Java runtime, but as you build new applications you might want to move to this new set of capabilities, which lets you streamline your code and be a little more intuitive in how you program. There are new classes to manage local dates, times and time zones, and to calculate differences between times.

The Stream API

The Stream API adds new tools for managing collections, including lists, maps, sets and so on. A stream allows you to deal with each item in a collection without having to write explicit looping code. It also lets you split your processing across multiple CPUs, so for large, complex data sets you can see significant performance improvements.

Project Nashorn

The Nashorn JavaScript engine is also new to Java SE 8. This is a completely new JavaScript engine, written from scratch, that lets you code in JavaScript and integrate Java classes and objects. Nashorn's goal is to implement a lightweight, high-performance JavaScript runtime in Java on a native JVM. The project enables Java developers to embed JavaScript in Java applications via JSR 223 and to develop freestanding JavaScript applications using the jrunscript command-line tool. In the article on Nashorn, I'll describe how to run Nashorn code from the command line, and also how to write JavaScript in separate files and then execute those files from your Java code.

Concurrency API enhancements

There are also enhancements to the concurrency framework, which let you manage and accumulate values in multiple threads.

New tools for strings and numbers

There are lots of smaller changes as well: new tools for creating delimited lists in the String class and other new classes, and tools for aggregating numbers, including integers, longs, doubles and so on.

Miscellaneous new features

There are also tools for doing a better job of detecting null situations, as well as new convenience methods for working with files. I'll describe all of these during the series.
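To give a first taste of two of these features before the dedicated articles, here is a minimal sketch of my own (not from the series; the class and variable names are illustrative):

import java.time.LocalDate;
import java.time.temporal.ChronoUnit;
import java.util.Arrays;
import java.util.List;

public class Java8Taster {

    public static void main(String[] args) {
        // Stream API: filter and sum a collection without an explicit loop.
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);
        int sumOfEvens = numbers.stream()
                                .filter(n -> n % 2 == 0)
                                .mapToInt(Integer::intValue)
                                .sum();
        System.out.println(sumOfEvens); // prints 6

        // Date and Time API: calculate a timespan between two dates.
        LocalDate releaseDate = LocalDate.of(2014, 3, 18);
        long daysSinceRelease = ChronoUnit.DAYS.between(releaseDate, LocalDate.now());
        System.out.println(daysSinceRelease);
    }
}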
So, when is Java SE 8 available? The answer is: now. It was released on March 18, 2014. For developers who use Java to build client-side applications, the JavaFX rich internet application framework supports Java 8 now, and most of the Java Enterprise Edition vendors support Java 8 too. Whether you move to Java SE 8 right away depends on the kinds of projects you're working on. For many server and client-side applications, it's available immediately. It is not available for Android yet, though; Android developers beware, Java SE 8 syntax and APIs are not supported in Android at this point. It's only very recently that Android moved to some of the newer Java 7 syntax, so it might take some time before Android supports this newest syntax or the newest APIs. But for all other Java developers, it's worth taking a look at these new capabilities.

What about IDEs? Java SE 8 is supported by all of the major Java development environments, including Oracle's NetBeans, IntelliJ IDEA, and Eclipse. For this series I'll be doing all of my demos in NetBeans, version 8, which is available to download from https://netbeans.org/downloads/.

Before we dive into the series, let's first check that we have installed Java SE 8 properly and start a new NetBeans project, which will contain all the code we are going to write. Then we'll develop a small piece of lambda code to test whether the project works properly with Java SE 8. Alternatively, you can download the series source code from my GitHub account, open it with NetBeans, and follow along with what I show next and in the upcoming articles. Project on GitHub: https://github.com/mohamed-taman/JavaSE8-Features

Hello world application on Java SE 8 with a lambda expression. Steps (not required if you are navigating my code):

1. Open NetBeans 8; from File, choose New Project; on the left choose Maven, on the right choose Java Application; click Next. Follow the variable definitions shown in the screenshot, or change them to your favorite names and values, then click Finish.
2. If everything is okay, you should have the structure shown in the project navigator.
3. Click on the project "Java8Features", click File in the upper menu, then Project Properties. Under Categories, choose Sources on the left and check that "Source/Binary Format" is 1.8. Then open Build on the left, choose Compiler, and check that "Java Platform" points to your current JDK 8 installation. Click OK. If JDK 8 is not present, go to Tools, choose Java Platforms, then Add Platform, choose Java Standard Edition, and point to your installed JDK 8.
4. Now our project is configured to work with Java 8, so let's add some lambda code. On the package "eg.com.tm.java8.features", right-click and select New from the menu, then Java Interface. Name it Printable, put it under the package "eg.com.tm.java8.features.overview", and click Finish.
5. Implement the Printable interface as follows:
/*
 * Copyright (C) 2014 mohamed_taman
 *
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <http://www.gnu.org/licenses/>.
 */
package eg.com.tm.java8.features.overview;

/**
 * @author mohamed_taman
 */
@FunctionalInterface
public interface Printable {
    public void print();
}

6. In the same package, add the following class named "Print", with a main method, as follows:

/*
 * Copyright (C) 2014 mohamed_taman
 *
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <http://www.gnu.org/licenses/>.
 */
package eg.com.tm.java8.features.overview;

import static java.lang.System.out;

/**
 * @author mohamed_taman
 */
public class Print {
    public static void main(String[] args) {
        Printable job = () -> out.println("Java SE 8 is working "
                + "and Lambda Expression too.");
        job.print();
    }
}

7. Right-click on the Print class and choose Run. If everything is okay, you should see the following output:

------------------------------------------------------------------------
Building Java8Features 1.0-SNAPSHOT
------------------------------------------------------------------------
--- exec-maven-plugin:1.2.1:exec (default-cli) @ Java8Features ---
Java SE 8 is working and Lambda Expression too.
------------------------------------------------------------------------
BUILD SUCCESS

Congratulations, your Java SE 8 project works fine, so let's explain what we have written. Most of this code would work on Java 7, but there's an annotation here that was added in Java SE 8: @FunctionalInterface. If your NetBeans environment isn't correctly configured for Java 8, this annotation will cause an error, because it won't be recognized as valid Java code. I don't see an error, so that's a good sign that NetBeans is working as I hoped. Next, look at the class definition named Print.java. This is a class with a main method, so I can run it as a console application, and it contains a critical line of new Java 8 syntax: it creates an instance of the functional interface I just showed you using a lambda expression, a style of syntax that didn't exist in Java prior to Java 8. I'll explain what this syntax is doing early in the next article, but all you need to know right now is that if this code isn't causing any errors, then once again NetBeans is recognizing it as valid Java syntax. I'm creating an instance of that interface and then calling that interface's print method. So, I'll run the code: I click the Run button on my toolbar, and in my console I see a successful result. I've created an object which is an instance of that interface using a lambda expression, I've called its method, and it has output a string to the console. If this is all working, you're in great shape: you're ready to get started programming with Java SE 8 in NetBeans. If you had any problems along the way, go back and walk through the earlier steps one step at a time.
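As a side note for readers coming from Java 7: the lambda above is roughly equivalent to the following anonymous inner class (my own comparison sketch, assuming the same static import of System.out used in the Print class):

// Java 7 style: an anonymous inner class implementing Printable.
Printable job = new Printable() {
    @Override
    public void print() {
        out.println("Java SE 8 is working "
                + "and Lambda Expression too.");
    }
};
job.print();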
Resources:

- The Java Tutorials, Lambda Expressions
- JSR 310: Date and Time API
- JSR 337: Java SE 8 Release Contents
- OpenJDK website
- Java Platform, Standard Edition 8, API Specification

Reference: Java SE 8 New Features Tour: The Big Change in the Java Development World from our JCG partner Mohamed Taman at the Improve your life Through Science and Art blog.

Oracle Certified Associate and Professional, Java SE 7 Programmer

This certification was one of the first exams I considered after I was done with my college courses on Java and object-oriented programming. This was a time when I had started working in programming and needed to improve my rather basic knowledge in this area. However, it took me almost two years to decide to go for it (during which Java SE 7 arrived and Oracle revamped the certification path). This had both positive and negative effects. The upsides include more recent language knowledge being tested, as well as a great way to prepare for both the certification and my thesis. On the other hand, the older SCJP exam for Java 6 was split into two exams, increasing the overall price, and the new path covered far more ground because of the additions in the Java 7 release.

About the certification

Let's start with a basic description of both exams. Neither exam requires any training, course or additional activity other than taking the exam itself. Based on the lists of topics for each exam, and also on my experience, these exams do not overlap when it comes to the areas being tested. However, you might get a question that also tests some objective from OCAJP, so remember that OCPJP expects you to know the OCAJP material and will use examples including syntax covered there. The exam is administered at the test center of your choosing using the standard Oracle/Pearson VUE testing software. When it comes to ordering and taking these exams, it is a pretty automated process and there were no problems at all. The following table presents the most important information regarding the exams.

                      Associate (OCAJP 7)   Professional (OCPJP 7)   Upgrade (OCPJP 7)
Exam Number           1Z0-803               1Z0-804                  1Z0-805
Prerequisites         none                  1Z0-803                  SCJP 6.0 (by Sun as CX-310-065)
Exam Topics           associate topics      professional topics     upgrade topics
Exam Format           Multiple Choice       Multiple Choice          Multiple Choice
Duration              150 minutes           150 minutes              150 minutes
Number of Questions   90                    90                       80
Passing Score         63% (57 questions)    65% (59 questions)       60% (48 questions)
Price                 US$ 245               US$ 245                  US$ 245

Note: you may have seen a different passing score for the OCAJP 7 exam; that was caused by a few changes made to the exam, and these are the official values as of the writing of this post. There is an upgrade exam for those of you who already own the SCJP 6 certification. It tests your knowledge in the areas missing from SCJP 6, and if you pass it, you will earn the OCPJP 7 certificate. If you are new to this and have no experience with the testing process, it might come as a surprise: conditions during the exam are pretty strict. You are going to be recorded by several cameras, and regulations prohibit any items other than an ID card and the pen and blank sheet given to you by the test center representative. The quality of test centers varies widely, so be sure to ask people who have already been tested for their opinions and advice.

The exam

Both exams give you 150 minutes to complete a set of 90 questions. All the questions I encountered in both exams were in the form of either "select the correct answer" or "choose all that apply". Even though some people have mentioned drag-and-drop questions in the exam, neither I nor my colleagues have seen any. When it comes to the questions, please be careful: as always, read each question carefully, and if in doubt go word for word until you get the point. OCAJP 7 has its questions distributed pretty evenly across the exam objectives. The OCPJP 7 exam, however, presented a little twist in the form of ever-present threads.
I'm not saying every question included threads, but most of the questions did. Another thing that made this exam more interesting was the questions regarding patterns and design principles. You have to be able to identify good and bad design (loose/tight coupling, high/low cohesion, …) and also be able to tell which statements are true about a given example. The aspects of time and comfort during the exam change drastically when you move on to OCPJP 7. Let me give an example from my own exams. The OCAJP 7 exam took me about an hour to complete, and I took another hour to thoroughly check all of my answers; after doing so, I decided to turn the test in early, since I felt there was nothing more to do (please note that this was in 2012, when I took the exam). However, when I was doing the OCPJP 7 exam, it took me almost the whole 150 minutes to complete, leaving me with time to check only the first 4 questions! Having said that, please don't get stuck on one question for too long (unless you are a skilled veteran). You can always mark a question for review and come back to it after you are done with all the questions you can answer without further analysis. In the case of OCPJP 7, getting stuck on a few questions can leave you with unanswered questions at the end, so manage your time carefully. The complexity of the questions rises dramatically, and you need to take that into account during your preparation.

Preparation

OCAJP 7

My primary resource for this exam was the so-called K&B 6 book (check out the resources below). As you might have noticed, this book was published for Java 6, so it is missing all the additions in Java 7: mainly Project Coin (syntax changes), the new frameworks for concurrency and IO, and other improvements. However, the style of this book is suited to beginners and will prepare you for the exam in the areas it covers. I spent several weeks preparing, due to the length of the book and my workload at the time. This, combined with a handful of mock tests, self-study of Project Coin, and playing around with code, was enough to prepare me for the exam.

OCPJP 7

In the case of OCPJP 7, I didn't bet on a single book, because K&B 6 covered only topics relevant to Java 6. Based on reviews and the titles available at the time of purchase, I decided to go with Oracle Certified Professional Java SE 7 Programmer Exams 1Z0-804 and 1Z0-805: A Comprehensive OCPJP 7 Certification Guide. These books provided enough ground for me to start playing around with code. After 4 or 5 weeks I started with Enthuware's lab and took the exam in the 7th week. Unlike OCAJP 7, this exam really requires some coding experience, due to the parts that test more than your knowledge of code structure, compilation or program behavior. So keep in mind: the best way to prepare for both exams is to code.

Helpful notes from fellow bloggers:

OCAJP 7
- jeanne's oca/ocajp java programmer I experiences
- How To Prepare for OCAJP 7 Certification Exam
- Passed My OCAJP 7 Certification Exam

OCPJP 7
- jeanne's ocpjp java programmer II experiences
- Top 5 myths and misconceptions about OCPJP 7 exam
- OCPJP 7 1Z0-804 Oracle Certified Professional Java SE 7 Programmer Success Story

Resources

Books

OCAJP 7

SCJP Sun Certified Programmer for Java 6 Exam 310-065 by Bert Bates and Katherine Sierra (known as the K&B book)
A really great book, especially for beginners. The only downside is that when you already know enough about certain topics, the reading becomes long and somewhat boring (since the book is suitable even for people just learning Java).
However, it is a really good resource for anyone, and it may even present facts about Java and compilation that you had no idea were true. The book also contains a handful of mock questions and whole mock tests. I was able to complete my preparation for OCAJP 7 almost solely using this book. Its only downside is that it was written for Java 6 and does not incorporate the syntax changes introduced in Java 7, like try-with-resources, strings in switch, multi-catch, exception rethrow and others. One of the authors published a summary of the topics covered by the book. You might also consider getting the newer version, OCA/OCP Java SE 7 Programmer I & II Study Guide.

OCP Java SE 6 Programmer Practice Exams by Bert Bates and Katherine Sierra
If you find yourself in doubt about whether you are ready for the exam, you can check out this book from the same duo as the previous one. Note, however, that the tests include topics now covered by OCPJP 7, so bear that in mind. I tried several of those tests and I can recommend this book as well; it complements the first one pretty well. With these two at hand you have a pretty solid foundation for your exam preparations.

OCPJP 7

Oracle Certified Professional Java SE 7 Programmer Exams 1Z0-804 and 1Z0-805: A Comprehensive OCPJP 7 Certification Guide by S.G. Ganesh and Tushar Sharma
When I started preparing for OCPJP 7 there were not many books or guides around; this guide was one of those available at the time. Having read it, I can say it explains the exam objectives pretty clearly, with nice examples. It is shorter than K&B and more focused, which means it is no longer targeted at beginners and expects knowledge of the concepts from OCAJP 7. One thing I really liked was all the examples throughout the book, especially in the area of concurrency and threads. In spite of a few grammatical errors, I can recommend this book, since I used it as my primary resource.

Pro Java 7 NIO.2 by Anghel Leonard
This book is not directly related to the certification, but I happened to read it during my thesis preparations. It is safe to say that it covers the NIO.2 objectives quite nicely, though its scope is way broader than what is required for the exam. It served me well, so I decided to include it in this list.

Labs

OCAJP 7
None.

OCPJP 7
Enthuware Labs for Oracle Certified Professional – Java SE 7 Programmer Exam
There are several companies producing labs with mock tests. Based on reviews online, I decided to try out Enthuware's lab. The lab itself works well: you can track your progress, focus on particular exam areas, benchmark your time, and do the usual things you would expect of this kind of software. All questions are marked based on their difficulty, and this markup can be hidden in the settings. I found the questions marked 'very easy' and 'easy' not worth my time, so I did not bother with them. The higher difficulties provide interesting questions and the opportunity to solidify your knowledge in the respective areas. I would say it is a good product for the price.

My own

The last things I am going to highlight are some of my own articles that you may find useful.

OCAJP 7
None so far.

OCPJP 7
Beauty and strangeness of generics
A short article about almost all the generics gotchas present on OCPJP 7 (except interoperability between pre-generic and current collection code).

NIO.2 series
An ongoing series of posts going into more detail than required on the actual exam.
However, you will gain solid knowledge in the areas it covers.

Certificate

Oracle uses its Oracle University CertView application to manage your interaction with them, so if you have not registered there yet, you will have to. When you are all done and have received the confirmation emails from Oracle, you should be able to see a table similar to this in your CertView profile under Review My Exam History and Exam Results:

Test Start Date   Exam Number   Exam Title                Grade Indicator   Score Report
04-MAR-14         1Z0-804       Java SE 7 Programmer II   PASS              View (link)
12-SEP-12         1Z0-803       Java SE 7 Programmer I    PASS              View (link)

You will be asked to fill in the address where a hard copy will be sent (you are not required to do this). In my experience it took one or two weeks for the mail to arrive; a PDF version is always available in CertView under Review My Certification History. The envelope contains the certificate along with a card that proves your accomplishment (though I have not yet found any use for it). The last thing you are entitled to is the use of the Oracle Certified Associate and Professional logos. They are available in CertView, so you can download them and use them in your CV or on your web page.

Conclusion

Well, it was a rather long way (as you might have noticed, it took me a little more than two and a half years to complete these exams), but also a rewarding one. Preparation for these exams is a long journey that offers a lot of new insights into Java and the compiler. It is quite possible that you will develop a certain love-hate relationship with the compiler itself (and will be able to replace it in many cases!). There were many areas I knew only from my college years that needed improvement, since I wasn't using them in my work. After all the studying and playing around with little code snippets, I could honestly feel the improvement in certain areas. You might learn things that will allow you to produce fewer lines of more readable, easily understandable code. And this is why I would recommend these exams to you: your general understanding of the code will increase, among other positive things. The only downside is the rather big scope of the exams and the time required for preparation. All in all, a great learning experience and a great way to discuss the things you do and like with your friends and colleagues. So if you are considering these exams, I invite you to try them and wish you the best of luck on your way to becoming an Oracle Certified Professional.

Reference: Oracle Certified Associate and Professional, Java SE 7 Programmer from our JCG partner Jakub Stas at the Jakub Stas blog.

A Tour Through elasticsearch-kopf

When I need a plugin to display the cluster state of Elasticsearch, or need some insight into the indices, I normally reach for the classic plugin elasticsearch-head. As elasticsearch-kopf is recommended a lot and seems to be its unofficial successor, I recently took a more detailed look at it. And I liked it. I am not sure why elasticsearch-kopf came into existence, but it seems to be a clone of elasticsearch-head (kopf means head in German, so it is even the same name).

Installation

elasticsearch-kopf can be installed like most plugins, using the script in the Elasticsearch installation. This is the command that installs version 1.1, which is suitable for the 1.1.x branch of Elasticsearch:

bin/plugin --install lmenezes/elasticsearch-kopf/1.1

elasticsearch-kopf is then available at the URL http://localhost:9200/_plugin/kopf/.

Cluster

On the front page you will see a diagram similar to the one elasticsearch-head provides: an overview of your cluster with all the shards and their distribution across the nodes. The page refreshes itself, so you will see joining or leaving nodes immediately. You can adjust the refresh rate in the settings dropdown just next to the kopf logo (by the way, the header reflects the state of the cluster, so it might change its color from green to yellow to red).

Lots of different settings can also be reached via this page. On top of the node list there are four icons: for creating a new index, deactivating shard allocation, the cluster settings, and the cluster diagnosis options. Creating a new index brings up a form for entering the index data; you can also load the settings from an existing index, or just paste the settings JSON into the field on the right side. The icon for disabling shard allocation simply toggles it; disabling shard allocation can be useful during a cluster restart. Using the cluster settings you can reach a form where you can adjust lots of values regarding your cluster, routing and recovery. Finally, the cluster health button lets you load different JSON documents containing more details on the cluster health, e.g. the node stats and the hot threads.

Using the little dropdown next to the index name you can execute some operations on the index: you can view the settings, open and close the index, optimize and refresh it, clear the caches, adjust the settings, or delete the index. When opening the form for the index settings you will be overwhelmed at first; I didn't know there were so many settings. What is really useful is that there is an info icon next to each field that tells you what the field is about, a great opportunity to learn about some of the settings. I also find it really useful that you can adjust the slow index log settings directly; the slow log can be used to log any incoming queries, so it is sometimes useful for diagnostic purposes. Finally, back on the cluster page, you can get more detailed information on the nodes or shards by clicking on them, which opens a lightbox with more details.

REST

The rest menu entry on top brings you to another view, similar to the one Sense provided. You can enter queries and have them executed for you. There is a request history, you have highlighting, and you can format the request document, but unfortunately the interface is missing autocompletion. Nevertheless, I suppose this can be useful if you don't like to fiddle with curl.
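As an illustration (my own, not from the original post), a minimal request you might paste into the REST view is a match_all search; any valid Elasticsearch request body works here:

POST /_search
{
  "query": {
    "match_all": {}
  }
}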
Aliases

The aliases tab gives you a convenient form for managing your index aliases and all the relevant additional information. You can add filter queries for your alias or influence the index or search routing. On the right side you can see the existing aliases and remove them if they are not needed.

Analysis

The analysis tab brings you to a feature that is also very popular in the Solr administration view: you can test the analyzers for different values and different fields. This is a very valuable tool while building a more complex search application. Unfortunately, the information you can get from Elasticsearch is not as detailed as what you can get from Solr: it will only contain the end result, so you can't really see which tokenizer or filter caused a certain change.

Percolator

On the percolator tab you can use a form to register new percolator queries and view existing ones. There doesn't seem to be a way to do the actual percolation, but this page can be useful if you use the percolator extensively.

Warmers

The warmers tab can be used to register index warmer queries.

Repository

The final tab is for the snapshot and restore feature. You can create repositories and snapshots and restore them. Though I imagine most people automate snapshot creation, this can be a very useful form.

Conclusion

I hope you could see in this post that elasticsearch-kopf can be really useful. It is very unlikely that you will ever need all of the forms, but it is good to have them available. The cluster view and the REST interface can be very valuable for your daily work, and I guess there will be new features coming in the future.

Reference: A Tour Through elasticsearch-kopf from our JCG partner Florian Hopf at the Dev Time blog.

Java 8 Friday: 10 Subtle Mistakes When Using the Streams API

At Data Geekery, we love Java. And as we're really into jOOQ's fluent API and query DSL, we're absolutely thrilled about what Java 8 will bring to our ecosystem.

Java 8 Friday

Every Friday, we're showing you a couple of nice new tutorial-style Java 8 features, which take advantage of lambda expressions, extension methods, and other great stuff. You'll find the source code on GitHub.

10 Subtle Mistakes When Using the Streams API

We've done all the SQL mistakes lists:

- 10 Common Mistakes Java Developers Make when Writing SQL
- 10 More Common Mistakes Java Developers Make when Writing SQL
- Yet Another 10 Common Mistakes Java Developers Make When Writing SQL (You Won't BELIEVE the Last One)

But we haven't done a top 10 mistakes list for Java 8 yet! For today's occasion (it's Friday the 13th), we'll catch up with what will go wrong in YOUR application when you're working with Java 8 (it won't happen to us, as we're stuck with Java 6 for another while).

1. Accidentally reusing streams

Wanna bet? This will happen to everyone at least once. Like the existing "streams" (e.g. InputStream), you can consume streams only once. The following code won't work:

IntStream stream = IntStream.of(1, 2);
stream.forEach(System.out::println);

// That was fun! Let's do it again!
stream.forEach(System.out::println);

You'll get a:

java.lang.IllegalStateException: stream has already been operated upon or closed

So be careful when consuming your stream. It can be done only once.

2. Accidentally creating "infinite" streams

You can create infinite streams quite easily without noticing. Take the following example:

// Will run indefinitely
IntStream.iterate(0, i -> i + 1)
         .forEach(System.out::println);

The whole point of streams is that they can be infinite, if you design them to be. The only problem is that you might not have wanted that. So, be sure to always put proper limits:

// That's better
IntStream.iterate(0, i -> i + 1)
         .limit(10)
         .forEach(System.out::println);

3. Accidentally creating "subtle" infinite streams

We can't say this enough: you WILL eventually create an infinite stream, accidentally. Take the following stream, for instance:

IntStream.iterate(0, i -> (i + 1) % 2)
         .distinct()
         .limit(10)
         .forEach(System.out::println);

So:

- we generate alternating 0s and 1s
- then we keep only distinct values, i.e. a single 0 and a single 1
- then we limit the stream to a size of 10
- then we consume it

Well… the distinct() operation doesn't know that the function supplied to the iterate() method will produce only two distinct values. It might expect more than that. So it'll forever consume new values from the stream, and the limit(10) will never be reached. Tough luck, your application stalls.

4. Accidentally creating "subtle" parallel infinite streams

We really need to insist that you might accidentally try to consume an infinite stream. Let's assume you believe that the distinct() operation should be performed in parallel. You might be writing this:

IntStream.iterate(0, i -> (i + 1) % 2)
         .parallel()
         .distinct()
         .limit(10)
         .forEach(System.out::println);

Now, we've already seen that this will run forever. But previously, at least, you only consumed one CPU on your machine. Now, you'll probably consume four of them, potentially occupying pretty much all of your system with an accidental infinite stream consumption. That's pretty bad. You can probably hard-reboot your server / development machine after that. Have a last look at what my laptop looked like prior to exploding:
5. Mixing up the order of operations

So, why did we insist that you will definitely, accidentally create infinite streams? It's simple: because you may just accidentally do it. The above stream can be perfectly consumed if you switch the order of limit() and distinct():

IntStream.iterate(0, i -> (i + 1) % 2)
         .limit(10)
         .distinct()
         .forEach(System.out::println);

This now yields:

0
1

Why? Because we first limit the infinite stream to 10 values (0 1 0 1 0 1 0 1 0 1), before we reduce the limited stream to the distinct values contained in it (0 1). Of course, this may no longer be semantically correct, because you really wanted the first 10 distinct values from a set of data (you just happened to have "forgotten" that the data is infinite). No one really wants 10 random values only to then reduce them to the distinct ones. If you're coming from a SQL background, you might not expect such differences. Take SQL Server 2012, for instance. The following two SQL statements are the same:

-- Using TOP
SELECT DISTINCT TOP 10 *
FROM i
ORDER BY ..

-- Using FETCH
SELECT *
FROM i
ORDER BY ..
OFFSET 0 ROWS
FETCH NEXT 10 ROWS ONLY

So, as a SQL person, you might not be as aware of the importance of the order of stream operations.

6. Mixing up the order of operations (again)

Speaking of SQL, if you're a MySQL or PostgreSQL person, you might be used to the LIMIT .. OFFSET clause. SQL is full of subtle quirks, and this is one of them: the OFFSET clause is applied FIRST, as suggested by SQL Server 2012's (i.e. the SQL:2008 standard's) syntax. If you translate MySQL / PostgreSQL's dialect directly to streams, you'll probably get it wrong:

IntStream.iterate(0, i -> i + 1)
         .limit(10) // LIMIT
         .skip(5)   // OFFSET
         .forEach(System.out::println);

The above yields:

5
6
7
8
9

Yes. It doesn't continue after 9, because the limit() is now applied first, producing (0 1 2 3 4 5 6 7 8 9). skip() is applied after, reducing the stream to (5 6 7 8 9). Not what you may have intended. BEWARE of the LIMIT .. OFFSET vs. "OFFSET .. LIMIT" trap!

7. Walking the file system with filters

We've blogged about this before. What appears to be a good idea is to walk the file system using filters:

Files.walk(Paths.get("."))
     .filter(p -> !p.toFile().getName().startsWith("."))
     .forEach(System.out::println);

The above stream appears to walk only through non-hidden directories, i.e. directories that do not start with a dot. Unfortunately, you've again made mistakes #5 and #6. walk() has already produced the whole stream of subdirectories of the current directory: lazily, but logically containing all sub-paths. Now, the filter will correctly filter out paths whose names start with a dot ".". E.g. .git or .idea will not be part of the resulting stream. But these paths will be: .\.git\refs, or .\.idea\libraries. Not what you intended. Now, don't fix this by writing the following:

Files.walk(Paths.get("."))
     .filter(p -> !p.toString().contains(File.separator + "."))
     .forEach(System.out::println);

While that will produce the correct output, it will still do so by traversing the complete directory subtree, recursing into all subdirectories of "hidden" directories. I guess you'll have to resort to good old JDK 1.0 File.list() again. The good news is, FilenameFilter and FileFilter are both functional interfaces.
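For completeness, here is a minimal sketch of that File.list() route (my code, not the original author's); it lists only the visible entries of a single directory and does not recurse at all:

import java.io.File;

public class VisibleFiles {

    public static void main(String[] args) {
        // FilenameFilter is a functional interface, so a lambda works here.
        String[] visible = new File(".").list(
            (dir, name) -> !name.startsWith("."));

        if (visible != null) { // null if "." is not a readable directory
            for (String name : visible) {
                System.out.println(name);
            }
        }
    }
}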
8. Modifying the backing collection of a stream

While you're iterating a List, you must not modify that same list in the iteration body. That was true before Java 8, but it might become more tricky with Java 8 streams. Consider the following list from 0..9:

// Of course, we create this list using streams:
List<Integer> list = IntStream.range(0, 10)
                              .boxed()
                              .collect(toCollection(ArrayList::new));

Now, let's assume that we want to remove each element while consuming it:

list.stream()
    // remove(Object), not remove(int)!
    .peek(list::remove)
    .forEach(System.out::println);

Interestingly enough, this will work for some of the elements! The output you might get is this one:

0 2 4 6 8 null null null null null
java.util.ConcurrentModificationException

If we introspect the list after catching that exception, there's a funny finding. We'll get:

[1, 3, 5, 7, 9]

Heh, it "worked" for all the odd numbers. Is this a bug? No, it looks like a feature. If you delve into the JDK code, you'll find this comment in ArrayList.ArrayListSpliterator:

/*
 * If ArrayLists were immutable, or structurally immutable (no
 * adds, removes, etc), we could implement their spliterators
 * with Arrays.spliterator. Instead we detect as much
 * interference during traversal as practical without
 * sacrificing much performance. We rely primarily on
 * modCounts. These are not guaranteed to detect concurrency
 * violations, and are sometimes overly conservative about
 * within-thread interference, but detect enough problems to
 * be worthwhile in practice. To carry this out, we (1) lazily
 * initialize fence and expectedModCount until the latest
 * point that we need to commit to the state we are checking
 * against; thus improving precision. (This doesn't apply to
 * SubLists, that create spliterators with current non-lazy
 * values). (2) We perform only a single
 * ConcurrentModificationException check at the end of forEach
 * (the most performance-sensitive method). When using forEach
 * (as opposed to iterators), we can normally only detect
 * interference after actions, not before. Further
 * CME-triggering checks apply to all other possible
 * violations of assumptions for example null or too-small
 * elementData array given its size(), that could only have
 * occurred due to interference. This allows the inner loop
 * of forEach to run without any further checks, and
 * simplifies lambda-resolution. While this does entail a
 * number of checks, note that in the common case of
 * list.stream().forEach(a), no checks or other computation
 * occur anywhere other than inside forEach itself. The other
 * less-often-used methods cannot take advantage of most of
 * these streamlinings.
 */

Now, check out what happens when we tell the stream to produce sorted() results:

list.stream()
    .sorted()
    .peek(list::remove)
    .forEach(System.out::println);

This will now produce the following, "expected" output:

0 1 2 3 4 5 6 7 8 9

And the list after stream consumption? It is empty:

[]

So, all elements are consumed and removed correctly. The sorted() operation is a "stateful intermediate operation", which means that subsequent operations no longer operate on the backing collection, but on an internal state. It is now "safe" to remove elements from the list! Well… can we really? Let's proceed with a parallel(), sorted() removal:

list.stream()
    .sorted()
    .parallel()
    .peek(list::remove)
    .forEach(System.out::println);

This now yields:

7 6 2 5 8 4 1 0 9 3

And the list contains:

[8]

Eek. We didn't remove all the elements!? Free beers (and jOOQ stickers) go to anyone who solves this streams puzzler! This all appears quite random and subtle, so we can only suggest that you never actually modify a backing collection while consuming a stream. It just doesn't work.
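As an aside (my own addition, not from the original post): if the goal really is just to remove matching elements, Java 8 also added Collection.removeIf(), which mutates the list safely instead of streaming over it while modifying it at the same time:

import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class SafeRemoval {

    public static void main(String[] args) {
        List<Integer> list = IntStream.range(0, 10)
                                      .boxed()
                                      .collect(Collectors.toCollection(ArrayList::new));

        // removeIf() iterates and removes in one controlled pass,
        // so there is no concurrent modification.
        list.removeIf(i -> i % 2 == 0);

        System.out.println(list); // prints [1, 3, 5, 7, 9]
    }
}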
9. Forgetting to actually consume the stream

What do you think the following stream does?

IntStream.range(1, 5)
         .peek(System.out::println)
         .peek(i -> {
             if (i == 5)
                 throw new RuntimeException("bang");
         });

When you read this, you might think that it will print (1 2 3 4 5) and then throw an exception. But that's not correct. It won't do anything. The stream just sits there, never having been consumed. As with any fluent API or DSL, you might actually forget to call the "terminal" operation. This might be particularly true when you use peek(), as peek() is an awful lot like forEach(). This can happen with jOOQ just the same, when you forget to call execute() or fetch():

DSL.using(configuration)
   .update(TABLE)
   .set(TABLE.COL1, 1)
   .set(TABLE.COL2, "abc")
   .where(TABLE.ID.eq(3));

Oops. No execute().

10. Parallel stream deadlock

This is now a real goodie for the end! All concurrent systems can run into deadlocks if you don't properly synchronise things. While finding a real-world example isn't obvious, finding a forced example is. The following parallel() stream is guaranteed to run into a deadlock:

Object[] locks = { new Object(), new Object() };

IntStream
    .range(1, 5)
    .parallel()
    .peek(Unchecked.intConsumer(i -> {
        synchronized (locks[i % locks.length]) {
            Thread.sleep(100);

            synchronized (locks[(i + 1) % locks.length]) {
                Thread.sleep(50);
            }
        }
    }))
    .forEach(System.out::println);

Note the use of Unchecked.intConsumer(), which transforms the functional IntConsumer interface into an org.jooq.lambda.fi.util.function.CheckedIntConsumer, which is allowed to throw checked exceptions. Well, tough luck for your machine: those threads will be blocked forever! The good news is, it has never been easier to produce a schoolbook example of a deadlock in Java! For more details, see also Brian Goetz's answer to this question on Stack Overflow.

Conclusion

With streams and functional thinking, we'll run into a massive amount of new, subtle bugs. Few of these bugs can be prevented, except through practice and staying focused. You have to think about how to order your operations. You have to think about whether your streams may be infinite. Streams (and lambdas) are a very powerful tool, but one we need to get the hang of first.

Reference: Java 8 Friday: 10 Subtle Mistakes When Using the Streams API from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.

#NoEstimates

The main difficulty with forecasting the future is that it hasn't yet happened. – James Burke

When I first heard about #NoEstimates, I thought it was not only provocative but potentially damaging. The idea of working without estimates seems preposterous to many people. It did to me. I mean, how can you plan anything without estimates?

How we use estimates

When I started my career as a software developer, there was a running joke in the company: for each level of management, you should multiply an estimate by 1.3. If a developer said a task would take 3 months, the team leader would "refine" the estimate to 5 months, and at the program level it grew even more. A joke, but not remote from what I did later as a manager. Let's first get this out of the way: modifying an estimate is downright disrespectful. As a manager doing so, I'm claiming I know better than the people who are actually doing the work. Another thing that happens a lot is that estimates turn into commitments in the eyes of management, which the team now needs to meet. This post is not about these abusive things, although they do exist. Why did the estimation process work like that? A couple of assumptions:

- Work is comprised of learning and development, and these are neither linear nor sequential. Estimating the work is complex.
- Developers are an optimistic bunch. They don't think about things going wrong.
- They under-promise, so they can over-deliver.
- They cannot foresee all surprises, and we have more experience, so we'll introduce some buffers.
- When were they right on their estimates last time?

The results were task lengths in the project plan. So now we "know" the estimate is 6 months, instead of the original 3 months. Of course, we don't know, and we're aware of that; after all, we know that plans change over time. The difference is that now we have more confidence in the estimate, so we can plan ahead with dependent work.

Why we need estimates

The idea of estimates is to provide enough confidence in the organization to make decisions about the future, to answer questions like:

- Do we have enough capacity to take on more work after the project?
- Should we do this project at all?
- When should marketing and sales be ready for the launch?
- What should the people do until then?

These are very good business questions. The problem is our track record: we're horrible estimators (I point you to the last bullet above). We don't know much about the future. The whole process of massaging estimates so we can feel better about them makes it seem as though we're relying on a set of crystal balls, and we use these balls to make business decisions. There should be a better way.

So what are the alternatives? That is the interesting question. Once we understand that estimates are just one way of making business decisions, and a crappy one at that, we can have an open discussion. The alternative can be cost of delay. It can be empirical evidence to forecast against. It can be limited safe-to-fail experiments. It can be any combination or modification of these things, and it can be things we haven't discovered yet. #NoEstimates is not really about estimates: it's about making confident, rational, trustworthy decisions. I know what results estimates give; let's seek out better ones. For more information about #NoEstimates, you can read more on the blogs of Woody Zuill, Neil Killick and Vasco Duarte.

Reference: #NoEstimates from our JCG partner Gil Zilberfeld at the Geek Out of Water blog.

The Simple Story Paradox

I've recently been following the #isTDDDead debate between Kent Beck (@kentbeck), David Heinemeier Hansson (@dhh), and Martin Fowler (@martinfowler) with some interest. I think that it's particularly beneficial that ideas which are often taken for granted can be challenged in a constructive manner. That way you can figure out whether they stand up to scrutiny or fall flat on their faces. The discussion began with @dhh making the following points on TDD and test technique, which I hope I've got right. Firstly, the strict definition of TDD includes the following:

- TDD is used to drive unit tests
- You can't have collaborators
- You can't touch the database
- You can't touch the file system
- Fast unit tests, completing in the blink of an eye

He went on to say that you therefore drive your system's architecture from the use of mocks, and in that way the architecture suffers damage from the drive to isolate and mock everything, whilst the mandatory enforcement of the 'red, green, refactor' cycle is too prescriptive. He also stated that a lot of people mistakenly believe that you can't have confidence in your code, and can't deliver incremental functionality with tests, unless you go through this mandated, well-paved road of TDD. @Kent_Beck said that TDD didn't necessarily include heavy mocking, and the discussion continued…

I've paraphrased a little here; however, it was the difference in the interpretation and experience of using TDD that got me thinking. Was it really a problem with TDD, or was it with @dhh's experience of other developers' interpretations of TDD? I don't want to put words into @dhh's mouth, but it seems like the problem is the dogmatic application of the TDD technique even when it isn't applicable. I came away with the impression that, in certain development houses, TDD had degenerated into little more than Cargo Cult Programming.

The term Cargo Cult Programming seems to derive from a paper written by someone I found truly inspirational, the late Professor Richard Feynman. He presented a paper entitled Cargo Cult Science: Some Remarks on Science, Pseudoscience and Learning How Not to Fool Yourself as part of Caltech's 1974 commencement address. This later became part of his autobiography, Surely You're Joking, Mr. Feynman!, a book that I implore you to read. In it, Feynman highlights experiments from several pseudosciences, such as educational science, psychology, parapsychology and physics, where the scientific approach of keeping an open mind, questioning everything and looking for flaws in your theory has been replaced by belief, ritualism and faith: a willingness to take other people's results for granted in lieu of an experimental control. Taken from the 1974 paper, Feynman sums up Cargo Cult Science as:

"In the South Seas there is a cargo cult of people. During the war they saw airplanes land with lots of good materials, and they want the same thing to happen now. So they've arranged to imitate things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas (he's the controller) and they wait for the airplanes to land. They're doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn't work. No airplanes land.
So I call these things cargo cult science, because they follow all the apparent precepts and forms of scientific investigation, but they're missing something essential, because the planes don't land."

You can apply this idea to programming, where you'll find teams and individuals carrying out ritualised procedures and using techniques without really understanding the theory behind them, in the hope that they'll work because they are 'the right thing to do'.

In the second talk in the series, @dhh came up with an example of what he called "test-induced design damage", and at this I got excited, because it's something I've seen a number of times. The only reservation I had about the gist code was that, to me, it didn't seem to result from TDD; that argument seems a little limited. I'd say that it was more a result of Cargo Cult Programming, because in the instances where I've come across this example, TDD wasn't used. If you've seen the gist, you may know what I'm talking about; however, that code is in Ruby, which is something I have little experience of. In order to explore this in more detail, I thought I'd create a Spring MVC version and go from there. The scenario here is one where we have a very simple story: all the code does is read an object from the database and place it into the model for display. There's no additional processing, no business logic and no calculations to perform. The agile story would go something like this:

Title: View User Details
As an admin user
I want to click on a link
So that I can verify a user's details

In this 'proper' n-tier sample, I have a User model object, a controller and service layer, and a DAO, together with their interfaces and tests. And there's the paradox: you set out to write the best code you possibly can to implement the story, using the well-known and probably most popular MVC 'n'-layer pattern, and you end up with something that's total overkill for such a simple scenario. Something, as @dhh would say, is damaged. In my sample code I'm using the JdbcTemplate class to retrieve a user's details from a MySQL database, but any DB access API will do.
This is the sample code demonstrating the conventional, 'right' way of implementing the story; prepare to do a lot of scrolling…

public class User {

  public static User NULL_USER = new User(-1, "Not Available", "", new Date());

  private final long id;
  private final String name;
  private final String email;
  private final Date createDate;

  public User(long id, String name, String email, Date createDate) {
    this.id = id;
    this.name = name;
    this.email = email;
    this.createDate = createDate;
  }

  public long getId() {
    return id;
  }

  public String getName() {
    return name;
  }

  public String getEmail() {
    return email;
  }

  public Date getCreateDate() {
    return createDate;
  }
}

@Controller
public class UserController {

  @Autowired
  private UserService userService;

  @RequestMapping("/find1")
  public String findUser(@RequestParam("user") String name, Model model) {
    User user = userService.findUser(name);
    model.addAttribute("user", user);
    return "user";
  }
}

public interface UserService {
  public abstract User findUser(String name);
}

@Service
public class UserServiceImpl implements UserService {

  @Autowired
  private UserDao userDao;

  /**
   * @see com.captaindebug.cargocult.ntier.UserService#findUser(java.lang.String)
   */
  @Override
  public User findUser(String name) {
    return userDao.findUser(name);
  }
}

public interface UserDao {
  public abstract User findUser(String name);
}

@Repository
public class UserDaoImpl implements UserDao {

  private static final String FIND_USER_BY_NAME =
      "SELECT id, name, email, createdDate FROM Users WHERE name=?";

  @Autowired
  private JdbcTemplate jdbcTemplate;

  /**
   * @see com.captaindebug.cargocult.ntier.UserDao#findUser(java.lang.String)
   */
  @Override
  public User findUser(String name) {
    User user;
    try {
      FindUserMapper rowMapper = new FindUserMapper();
      user = jdbcTemplate.queryForObject(FIND_USER_BY_NAME, rowMapper, name);
    } catch (EmptyResultDataAccessException e) {
      user = User.NULL_USER;
    }
    return user;
  }
}

If you take a look at this code, paradoxically it looks fine; in fact, it looks like a classic textbook example of how to write an 'n'-tier MVC application. The controller passes responsibility for sorting out the business rules to the service layer, and the service layer retrieves data from the DB using a data access object, which in turn uses a RowMapper<> helper class to retrieve a User object. When the controller has a User object, it injects it into the model, ready for display. This pattern is clear and extensible: we're isolating the database from the service, and the service from the controller, by using interfaces, and we're testing everything using both JUnit with Mockito and integration tests. This should be the last word in textbook MVC coding. Or is it? Let's look at the code.

Firstly, there's the unnecessary use of interfaces. Some would argue that interfaces make it easy to switch database implementations, but who ever does that? Plus, modern mocking tools can create their proxies from class definitions, so unless your design specifically requires multiple implementations of the same interface, using interfaces is pointless. Next, there is UserServiceImpl, a classic example of the lazy class anti-pattern, because it does nothing except pointlessly delegate to the data access object.
Likewise, the controller is also pretty lazy, as it delegates to the lazy UserServiceImpl before adding the resulting User object to the model: in fact, all these classes are examples of the lazy class anti-pattern. Having written some lazy classes, they are now needlessly tested to death, even the non-event UserServiceImpl class. It’s only worth writing tests for classes that actually perform some logic.

public class UserControllerTest {

  private static final String NAME = "Woody Allen";

  private UserController instance;

  @Mock
  private Model model;

  @Mock
  private UserService userService;

  @Before
  public void setUp() throws Exception {
    MockitoAnnotations.initMocks(this);
    instance = new UserController();
    ReflectionTestUtils.setField(instance, "userService", userService);
  }

  @Test
  public void testFindUser_valid_user() {
    User expected = new User(0L, NAME, "aaa@bbb.com", new Date());
    when(userService.findUser(NAME)).thenReturn(expected);
    String result = instance.findUser(NAME, model);
    assertEquals("user", result);
    verify(model).addAttribute("user", expected);
  }

  @Test
  public void testFindUser_null_user() {
    when(userService.findUser(null)).thenReturn(User.NULL_USER);
    String result = instance.findUser(null, model);
    assertEquals("user", result);
    verify(model).addAttribute("user", User.NULL_USER);
  }

  @Test
  public void testFindUser_empty_user() {
    when(userService.findUser("")).thenReturn(User.NULL_USER);
    String result = instance.findUser("", model);
    assertEquals("user", result);
    verify(model).addAttribute("user", User.NULL_USER);
  }
}

public class UserServiceTest {

  private static final String NAME = "Annie Hall";

  private UserService instance;

  @Mock
  private UserDao userDao;

  @Before
  public void setUp() throws Exception {
    MockitoAnnotations.initMocks(this);
    instance = new UserServiceImpl();
    ReflectionTestUtils.setField(instance, "userDao", userDao);
  }

  @Test
  public void testFindUser_valid_user() {
    User expected = new User(0L, NAME, "aaa@bbb.com", new Date());
    when(userDao.findUser(NAME)).thenReturn(expected);
    User result = instance.findUser(NAME);
    assertEquals(expected, result);
  }

  @Test
  public void testFindUser_null_user() {
    when(userDao.findUser(null)).thenReturn(User.NULL_USER);
    User result = instance.findUser(null);
    assertEquals(User.NULL_USER, result);
  }

  @Test
  public void testFindUser_empty_user() {
    when(userDao.findUser("")).thenReturn(User.NULL_USER);
    User result = instance.findUser("");
    assertEquals(User.NULL_USER, result);
  }
}

public class UserDaoTest {

  private static final String NAME = "Woody Allen";

  private UserDao instance;

  @Mock
  private JdbcTemplate jdbcTemplate;

  @Before
  public void setUp() throws Exception {
    MockitoAnnotations.initMocks(this);
    instance = new UserDaoImpl();
    ReflectionTestUtils.setField(instance, "jdbcTemplate", jdbcTemplate);
  }

  @SuppressWarnings({ "unchecked", "rawtypes" })
  @Test
  public void testFindUser_valid_user() {
    User expected = new User(0L, NAME, "aaa@bbb.com", new Date());
    when(jdbcTemplate.queryForObject(anyString(), (RowMapper) anyObject(), eq(NAME))).thenReturn(expected);
    User result = instance.findUser(NAME);
    assertEquals(expected, result);
  }
  @SuppressWarnings({ "unchecked", "rawtypes" })
  @Test
  public void testFindUser_null_user() {
    when(jdbcTemplate.queryForObject(anyString(), (RowMapper) anyObject(), isNull())).thenReturn(User.NULL_USER);
    User result = instance.findUser(null);
    assertEquals(User.NULL_USER, result);
  }

  @SuppressWarnings({ "unchecked", "rawtypes" })
  @Test
  public void testFindUser_empty_user() {
    when(jdbcTemplate.queryForObject(anyString(), (RowMapper) anyObject(), eq(""))).thenReturn(User.NULL_USER);
    User result = instance.findUser("");
    assertEquals(User.NULL_USER, result);
  }
}

@RunWith(SpringJUnit4ClassRunner.class)
@WebAppConfiguration
@ContextConfiguration({ "file:src/main/webapp/WEB-INF/spring/appServlet/servlet-context.xml",
    "file:src/test/resources/test-datasource.xml" })
public class UserControllerIntTest {

  @Autowired
  private WebApplicationContext wac;

  private MockMvc mockMvc;

  @Before
  public void setUp() throws Exception {
    mockMvc = MockMvcBuilders.webAppContextSetup(wac).build();
  }

  @Test
  public void testFindUser_happy_flow() throws Exception {
    ResultActions resultActions = mockMvc.perform(get("/find1").accept(MediaType.ALL).param("user", "Tom"));
    resultActions.andExpect(status().isOk());
    resultActions.andExpect(view().name("user"));
    resultActions.andExpect(model().attributeExists("user"));
    resultActions.andDo(print());
    MvcResult result = resultActions.andReturn();
    ModelAndView modelAndView = result.getModelAndView();
    Map<String, Object> model = modelAndView.getModel();
    User user = (User) model.get("user");
    assertEquals("Tom", user.getName());
    assertEquals("tom@gmail.com", user.getEmail());
  }
}

In writing this sample code, I’ve added everything I could think of into the mix. You may think that this example is ‘over the top’ in its construction, especially with the inclusion of redundant interfaces, but I have seen code like this. The benefits of this pattern are that it follows a distinct design understood by most developers, and it’s clean and extensible. The downside is that there are lots of classes. More classes take more time to write and, if you ever have to maintain or enhance this code, they’re more difficult to get to grips with. So, what’s the solution? That’s difficult to answer. In the #IsTDDDead debate @dhh gives the solution as placing all the code in one class, mixing the data access with the population of the model. If you implement this solution for our user story you still get a User class, but the number of classes you need shrinks dramatically.
@Controller
public class UserAccessor {

  private static final String FIND_USER_BY_NAME = "SELECT id, name, email, createdDate FROM Users WHERE name=?";

  @Autowired
  private JdbcTemplate jdbcTemplate;

  @RequestMapping("/find2")
  public String findUser2(@RequestParam("user") String name, Model model) {
    User user;
    try {
      FindUserMapper rowMapper = new FindUserMapper();
      user = jdbcTemplate.queryForObject(FIND_USER_BY_NAME, rowMapper, name);
    } catch (EmptyResultDataAccessException e) {
      user = User.NULL_USER;
    }
    model.addAttribute("user", user);
    return "user";
  }

  private class FindUserMapper implements RowMapper<User>, Serializable {

    private static final long serialVersionUID = 1L;

    @Override
    public User mapRow(ResultSet rs, int rowNum) throws SQLException {
      User user = new User(rs.getLong("id"), //
          rs.getString("name"), //
          rs.getString("email"), //
          rs.getDate("createdDate"));
      return user;
    }
  }
}

@RunWith(SpringJUnit4ClassRunner.class)
@WebAppConfiguration
@ContextConfiguration({ "file:src/main/webapp/WEB-INF/spring/appServlet/servlet-context.xml",
    "file:src/test/resources/test-datasource.xml" })
public class UserAccessorIntTest {

  @Autowired
  private WebApplicationContext wac;

  private MockMvc mockMvc;

  @Before
  public void setUp() throws Exception {
    mockMvc = MockMvcBuilders.webAppContextSetup(wac).build();
  }

  @Test
  public void testFindUser_happy_flow() throws Exception {
    ResultActions resultActions = mockMvc.perform(get("/find2").accept(MediaType.ALL).param("user", "Tom"));
    resultActions.andExpect(status().isOk());
    resultActions.andExpect(view().name("user"));
    resultActions.andExpect(model().attributeExists("user"));
    resultActions.andDo(print());
    MvcResult result = resultActions.andReturn();
    ModelAndView modelAndView = result.getModelAndView();
    Map<String, Object> model = modelAndView.getModel();
    User user = (User) model.get("user");
    assertEquals("Tom", user.getName());
    assertEquals("tom@gmail.com", user.getEmail());
  }

  @Test
  public void testFindUser_empty_user() throws Exception {
    ResultActions resultActions = mockMvc.perform(get("/find2").accept(MediaType.ALL).param("user", ""));
    resultActions.andExpect(status().isOk());
    resultActions.andExpect(view().name("user"));
    resultActions.andExpect(model().attributeExists("user"));
    resultActions.andExpect(model().attribute("user", User.NULL_USER));
    resultActions.andDo(print());
  }
}

The solution above cuts the number of first-level classes to two: an implementation class and a test class. All test scenarios are catered for in a few end-to-end integration tests. These tests will access the database, but is that so bad in this case? If each trip to the DB takes around 20ms or less then they’ll still complete within a fraction of a second; that should be fast enough. In terms of enhancing or maintaining this code, one small single class is easier to learn than several even smaller classes. If you did have to add in a whole bunch of business rules or other complexity then changing this code into the ‘N’ layer pattern will not be difficult; however the problem is that if/when a change is necessary it may be given to an inexperienced developer who’ll not be confident enough to carry out the necessary refactoring.
The upshot is, and you must have seen this lots of times, that the new change gets shoehorned on top of the one-class solution, leading to a mess of spaghetti code. In implementing a solution like this, you may not be very popular, because the code is unconventional. That’s one of the reasons I think this single-class solution is something a lot of people would see as contentious. It’s this idea of a standard ‘right way’ and ‘wrong way’ of writing code, rigorously applied in every case, that has led to this perfectly good design becoming a problem. I guess it’s all a matter of horses for courses: choosing the right design for the right situation. If I were implementing a complex story, I wouldn’t hesitate to split up the various responsibilities, but in the simple case it’s just not worth it. I’ll therefore end by asking: if anyone has a better solution for the Simple Story Paradox shown above, please let me know.

[1] I’ve worked on one project in umpteen years of programming where the underlying database was changed to meet a customer requirement. That was many years and many thousands of miles away, and the code was written in C++ and Visual Basic.

The code for this blog is available on GitHub at https://github.com/roghughe/captaindebug/tree/master/cargo-cult

Reference: The Simple Story Paradox from our JCG partner Roger Hughes at the Captain Debug’s Blog blog.

gonsole weeks: content assist for git commands

While Eclipse ships with a comprehensive Git tool, it seems that for certain tasks many developers switch to the command line. This gave Frank and me the idea to start an open source project to provide a Git console integration for the IDE. What has happened so far during the gonsole weeks can be read in git init gonsole and eclipse egit integration. In recent days we have been working on content assist and have made the appearance more colorful. The color settings aren’t yet configurable, so you’ll have to live with what we found appropriate!

Furthermore, we have basic content assist in place. In its current state it helps you find the right Git commands. If you type ‘s’ followed by Ctrl+Space, for example, it will show you that there is a show, a show-ref, and a status command (a rough sketch of this kind of prefix matching follows at the end of this post). While the feature itself might not look very impressive, it provides the basis for more content assists. Showing the documentation for a selected command isn’t far away. The same goes for completion proposals for command arguments like branches and repositories, which might look like this:

The content assist code was written backed only by end-to-end tests, which turned out to be a quite effective way to work exploratively. Now we will re-construct the functionality test-driven, while the end-to-end tests ensure that we do not break the overall features. In the meantime you may want to try the software yourself and install it from this update site: http://rherrmann.github.io/gonsole/repository/ That’s it for now, let’s get back to TDD. And maybe next time we can show you more content assist features…

Reference: gonsole weeks: content assist for git commands from our JCG partner Rudiger Herrmann at the Code Affine blog.
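At its core, this kind of command-name content assist boils down to prefix matching against a known command list. Here is a minimal, self-contained sketch of the idea in plain Java; it is purely illustrative and not the actual gonsole code, and the command list is an assumed sample:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of prefix-based command proposals, not the gonsole implementation.
public class CommandProposals {

  // Assumed sample of git command names; the real tool would query the full set.
  private static final List<String> COMMANDS = Arrays.asList(
      "branch", "checkout", "commit", "show", "show-ref", "status");

  // Returns every command starting with the typed prefix.
  public static List<String> propose(String prefix) {
    List<String> proposals = new ArrayList<>();
    for (String command : COMMANDS) {
      if (command.startsWith(prefix)) {
        proposals.add(command);
      }
    }
    return proposals;
  }

  public static void main(String[] args) {
    // Matches the example in the post: 's' proposes show, show-ref and status.
    System.out.println(propose("s"));
  }
}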

Spring Professional Study Notes

Before we get down to my own additions to the existing resources mentioned in my certification guide post, I would like to reiterate all the resources that I used for my preparation:

- Course-ware
- Spring in Action, Third Edition by Craig Walls
- Jeanne’s study notes
- Spring Framework Reference Documentation

Bean life-cycle

Bean initialization process

One of the key areas of the certification is the life-cycle management provided by Spring. By the time of the exam you should know this diagram by heart. The picture describes the process of loading beans into the application context. The starting point is the definition of beans in a beans XML file (but it works for programmatic configuration as well). When the context gets initialized, all configuration files and classes are loaded into the application. Since all these sources contain different representations of beans, there needs to be a merging step that unifies bean definitions into one internal format. After initialization of the whole configuration, it is checked for errors and invalid configuration. When the configuration is validated, a dependency tree is built and indexed.

As a next step, Spring applies BeanFactoryPostProcessors (BFPP from now on). A BFPP allows for custom modification of an application context’s bean definitions. Application contexts can auto-detect BFPP beans in their bean definitions and apply them before any other beans get created. Classic examples of BFPPs include PropertyResourceConfigurer and PropertyPlaceholderConfigurer. When this phase is over and Spring owns all the final bean definitions, the process leaves the ‘happens once’ part and enters the ‘happens for each bean’ part.

Please note that even though I used graphical elements usually depicting UML elements, this picture is not UML compliant.

In the second phase, the first thing performed is the evaluation of all SpEL expressions. The Spring Expression Language (SpEL for short) is an expression language that supports querying and manipulating an object graph at runtime. Now that all SpEL expressions are evaluated, Spring performs dependency injection (constructor and setter). As a next step, Spring applies BeanPostProcessors (BPP from now on). A BPP is a factory hook that allows for custom modification of new bean instances. Application contexts can auto-detect BPP beans in their bean definitions and apply them to any beans subsequently created. Calling postProcessBeforeInitialization and postProcessAfterInitialization provides bean life-cycle hooks from outside the bean definition, with no regard to bean type. To specify life-cycle hooks that are bean specific, you can choose from three possible ways to do so (ordered with respect to execution priority; see the sketch below):

- @PostConstruct: annotating a method with the JSR-250 annotation @PostConstruct
- afterPropertiesSet: implementing the InitializingBean interface and providing an implementation for the afterPropertiesSet method
- init-method: defining the init-method attribute of the bean element in the XML configuration file

When a bean goes through this process, it gets stored in a store which is basically a map with the bean ID as key and the bean object as value. When all beans are processed the context is initialized, and we can call getBean on the context and retrieve bean instances.
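To make the ordering concrete, here is a minimal sketch combining all three initialization hooks on a single bean (the bean itself and its print statements are hypothetical, purely for illustration). Spring invokes the hooks in exactly the order of the list above:

import javax.annotation.PostConstruct;
import org.springframework.beans.factory.InitializingBean;

// Illustrative bean using all three initialization hooks.
// Invocation order: @PostConstruct, then afterPropertiesSet(), then the init-method.
public class LifecycleDemoBean implements InitializingBean {

  @PostConstruct
  public void annotatedInit() {
    System.out.println("1. @PostConstruct");
  }

  @Override
  public void afterPropertiesSet() {
    System.out.println("2. InitializingBean.afterPropertiesSet()");
  }

  // Referenced from the configuration, e.g.
  // <bean class="LifecycleDemoBean" init-method="customInit"/>
  public void customInit() {
    System.out.println("3. init-method");
  }
}

With all three present, the console output reads 1, 2, 3 in that order when the context starts up.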
Bean destruction process

The whole process begins when the ApplicationContext is closed (whether by calling the close method explicitly or from the container where the application is running). When this happens, all beans and the context itself are destroyed. Just like with bean initialization, Spring provides life-cycle hooks for bean destruction, with three possible ways to specify them (ordered with respect to execution priority):

- @PreDestroy: annotating a method with the JSR-250 annotation @PreDestroy
- destroy: implementing the DisposableBean interface and providing an implementation for the destroy method
- destroy-method: defining the destroy-method attribute of the bean element in the XML configuration file

However, there is one tricky part to this. When you are dealing with a prototype bean, an interesting behavior emerges upon context closing. After a prototype bean is fully initialized and all initializing life-cycle hooks are executed, the container hands over the reference and keeps no reference to the prototype bean from then on. This means that no destruction life-cycle hooks will be executed.

Request processing in Spring MVC

When it comes to Spring MVC, it is important to be familiar with the basic principles of how Spring turns requests into responses. It all begins with the ContextLoaderListener, which ties the life-cycle of the ApplicationContext to the life-cycle of the ServletContext. Then there is the DelegatingFilterProxy, a proxy for a standard servlet filter that delegates to a Spring-managed bean implementing the javax.servlet.Filter interface. What DelegatingFilterProxy does is delegate the Filter’s methods through to a bean obtained from the Spring application context. This enables the bean to benefit from the Spring web application context life-cycle support and configuration flexibility. The bean must implement javax.servlet.Filter and it must have the same name as that in the filter-name element.

DelegatingFilterProxy delegates all mapped requests to a central servlet that dispatches requests to controllers and offers other functionality facilitating the development of web applications. Spring’s DispatcherServlet, however, does more than just that. It is completely integrated with the Spring IoC container. Each DispatcherServlet has its own WebApplicationContext, which inherits all the beans already defined in the root WebApplicationContext. WebApplicationContext is an extension of the plain old ApplicationContext that owns a few specific beans. These beans provide a handy bundle of tools I named ‘common things’, including support for: resolving the locale a client is using, resolving themes of your web application, mapping exceptions to views, parsing multipart requests from HTML form uploads, and a few others.

After all these things are taken care of, the DispatcherServlet needs to determine where to dispatch the incoming request. To do so, the DispatcherServlet turns to a HandlerMapping which, in turn, maps requests to controllers. Spring’s handler mapping mechanism includes handler interceptors, which are useful when you want to apply specific functionality to certain requests, for example, checking for a principal.

Please note that even though I used graphical elements usually depicting UML elements, this picture is not UML compliant.

Before the execution reaches the controller, there are certain steps that must happen, like resolving various annotations. The main purpose of a HandlerAdapter is to shield the DispatcherServlet from such details. The HandlerExecutionChain object wraps the handler (controller or method). It may also contain a set of interceptor objects of type HandlerInterceptor. Each interceptor may veto the execution of the handling request.
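As a minimal sketch of such a veto (the principal check here is hypothetical, not part of the study notes), an interceptor can stop the chain by returning false from preHandle, in which case the controller is never called:

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

// Illustrative interceptor: veto any request that carries no authenticated principal.
public class PrincipalCheckInterceptor extends HandlerInterceptorAdapter {

  @Override
  public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler)
      throws Exception {
    if (request.getUserPrincipal() == null) {
      response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
      return false; // vetoes execution: processing stops here
    }
    return true; // continue down the handler execution chain
  }
}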
By the time execution reaches a controller or method, the HandlerAdapter has already performed dependency injection, type conversion, validation according to JSR-303 and so on. Inside the controller, you can call bean methods just like from any standard bean in your application. When the controller finishes its logic and fills the Model with relevant data, it returns a string (though it could be a special object type as well) that is later resolved to a View object. In order to do so, Spring performs a mapping of the returned string to a view. Views in Spring are addressed by a logical view name and are resolved by a view resolver. In the end, Spring inserts the model object into the view and renders the result, which is in turn returned to the client in the form of a response.

Remote method invocation protocols

When it comes to remote method invocation and the protocols Spring supports, it is useful to know the basic properties and limitations of said protocols:

Protocol      Port              Serialization
RMI           1099 + ‘random’   IO Serializable
Hessian       80 + 443          Binary XML (compressed)
Burlap        80 + 443          Plain XML (overhead!)
HttpInvoker   80 + 443          IO Serializable

Reference: Spring Professional Study Notes from our JCG partner Jakub Stas at the Jakub Stas blog.
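To make the table above a little more concrete, here is a hedged sketch of exposing a service over RMI with Spring remoting; the exporter class and its setters are Spring’s, while the UserService interface is a placeholder assumed purely for illustration:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.remoting.rmi.RmiServiceExporter;

// Minimal sketch: export a (placeholder) UserService over RMI on the default registry port.
@Configuration
public class RmiServerConfig {

  @Bean
  public RmiServiceExporter rmiExporter(UserService userService) {
    RmiServiceExporter exporter = new RmiServiceExporter();
    exporter.setServiceName("UserService");           // bound as rmi://host:1099/UserService
    exporter.setServiceInterface(UserService.class);  // the remote contract
    exporter.setService(userService);                 // the actual implementation bean
    exporter.setRegistryPort(1099);                   // the RMI registry port from the table above
    return exporter;
  }
}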

5 tips to improve performance in Android applications

If your application has many time-intensive operations, here are some tricks to improve performance and provide a better experience for your users.

1. Operations that can take a long time should run on their own thread and not on the main (UI) thread. If an operation takes too long while it runs on the main thread, the Android OS may show an Application Not Responding (ANR) dialog; from there, the user may choose to wait or to close your application. This message is not very user-friendly, and your application should never have an occasion to trigger it. In particular, web service calls to an external API are especially sensitive to this and should always be on their own thread, since a network slowdown or a problem on their end can trigger an ANR, blocking the execution of your application. You can also take advantage of threads to pre-calculate graphics that are displayed later on the main thread.

2. If your application requires a lot of calls to external APIs, avoid sending the calls again and again when the wifi and cellular networks are not available. It is a waste of resources to prepare the whole request, send it off and wait for a timeout when it is sure to fail. You can poll the status of the connection regularly, switch to an offline mode if no network is available, and reactivate it as soon as the network comes back.

3. Take advantage of caching to reduce the impact of expensive operations. Calculations that are long but whose result won’t change, or graphics that will be reused, can be kept in memory. You can also cache the result of calls to external APIs in a local database, so you won’t depend on that resource being available at all times. A call to a local database can be faster, will not use up your users’ data plan and will work even if the device is offline. On the other hand, you should plan a way to refresh that data from time to time, for example keeping a time and date stamp and refreshing the data when it’s getting old.

4. Save the current state of your activities to avoid having to recalculate it when the application is opened again. The data loaded by your activities, or the result of any long-running operation, should be saved when the onSaveInstanceState event is raised and restored when the onRestoreInstanceState event is raised (see the sketch after this list). Since the state is saved with a serializable Bundle object, the easiest way to manage state is to have a serializable state object containing all the information needed to restore the activity, so only this object needs to be saved. The information entered by the user in View controls is already saved automatically by the Android SDK and does not need to be kept in the state. Remember, the activity state may be lost when the user leaves your application or rotates the screen, not only when the user navigates to another activity.

5. Make sure your layouts are as simple as possible, without unnecessary layout elements. When the view hierarchy gets too deep, the UI engine has trouble traversing all the views and calculating the position of all the elements. For example, if you create a custom control and include it in another layout element, it can add an extra view that is not necessary to display the UI and that will slightly slow down the application. You can analyse your view hierarchy to see where your layout can be flattened with the Hierarchy Viewer tool.
The tool can be opened from Eclipse using the Dump View Hierarchy for UI Automator icon in the DDMS perspective, or by launching the standalone tool hierarchyviewer in the <sdk>\tools\ directory.

If you have other unexplained slowdowns in your application, you should profile it to help identify bottlenecks. In that case, you should take a look at my article about profiling Android applications.

Reference: 5 tips to improve performance in Android applications from our JCG partner Cindy Potvin at the Web, Mobile and Android Programming blog.
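For tip 4, here is a minimal sketch of the save/restore pattern described above; the ActivityState class and its field are hypothetical, assumed only for illustration:

import java.io.Serializable;

import android.app.Activity;
import android.os.Bundle;

public class MainActivity extends Activity {

  // Hypothetical serializable holder for everything needed to rebuild the screen.
  public static class ActivityState implements Serializable {
    private static final long serialVersionUID = 1L;
    public String loadedData; // e.g. the result of a long-running operation
  }

  private ActivityState state = new ActivityState();

  @Override
  protected void onSaveInstanceState(Bundle outState) {
    super.onSaveInstanceState(outState);
    // Save the whole state object instead of individual fields.
    outState.putSerializable("state", state);
  }

  @Override
  protected void onRestoreInstanceState(Bundle savedInstanceState) {
    super.onRestoreInstanceState(savedInstanceState);
    // Restore the state instead of recomputing it.
    state = (ActivityState) savedInstanceState.getSerializable("state");
  }
}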

MongoDB and Grails

So recently I had a requirement to store unstructured JSON data that was coming back from a web service. The web service was returning various soccer teams from around the world. Amongst the data contained in most of the soccer teams was a list of soccer players who were part of the team. Some of the teams had 12 players, some had 20, some had even more than 20. The players had their own attributes, some easy to predict, some impossible. For the entire data structure, the only attribute I knew would definitely be coming back was the team’s teamname. After that, it depended on each team.

{ "teams": [{
    "teamname": "Kung fu pirates",
    "founded": 1962,
    "players": [
      {"name": "Robbie Fowler", "age": 56},
      {"name": "Larry David", "age": 55}
      ...
    ]},
  { "teamname": "Climate change observers",
    "founded": 1942,
    "players": [
      {"name": "Jim Carrey", "age": 26},
      {"name": "Carl Craig", "age": 35}
      ...
    ]},
  ...
]}

There are several different ways to store this data. I decided to go for MongoDB. Main reasons:

- I wanted to store the data in as close a format as possible to the JSON responses I was getting back from the web service. This would mean less code, less bugs, less hassle.
- I wanted something with a low learning curve, good documentation and good industry support (Stack Overflow threads, blog posts etc.).
- Something that had a Grails plugin that was documented, had footfall and looked like it was maintained.
- Features such as text stemming were nice-to-haves. Some support would have been nice, but it didn’t need to be cutting edge.
- It would have to support good JSON search facilities, indexing, etc.

MongoDB ticked all the boxes. So this is how I got it all working. After I installed MongoDB as per Mongo’s instructions and installed the MongoDB Grails plugin, it was time to write some code. Now here’s the neat part: there was hardly any code. I created a domain object for the Team.

class Team implements Serializable {

    static mapWith = "mongo"

    static constraints = {
    }

    static mapping = {
        teamname index: true
    }

    String teamname
    List players

    static embedded = ['players']
}

Regarding the Team domain object:

- The first point to make is that I didn’t even need to create it. The reason I did use this approach was so that I could use GORM-style APIs such as Team.find() if I wanted to.
- Players are just a List of objects. I didn’t bother creating a Player object. I like the idea of always ensuring the players for the team are in a List data structure, but I didn’t see the need to type anything further.
- The players are marked as embedded. This means the team and players are stored in a single denormalised data structure. This allows, amongst other things, the ability to retrieve and manipulate the team data in a single database operation.
- I marked the teamname as an index.
- I marked the domain object with: static mapWith = "mongo". This means that if I were also using another persistence solution with my GORM (Postgres, MySQL, etc.) I am telling the GORM that this Team domain class is only for Mongo: keep your relational hands off it. See here for info. Note: this is a good reminder that the GORM is a higher level of abstraction than Hibernate. It is possible to have a GORM object that doesn’t use Hibernate but instead goes to a NoSQL store and doesn’t go near Hibernate.

You’ll note that in the JSON there are team attributes such as founded that haven’t been explicitly declared in the Team class. This is where Groovy and NoSQL play really well with each other.
We can use some of the meta-programming features of Groovy to dynamically add attributes to the Team domain object.

private List importTeams(int page) {
    def rs = restClient.get("teams")   // invoke web service
    List teams = rs.responseData.teams.collect { teamResponse ->
        Team team = new Team(teamname: teamResponse.teamname)
        team.save()   // save is needed before we can dynamically add attributes
        teamResponse.each { key, value ->
            team["$key"] = value
        }
        team.save()   // the second save ensures the dynamic attributes get saved
        return team
    }
    log.info("importTeams(), teams=${teams}")
    teams
}

Ok, so the main points in our importTeams() method:

- After getting our JSON response we run a collect function on the teams array. This will create the Team domain objects.
- We use some meta-programming to dynamically add any attribute that comes back in the JSON team structure to the Team object. Note: we have to invoke save() first to be able to dynamically add attributes that are not declared in the Team domain object.
- We also have to invoke save() again to ensure the dynamically added attributes actually get saved. This may change in future versions of the MongoDB plugin, but it is what I had to do to get it working (I was using MongoDB plugin version 3.0.1).

So what’s next? Write some queries. Ok, so two choices here. First, you can use the dynamic finders and criteria queries with the GORM thanks to the MongoDB plugin. But I didn’t do this. Why? I wanted to write the queries as close as possible to how they are supposed to be written in Mongo. There were a number of reasons for this:

- A leaky abstraction is inevitable here. Sooner or later you are going to have to write a query that the GORM won’t do very well. Better to approach this head on.
- I wanted to be able to run the queries in the Mongo console first, check explain plans if I needed to, and then use the same query in my code. It is easier to do this if I write the query directly, without having to worry about what the GORM is going to do.

The general format of queries is:

teams = Team.collection.find(queryMap)   // where queryMap is a map of fields and the values you are searching for

Ok, some examples of queries…

Team.collection.find(["teamname": "hicks"])   // find a team named hicks
Team.collection.find(["teamname": "hicks", "players.name": "Robbie Fowler"])   // as above, but the team must also have a Robbie Fowler
Team.collection.find(["players.name": "Robbie Fowler"])   // any team that has a Robbie Fowler
Team.collection.find(["teamname": "hicks", "players.name": "Robbie Fowler"], ["players.$": 1])   // as above, but returns the matching player only (the projection is the second argument)
Team.collection.find(["teamname": ~/ick/])   // match on the regular expression /ick/, i.e. any team name that contains the text ick

Anything else? Yeah, sure. I wanted to connect to a Mongo instance on my own machine when in development, but to a Mongo machine on a dedicated server in the other environments (CI, stage, production).
To do this, I updated my DataSource.groovy as:

environments {
    development {
        grails {
            mongo {
                host = "localhost"
                port = 27017
                username = "test"
                password = "test"
                databaseName = "mydb"
            }
        }
        dataSource {
            dbCreate = "create-drop"   // one of 'create', 'create-drop', 'update', 'validate', ''
            url = "jdbc:h2:mem:devDb;MVCC=TRUE;LOCK_TIMEOUT=10000"
        }
    }
    ci {
        println("In bamboo environment")
        grails {
            mongo {
                host = "10.157.192.99"
                port = 27017
                username = "shop"
                password = "shop"
                databaseName = "tony"
            }
        }
        dataSource {
            dbCreate = "create-drop"   // one of 'create', 'create-drop', 'update', 'validate', ''
            url = "jdbc:h2:mem:devDb;MVCC=TRUE;LOCK_TIMEOUT=10000"
        }
    }
}

You’ll see I have configured multiple datasources: MongoDB alongside a relational datasource (an in-memory H2 database in this snippet). I am not advocating using both MongoDB and a relational database, just pointing out that it is possible. The other point is that the MongoDB configuration always sits under:

grails {
    mongo {
        ...
    }
}

Ok, this is a simple introductory post; I will try to post up something more sophisticated soon. Until the next time, take care of yourselves.

Reference: MongoDB and Grails from our JCG partner Alex Staveley at the Dublin’s Tech Blog blog.