
Java 8 Lambdas vs Groovy Closures Compactness: Grouping And Summing

Java 8 features lambdas, which are similar to a construct Groovy has had for some time: closures. In Groovy we could already do this:

    def list = ['a', 'b', 'c']
    print list.collect { it.toUpperCase() }
    // [A, B, C]

where { it.toUpperCase() } is the closure. In Java 8 we can now achieve the same functionality in a concise way:

    list.stream().map( s -> s.toUpperCase() )

Although you could argue that with proper use of the new Stream API, bulk operations and method references, at least the intent of a piece of code is conveyed more clearly now – Java's verbosity can still cause sore eyes. Here are some other examples.

Some Groovy animals

    class Animal {
        String name
        BigDecimal price
        String farmer
        String toString() { name }
    }

    def animals = []
    animals << new Animal(name: "Buttercup", price: 2, farmer: "john")
    animals << new Animal(name: "Carmella", price: 5, farmer: "dick")
    animals << new Animal(name: "Cinnamon", price: 2, farmer: "dick")

Example 1: Summing the total price of all animals

Groovy

    assert 9 == animals.sum { it.price } // or animals.price.sum()

What Groovy you see here:
- sum can be called on a List and optionally passed a closure defining the property of "it" – the animal being iterated over – to sum on.
- Alternatively, sum can be called on a List without any arguments, which is equivalent to invoking the "plus" method on all items in the collection.

Java 8

    Optional<BigDecimal> sum = animals
        .stream()
        .map(Animal::getPrice)
        .reduce((l, r) -> l.add(r));
    assert BigDecimal.valueOf(9).equals(sum.get());

(Note that comparing BigDecimals with == would only compare references; equals is needed here.)

What Java you see here:
- Through the Stream API's stream method we can create a pipeline of operations, such as map and reduce.
- The argument to the map operation is a method reference to the getPrice() method of the currently iterated animal. We could also replace this part with the expression a -> a.getPrice().
- reduce is a general reduction operation (also called a fold) in which the BigDecimals of the prices are added up.
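Assembled into a complete, runnable class, the Java side of Example 1 looks like this. The Animal POJO below is a hypothetical Java mirror of the Groovy class above, written for illustration:

```java
import java.math.BigDecimal;
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

class StreamSumDemo {
    // Hypothetical Java counterpart of the Groovy Animal class.
    static class Animal {
        private final String name;
        private final BigDecimal price;
        private final String farmer;
        Animal(String name, BigDecimal price, String farmer) {
            this.name = name; this.price = price; this.farmer = farmer;
        }
        BigDecimal getPrice() { return price; }
        String getFarmer() { return farmer; }
        @Override public String toString() { return name; }
    }

    static final List<Animal> ANIMALS = Arrays.asList(
            new Animal("Buttercup", BigDecimal.valueOf(2), "john"),
            new Animal("Carmella", BigDecimal.valueOf(5), "dick"),
            new Animal("Cinnamon", BigDecimal.valueOf(2), "dick"));

    static BigDecimal totalPrice() {
        // Map each animal to its price, then fold the prices together.
        Optional<BigDecimal> sum = ANIMALS.stream()
                .map(Animal::getPrice)
                .reduce((l, r) -> l.add(r));
        return sum.orElse(BigDecimal.ZERO);
    }

    public static void main(String[] args) {
        System.out.println(totalPrice()); // prints 9
    }
}
```

The Optional returned by reduce forces the caller to decide what an empty list should sum to, which the Groovy version sidesteps.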
This also gives us an Optional with the total sum. By the way, if we were to use a double for price – which we don't, because I want to give a good example – we could have used a DoubleStream with sum() on it, e.g.

    double sum = animals.stream().mapToDouble(Animal::getPrice).sum();

Example 2: Grouping all animals by farmer

Groovy

    def animalsByFarmer = animals.groupBy { it.farmer }
    // [john:[Buttercup], dick:[Carmella, Cinnamon]]

Java 8

    Map<String, List<Animal>> animalsByFarmer = animals
        .stream()
        .collect(Collectors.groupingBy(Animal::getFarmer));
    // {dick=[Carmella, Cinnamon], john=[Buttercup]}

Example 3: Summing the total price of all animals grouped by farmer

Groovy

    def totalPriceByFarmer = animals
        .groupBy { it.farmer }
        .collectEntries { k, v -> [k, v.price.sum()] }
    // [john:2, dick:7]

What Groovy you see here:
- collectEntries iterates through the "groupBy" map, transforming each map entry using the k, v -> ... closure and returning a map of the transformed entries.
- v.price is actually a List of prices (per farmer) – as in example 1 – on which we can call sum().

Java 8

    Map<String, BigDecimal> totalPriceByFarmer = animals
        .stream()
        .collect(Collectors.groupingBy(
            Animal::getFarmer,
            Collectors.reducing(
                BigDecimal.ZERO,
                Animal::getPrice,
                BigDecimal::add)));
    // {dick=7, john=2}

This Java code again yields the same results. Since IDEs – Eclipse at least – don't format this properly, you'll have to indent these kinds of constructions for readability yourself.

Reference: Java 8 Lambdas vs Groovy Closures Compactness: Grouping And Summing from our JCG partner Ted Vinke at the Ted Vinke's Blog blog.
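For completeness, Examples 2 and 3 can likewise be packaged as one runnable class (again with a hypothetical Java Animal POJO standing in for the Groovy one):

```java
import java.math.BigDecimal;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

class GroupingDemo {
    // Hypothetical Java counterpart of the Groovy Animal class.
    static class Animal {
        private final String name;
        private final BigDecimal price;
        private final String farmer;
        Animal(String name, BigDecimal price, String farmer) {
            this.name = name; this.price = price; this.farmer = farmer;
        }
        BigDecimal getPrice() { return price; }
        String getFarmer() { return farmer; }
        @Override public String toString() { return name; }
    }

    static final List<Animal> ANIMALS = Arrays.asList(
            new Animal("Buttercup", BigDecimal.valueOf(2), "john"),
            new Animal("Carmella", BigDecimal.valueOf(5), "dick"),
            new Animal("Cinnamon", BigDecimal.valueOf(2), "dick"));

    // Example 2: group animals per farmer.
    static Map<String, List<Animal>> animalsByFarmer() {
        return ANIMALS.stream()
                .collect(Collectors.groupingBy(Animal::getFarmer));
    }

    // Example 3: sum prices per farmer with a downstream reducing collector.
    static Map<String, BigDecimal> totalPriceByFarmer() {
        return ANIMALS.stream()
                .collect(Collectors.groupingBy(
                        Animal::getFarmer,
                        Collectors.reducing(
                                BigDecimal.ZERO,
                                Animal::getPrice,
                                BigDecimal::add)));
    }

    public static void main(String[] args) {
        System.out.println(animalsByFarmer());    // john -> [Buttercup], dick -> [Carmella, Cinnamon]
        System.out.println(totalPriceByFarmer()); // john -> 2, dick -> 7
    }
}
```

Note that the reducing collector needs an identity value (BigDecimal.ZERO) precisely because, per group, it plays the role the Optional played in Example 1.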

SpringSource Certified Spring Professional

When I first heard about this certification I was really excited and wanted to find out everything I could about the exam before starting any initiative to pass it. What I found, however, was a complete lack of resources, guides, information or mock exams of any kind. The only reasonable resource online I was able to find before my exam was a blog post about certification experiences by Jeanne Boyarsky on her well-known blog. However, that information is almost 3 years old and a few things have changed since then, so please allow me to update you on the current state of the certification process. The first thing that struck me was the ordering process, before any certification-related activity even started. The whole ordering process was really long and required extensive email communication with VMware's support (the support people were actually quite helpful and made it more bearable). Given that my employer at the time was in VMware's partner program, I don't know what the experience is like for regular customers. It might be caused by VMware's recent acquisition of SpringSource. To paint a complete picture, I was ordering the certification for myself and two other colleagues. So beware, there might be dragons there.

About certification

Back in the day, there were two possible ways of getting certified: attending the course or being a grandfathered candidate. Grandfathering is no longer allowed, so the only way to get the certification is to attend the course and afterwards pass the exam. When you register for the course you pay about €1,375 (around $1,900). This price covers the course (plus the courseware) and one free exam attempt. Any subsequent retake attempt costs $150. Unlike Oracle's Java Programmer certification, there are no requirements for a prior certification or course (besides experience with Java development).
The course

A mandatory part of the certification is a 4-day course that covers the following topics:
- Introduction to Spring
- Advanced XML Dependency Injection
- Annotation-Based Dependency Injection
- Java-Based Dependency Injection
- Bean Life Cycle: How Does Spring Work Internally?
- Testing a Spring-Based Application
- Aspect-Oriented Programming
- Data Access and JDBC with Spring
- Database Transactions with Spring
- Integrating Spring with JPA and Hibernate
- Spring in a Web Application (MVC)
- Spring Security
- Advanced Topics (remoting, JMS, JMX)

Each attendee gets a starter kit. It contains the Core Spring Lecture Manual, full of the slides presented throughout the course. Most of the chapters are required for the certification; the last few chapters are there to help you understand issues related to the main topics. The second book you will receive is a notebook for your notes and pictures/diagrams. These two books come with a USB stick that contains all materials in digital form as well as all the labs necessary to complete the course. A tablet is not part of the kit, but I can highly recommend using one if you have it, since all the slides are available in PDF form. Please note that the course is prepared in such a way that even a person with no prior Spring experience can complete it with proper studying before the exam. Having only a year's worth of experience with Spring when I started the ordering process, I was really glad that even if you have never worked with, for example, AOP, you could practice the exercises prepared in the labs. The course is divided into 50% lectures and 50% lab time. This brings me to the labs – a series of workspaces. Each workspace comes in two flavors: an incomplete/non-functional workspace and a fully working workspace providing a solution to all problems simulated in the previous one. This is an essential part of the course, since you get hands-on experience with all of the topics.
The course is led by an experienced Spring lecturer who is able to answer any question, and if not, they are willing to try out or simulate any twisted idea you can think of (based on the course I attended). I must say that the course was a really pleasant experience for me. I found the materials easy to work with (mostly because their business domain remains the same throughout the course). Even if you happen to be an experienced Spring user, there are always some aspects of the Spring framework you might (willingly) have no experience with. The course provides you with enough theory and examples to gain a basic orientation in a given area. The ultimate goal of your attendance should be a state in which you understand everything in your course book. If you feel uncertain, ask questions – the course trainer will surely explain any problematic areas.

The exam

The exam is 90 minutes long, including 2 minutes for legal agreements and the like. You are expected to answer 50 questions in 88 minutes, which I found to be plenty of time (it took me about an hour, with one final revision at the end). The passing score for the exam is 76%. Always read every single word in a question very carefully. Some questions ask which of the following is/is not correct, and you have to be absolutely sure that you know what you are asked to do. There were only single-choice and multiple-choice questions. Most questions addressed the container and testing, AOP and transactions (with a smaller number of questions from other areas). There are no questions about REST (contrary to Jeanne's version of the test). Don't expect tricky, syntax-heavy questions like in Oracle's Java Programmer certification. Questions are straightforward and you either know the answer or not. The exam software allows you to browse questions, mark them for review and do a final review. You will receive the results of your exam right after submitting your answers. You will learn your overall score, section scores and final grade.
Preparation

When it comes to exam preparation, I found 3 weeks to be enough to get myself prepared (passing with 94%). My first move was to ditch the courseware for a week and hit a well-written book about Spring called Spring in Action, Third Edition by Craig Walls. Having read the first and second parts of the book (excluding the chapter about Spring Web Flow), I gained a solid understanding of all the mentioned areas of Spring. This was my first week of studies after the course. After this I decided to study the official Spring documentation to gain even more insight into the container's capabilities (not necessary for the exam, but definitely worth reading for real-world applications – there is no such thing as over-studying). As a final step I did all the labs and went through the course book. As additional resources I would like to include the works of others sharing their experience with this certification.

My study plan:
- Week 1: 4-day course
- Week 2: Read the first two parts of Spring in Action, Third Edition
- Week 3: Read most of the IoC container documentation
- Week 4: Read the course book and did all the labs
- Week 5: Took the exam

Helpful notes from fellow bloggers:
- jeanne's core spring 3 certification experiences
- After the Spring Core 3.0 Exam
- Spring Professional Certification Exam from Java CodeBook

Study notes

Even though the courseware combined with Spring in Action, the official documentation and Jeanne's study notes provided quite robust preparation material, I felt the need to create a small addition to these. There were certain areas that needed to be clarified for me, so I made my own addition to those study notes. It is rather brief and short (given that I had quite a lot of material already). However, I find them to be a valuable resource since they provide a big picture of certain areas of Spring: my study notes.

Certificate

I find it slightly disappointing that VMware does not provide a hard copy of the certificate. So do not expect to receive any mail from them.
The only thing you will receive is a soft copy that you can print out yourself. Since I am still waiting for mine, I can't provide more information. (Update 12.04.2014: It seems that unless you explicitly ask for a PDF version of your certificate you won't receive anything. So I wrote to VMware's certification support, provided all documents and scores regarding my exam, and got my certificate in a day or two.)

Conclusion

Definitely a positive experience – all the way. When I started the certification process my Spring knowledge was quite limited and I lacked understanding of some core Spring principles and concepts. However, after the course I felt more confident and grew my knowledge by studying the resources introduced above. I can honestly say that I now use many things learned there in my daily work. It is a rather expensive certification, but definitely worth the trouble. So if you are planning on taking the exam, I wish you good luck and hope this post helps you on your way to becoming a SpringSource Certified Spring Professional.

Reference: SpringSource Certified Spring Professional from our JCG partner Jakub Stas at the Jakub Stas blog.

An Alternative to the Twitter River – Index Tweets in Elasticsearch with Logstash

For some time now I've been using the Elasticsearch Twitter river for streaming conference tweets to Elasticsearch. The river runs on an Elasticsearch node, tracks the Twitter streaming API for keywords and directly indexes the documents in Elasticsearch. As rivers are about to be deprecated, it is time to move on to the recommended replacement: Logstash. With Logstash the retrieval of the Twitter data is executed in a different process, possibly even on a different machine. This helps in scaling Logstash and Elasticsearch separately.

Installation

The installation of Logstash is nearly as easy as that of Elasticsearch, though you can't start it without a configuration that tells it what you want it to do. You can download it, unpack the archive, and there are scripts to start it. If you are fine with using the embedded Elasticsearch instance you don't even need to install that separately. But you need to have a configuration file in place that tells Logstash exactly what to do.

Configuration

The configuration for Logstash normally consists of three sections: the input, optional filters and the output section. There is a multitude of existing components available for each of those. The structure of a config file looks like this (taken from the documentation):

    # This is a comment. You should use comments to describe
    # parts of your configuration.
    input {
      ...
    }

    filter {
      ...
    }

    output {
      ...
    }

We are using the twitter input, the elasticsearch_http output and no filters.

Twitter

As with any Twitter API interaction you need to have an account and configure the access tokens.

    input {
      twitter {
        # add your data
        consumer_key => ""
        consumer_secret => ""
        oauth_token => ""
        oauth_token_secret => ""
        keywords => ["elasticsearch"]
        full_tweet => true
      }
    }

You need to pass in all the credentials as well as the keywords to track.
By enabling the full_tweet option you can index a lot more data; by default there are only a few fields, and interesting information like hashtags or mentions is missing. The Twitter river seems to use different field names than the ones that are sent with the raw tweets, so it doesn't seem to be possible to easily index Twitter Logstash data along with data created by the Twitter river. But it should be no big deal to change the Logstash field names as well with a filter.

Elasticsearch

There are three plugins that provide an output to Elasticsearch: elasticsearch, elasticsearch_http and elasticsearch_river. elasticsearch provides the opportunity to bind to an Elasticsearch cluster as a node or via transport, elasticsearch_http uses the HTTP API and elasticsearch_river communicates via the RabbitMQ river. The http version lets you use different Elasticsearch versions for Logstash and Elasticsearch; this is the one I am using. Note that the elasticsearch plugin also provides an option for setting the protocol to http that also seems to work.

    output {
      elasticsearch_http {
        host => "localhost"
        index => "conf"
        index_type => "tweet"
      }
    }

In contrast to the Twitter river, the Logstash plugin does not create a special mapping for the tweets. I didn't go through all the fields, but for example the coordinates don't seem to be mapped correctly to geo_point, and some fields are analyzed that probably shouldn't be (urls, usernames). If you are using those you might want to prepare your index by supplying it with a custom mapping. By default tweets will be pushed to Elasticsearch every second, which should be enough for any analysis. You can even think about reducing this with the property idle_flush_time.
Running

Finally, when all of the configuration is in place, you can execute Logstash using the following command (assuming the configuration is in a file twitter.conf):

    bin/logstash agent -f twitter.conf

Nothing is left to do but wait for the first tweets to arrive in your local instance at http://localhost:9200/conf/tweet/_search?q=*:*&pretty=true (note the & between the query string parameters). For the future it would be really useful to prepare a mapping for the fields and a filter that removes some of the unused data. For now you have to check what you would like to use of the data and prepare a mapping in advance.

Reference: An Alternative to the Twitter River – Index Tweets in Elasticsearch with Logstash from our JCG partner Florian Hopf at the Dev Time blog.
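If you prefer checking from code instead of the browser, a small plain-Java client can issue the same search query. This is a sketch: the class and helper name are mine, and it assumes Elasticsearch is listening locally on port 9200.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

class TweetSearchCheck {
    // Build the _search URL for the given index and type; note the '&'
    // separating the query string parameters.
    static String buildSearchUrl(String host, String index, String type, String query) {
        return "http://" + host + "/" + index + "/" + type
                + "/_search?q=" + query + "&pretty=true";
    }

    public static void main(String[] args) throws Exception {
        String url = buildSearchUrl("localhost:9200", "conf", "tweet", "*:*");
        System.out.println("GET " + url);
        try {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(url).openConnection();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                in.lines().forEach(System.out::println); // raw JSON search response
            }
        } catch (IOException e) {
            // No local Elasticsearch running; nothing to show.
            System.out.println("Elasticsearch not reachable: " + e.getMessage());
        }
    }
}
```

The response is the standard search JSON, so the hits.total field tells you how many tweets have been indexed so far.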

Database primary key flavors

Types of primary keys

Every database table needs a primary key. The primary key uniquely identifies a row within a table and is therefore bound by the following constraints:
- UNIQUE
- NOT NULL
- IMMUTABLE

When choosing a primary key we must take the following aspects into consideration:
- the primary key may be used for joining other tables through a foreign key relationship
- the primary key usually has an associated default index, so the more compact the data type, the less space the index will take
- a simple key performs better than a compound one
- the primary key assignment must ensure uniqueness even in highly concurrent environments

When choosing a primary key generator strategy, the options are:
- natural keys, using a column combination that guarantees row uniqueness
- surrogate keys, which are generated independently of the current row data

Natural keys

Natural keys' uniqueness is enforced by external factors (e.g. person unique identifiers, social security numbers, vehicle identification numbers). Natural keys are convenient because they have an outside-world equivalent and don't require any extra database processing. We can therefore know the primary key even before inserting the actual row into the database, which simplifies batch inserts. If the natural key is a single numeric value, the performance is comparable to that of surrogate keys. For compound keys we must be aware of possible performance penalties:
- compound key joins are slower than single-key ones
- compound key indexes require more space than their single-key counterparts

Non-numerical keys are less efficient than numeric ones (integer, bigint), for both indexing and joining. A CHAR(17) natural key (e.g. a vehicle identification number) occupies 17 bytes, as opposed to 4 bytes (32-bit integer) or 8 bytes (64-bit bigint). The initial schema design uniqueness assumptions may not hold true forever.
Let's say we'd used one specific country's citizen numeric code for identifying all application users. If we now need to support other countries that don't have such a citizen numeric code, or the code clashes with existing entries, then we can conclude that schema evolution is possibly hindered. If the natural key uniqueness constraints change, it's going to be very difficult to update both the primary keys (if we manage to drop the primary key constraints at all) and all associated foreign key relationships.

Surrogate keys

Surrogate keys are generated independently of the current row data, so the other column constraints may freely evolve according to the application's business requirements. The database system may manage the surrogate key generation, and most often the key is of a numeric type (e.g. integer or bigint), being incremented whenever a new key is needed. If we want to control the surrogate key generation we can employ a 128-bit GUID or UUID. This simplifies batching and may improve insert performance, since the additional database key generation processing is no longer required. Even if this strategy is not so widely adopted, it's worth considering when designing the database model. When the database identifier generation responsibility falls to the database system, there are several strategies for auto-incrementing surrogate keys:

    Database engine | Auto-incrementing strategy
    Oracle          | SEQUENCE
    MSSQL           | IDENTITY, SEQUENCE
    PostgreSQL      | SEQUENCE, SERIAL TYPE
    MySQL           | AUTO_INCREMENT
    DB2             | IDENTITY, SEQUENCE
    HSQLDB          | IDENTITY, SEQUENCE

Design aspects

Because sequences may be called concurrently from different transactions, they are usually transaction-less.

    Database engine | Quote
    Oracle          | "When a sequence number is generated, the sequence is incremented, independent of the transaction committing or rolling back"
    MSSQL           | "Sequence numbers are generated outside the scope of the current transaction. They are consumed whether the transaction using the sequence number is committed or rolled back"
    PostgreSQL      | "Because sequences are non-transactional, changes made by setval are not undone if the transaction rolls back"

The IDENTITY type is defined by the SQL:2003 standard, so it's the standard primary key generator strategy. Some database engines allow you to choose between IDENTITY and SEQUENCE, so you have to decide which one better suits your current schema requirements. Note that Hibernate disables JDBC insert batching when using the IDENTITY generator strategy.

Reference: Database primary key flavors from our JCG partner Vlad Mihalcea at the Vlad Mihalcea's Blog blog.
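The trade-off between database-generated and client-generated surrogate keys can be sketched in plain Java. The class below is my own illustration, not a database API: an AtomicLong stands in for a real database sequence, while the UUID path shows why client-side keys can be assigned before any database round-trip.

```java
import java.util.UUID;
import java.util.concurrent.atomic.AtomicLong;

class SurrogateKeyDemo {
    // Stand-in for a database SEQUENCE: monotonically increasing and
    // safe under concurrent access.
    private static final AtomicLong SEQUENCE = new AtomicLong(0);

    static long nextSequenceValue() {
        // Like a real sequence, the value is consumed whether or not the
        // surrounding transaction would commit.
        return SEQUENCE.incrementAndGet();
    }

    // Client-side 128-bit surrogate key: no database round-trip needed,
    // which is what simplifies JDBC insert batching.
    static UUID newClientKey() {
        return UUID.randomUUID();
    }

    public static void main(String[] args) {
        System.out.println(nextSequenceValue()); // 1
        System.out.println(nextSequenceValue()); // 2
        System.out.println(newClientKey());      // random 128-bit identifier
    }
}
```

The sequence-style key is compact (8 bytes) and index-friendly; the UUID costs 16 bytes of index space but lets the application know the key up front, mirroring the batching argument made above.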

Interfacing Salesforce with Android

In this article we are going to explore building a simple native Android application that utilizes the Chatter REST API within the Salesforce Platform. To accomplish this, we will use the Salesforce Mobile SDK 2.1, which acts as a wrapper for low-level HTTP functions, allowing us to easily handle OAuth and subsequent REST API calls. The TemplateApp provided in the SDK is really going to be your cleanest starting point. My tutorial essentially uses the structure of the TemplateApp and builds upon it by borrowing and modifying from the REST Explorer sample application; this helps to ensure things are as straightforward as possible. We aren't going to touch on every aspect of building this application, but will instead cover the salient points, giving the reader a good starting point and trying to expand on the salesforce.com documentation. This tutorial attempts to serve as a bit of a shim for developers who are not overly familiar with the platform, so they can use the API in a way that is presumably more familiar. A lot of what we'll cover complements the Salesforce Mobile SDK Developer Guide; throughout this tutorial I will reference the relevant page numbers from that document instead of reproducing the information here in its entirety.

Getting set up

I'm using IntelliJ IDEA for this tutorial; this is the IDE that Android Studio is based on. If you're already using Android Studio, there will be no appreciable difference in workflow as we proceed; Eclipse users are good to go. Once you have your IDE set up, we can go about installing the Salesforce Mobile SDK 2.1 (see the link in the paragraph above). Salesforce.com recommends a Node.js-based installation using the node package manager. We will go an alternate route; instead we are going to clone the repo from GitHub [Page 16]. Once you have your basic environment set up, go to https://developer.salesforce.com/signup and sign up for your Developer Edition (DE) account.
For the purposes of this example, I recommend signing up for a Developer Edition even if you already have an account. This ensures you get a clean environment with the latest features enabled. Then navigate to http://login.salesforce.com to log into your developer account. After you've completed your registration, follow the instructions in the Mobile SDK Guide for creating a Connected App [Page 13]. For the purposes of this tutorial you only need to fill out the required fields. The Callback URL provided for OAuth does not have to be a valid URL; it only has to match what the app expects in this field. You can use any custom prefix, such as sfdc://. Important: for a native app you MUST include "Perform requests on your behalf at any time (refresh_token)" in your selected OAuth scopes or the server will reject you, and nobody likes rejection. The Mobile SDK Guide kind of glosses over this point; for more details see http://github.com/forcedotcom/SalesforceMobileSDK-iOS/issues/211#issuecomment-23953366. When you're done, you should be shown a page that contains your Consumer Key and Secret, among other things. Now that we've taken care of things on the server side, let's shift our focus to setting up our phone app. First, we're going to start a new project in IntelliJ; make sure you choose Application Module and not Gradle: Android Application Module, as the way the project will be structured doesn't play nice with the Gradle build system. Name it whatever you want, and be sure to uncheck the box that says Create "Hello World!" Activity, as we won't be needing that. Now that you've created your project, go to File -> Import Module… Navigate to the directory where you cloned the Mobile SDK repo, expand the native directory and you should see a project named "SalesforceSDK" with an IntelliJ logo next to it. Select it and hit OK. On the next screen, make sure the option to import from an external model is selected, and that the Eclipse list item is highlighted.
Click Next, and then click Next again on the following screen without making any changes. When you reach the final screen, check the box next to SalesforceSDK and then click Finish. IntelliJ will now import the Eclipse project (Salesforce SDK) into your project as a module. The Salesforce Mobile SDK is now yours to command… almost. Go to File -> Project Structure… Select 'Facets' under 'Project Settings', then choose the one that has Salesforce SDK in parentheses and make sure the Library module box is checked. Now select the other one, then select the Packaging tab, and make sure the Enable manifest merging box is checked. Next, select 'Modules' from the 'Project Settings' list, then select the SalesforceSDK module. Under the Dependencies tab there should be an item with red text; right-click on it and remove it. From there, click on <your module name>; under the Dependencies tab click the green '+', select 'Module Dependency…' – Salesforce SDK should be your only option – and click 'OK'. Now select 'Apply' in the Project Structure window and then click 'OK'.

Making the calls

Create a file named bootconfig.xml in res/values/; the content of that file should be as follows:

    <?xml version="1.0" encoding="utf-8"?>
    <resources>
        <string name="remoteAccessConsumerKey">
            YOUR CONSUMER KEY
        </string>
        <string name="oauthRedirectURI">
            YOUR REDIRECT URI
        </string>
        <string-array name="oauthScopes">
            <item>chatter_api</item>
        </string-array>
        <string name="androidPushNotificationClientId"></string>
    </resources>

Remember the connected app we created earlier? That's where you will find the consumer key and redirect (callback) URI. For the curious: despite the fact that we specified refresh_token in our OAuth scopes server-side, we don't need to define it here. The reasoning is that this scope is always required to access the platform from a native app, so the Mobile SDK includes it automatically.
Next, make sure your strings.xml file looks something like this:

    <?xml version="1.0" encoding="utf-8"?>
    <resources>
        <string name="account_type">com.salesforce.samples.templateapp.login</string>
        <string name="app_name"><b>Template</b></string>
        <string name="app_package">com.salesforce.samples.templateapp</string>
        <string name="api_version">v30.0</string>
    </resources>

The above values should be unique to your app. Now create another class named KeyImpl:

    public class KeyImpl implements KeyInterface {

        @Override
        public String getKey(String name) {
            return Encryptor.hash(name + "12s9adpahk;n12-97sdainkasd=012",
                    name + "12kl0dsakj4-cxh1qewkjasdol8");
        }
    }

Once you have done this, create an arbitrary activity with a corresponding layout that extends SalesforceActivity, and populate it as follows: ...

Fabric8 HTTP Gateway

I recently put together a quick GitHub project to show the Fabric8 HTTP gateway in action. It contains a sample project that you can use to test out the HTTP gateway. The example/camel/cxf profile that comes with Fabric8 basically does the same thing now.

Fabric8 gateway

The Fabric8 project – pronounced fabricate – is a practical DevOps framework for services running on the JVM. Things like automated deployment and centralized configuration management come out of the box and are consistent regardless of which JVM container (or no container – microservices) you use. One of the other cool features that Fabric8 gives you out of the box is the ability to dynamically look up, load balance, and version your services (MQ, REST/http, SOAP/http, etc.). Clients that live within a "fabric" created by Fabric8 can automatically take advantage of this. Your external clients can too, with the Fabric8 gateway feature. When combined with Apache Camel routes that expose CXF endpoints, you can get very powerful service discovery using Fabric8. The sample project comes with three simple REST implementations and deployments that you can use to exercise and test the gateway for yourself.

How to

First, start by grabbing Fabric8 or its downstream cousin, supported by Red Hat: JBoss Fuse. Start it up:

    fabric8-home$ ./bin/fabric8

Or on JBoss Fuse:

    fuse-home$ ./bin/fuse

Next, you'll need to build the project:

    project-home$ mvn clean install

Then navigate to one of the sub-projects in the sample distro (example: beer-service). Now you'll have to invoke the fabric8-maven-plugin to install the profile into Fabric8/JBoss Fuse. See the fabric8-maven-plugin documentation for more details on what it does and how to set it up:

    beer-service$ mvn fabric8:deploy

Now navigate to the web console (http://localhost:8181) and go to the Wiki tab. You should see your profile there under the loadbalancer group. These profiles are the declarative description of what resources need to be deployed to a JVM container.
You can read more about Fabric8 profiles to get a more thorough understanding. In this case, we're deploying some Camel routes and declaring dependencies on features that automatically register the CXF endpoints into the API registry. Now create a new container with that profile. This new container will host the Camel routes that implement the REST service functionality, and you should see a new beer container. Next, add another container and give it the HTTP gateway profile, so that you end up with both your beer container and your HTTP gateway container. Now you can ping the beer service through the gateway at http://localhost:9000/cxf/beer. If you have any questions about any steps not captured here, please let me know in the comments. The HTTP gateway is a very powerful feature of Fabric8. For JBoss Fuse this feature is in tech preview.

Reference: Fabric8 HTTP Gateway from our JCG partner Christian Posta at the Christian Posta – Software Blog blog.

Java SE 8 new features tour: The big change in the Java development world

I am proud to be one of the Adopt-OpenJDK members, having joined the program eight months ago alongside other professionals, and we went through all stages of Java SE 8 development – compilation, coding, discussions, etc. – until we brought it to life. It was released on March 18th, 2014 and is now available to you. I am happy to announce this series, "Java SE 8 new features tour", which I am going to write with examples to streamline Java SE 8 knowledge gaining, development experience, and the new features and APIs that will leverage your knowledge, enhance the way you code, and increase your productivity as well. I hope you enjoy it as much as I am enjoying writing it. We will take a tour of the major new features in Java SE 8 (projects and APIs), the platform designed to support faster and easier Java development. We will learn about Project Lambda, a new syntax to support lambda expressions in Java code; the new Stream API for processing collections and managing parallel processing; the Date-Time API for representing, managing and calculating date and time values; and Nashorn, a new engine to better support the use of JavaScript code with the Java Virtual Machine. Finally, I will also cover some lesser-known features, such as new methods for joining strings into lists, and other features that will help you in daily tasks. For more about Java SE 8 features and tutorials, I advise you to consult the official Java Tutorials site and the Java SE 8 API documentation.

The topics we are going to cover during this series include:
- Installing Java SE 8, notes and advice
- Introducing Java SE 8 main features, the big change
- Working with lambda expressions and method references
- Traversing collections with streams
- Calculating timespans with the new Date-Time API
- Running JavaScript from Java with Nashorn
Miscellaneous new features and API changes.Installing Java SE 8, notes and advice.Installing Java SE 8 on Windows In order to run Java SE 8 on Microsoft Windows, first check which version of Windows you have. Java SE 8 is supported on Windows 8, 7, Vista, and XP. Specifically, you’ll need these versions: for Windows 8 or 8.1, you’ll need the desktop version of Windows; Windows RT is not supported. You can run Java SE 8 on any version of Windows 7, and on the most recent versions of Windows Vista and Windows XP. On server-based versions of Windows, you can run 2008 and the 64-bit version of 2012. If you want to work with Java applets you’ll need a 64-bit browser; these include Internet Explorer 7.0 and above, Firefox 3.6 and above, and Google Chrome, which is supported on Windows but not on Mac. You can download the Java Development Kit for Java SE 8 from java.oracle.com. That will take you to the current Java home page. Click Java SE under Top Downloads, then click the Download link for Java 8.Installing Java SE 8 on Mac In order to work with Java SE 8 on Mac OS X, you must have an Intel-based Mac running Mac OS X 10.7.3 (Lion) or later. If you have an older version of Mac OS X, you won’t be able to program or run Java 8 applications. In order to install Java SE 8 you’ll need administrative privileges on your Mac, and in order to run Java applets within a browser you’ll need to use a 64-bit browser, such as Safari or Firefox. Google Chrome is a 32-bit browser and won’t work for this purpose. As described above for Windows, the same website has the Mac OS X .dmg version to download and install; it actually contains versions for all operating systems, but our focus here is on Windows and Mac. Now you’re ready to start programming with Java SE 8 on both the Windows and Mac OS X platforms. 
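Once the JDK is installed, a quick sanity check (my own snippet, not from the original article) is to ask the runtime itself which version it is; on a Java SE 8 installation the version string starts with 1.8:

```java
public class VersionCheck {
    public static void main(String[] args) {
        // On a Java SE 8 installation this prints something like "1.8.0_05"
        System.out.println("java.version = " + System.getProperty("java.version"));
        System.out.println("java.vm.name = " + System.getProperty("java.vm.name"));
    }
}
```

If the reported version is not 1.8, double-check your PATH and JAVA_HOME settings.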
After we have installed Java SE 8 properly, let’s dive into the first point and have a look at the Java SE 8 main features in a nutshell, to begin our coding tour in our favorite IDE.Introducing Java SE 8 main features, the big change. An overview of JSR 337: Java SE 8 Release Contents Java SE 8 is a major release for the Java programming language and the Java virtual machine. It includes many changes. Some, like lambda expressions, have gotten more coverage than others, but I’m going to talk about both the major changes and a few of the minor ones. JSR 335: Lambda Expressions Probably the most attention has gone to Project Lambda, a set of new syntactic capabilities that let Java developers work as functional programmers. This includes lambda expressions, method references and a few other capabilities. JSR 310: Date and Time API There is a new API for managing dates and times, replacing the older classes. Those older classes are still in the Java Runtime, but as you build new applications, you might want to move to this new set of capabilities, which lets you streamline your code and be a little more intuitive in how you program. There are new classes to manage local dates, times and time zones, and to calculate differences between times. The Stream API The Stream API adds new tools for managing collections including lists, maps, sets, and so on. A stream allows you to deal with each item in a collection without having to write explicit looping code. It also lets you split your processing across multiple CPUs, so for large, complex data sets you can see significant performance improvements. Project Nashorn The Nashorn JavaScript engine is also new in Java SE 8. This is a completely new JavaScript engine, written from scratch, that lets you code in JavaScript while integrating Java classes and objects. Nashorn’s goal is to implement a lightweight, high-performance JavaScript runtime in Java on the native JVM. 
This project intends to enable Java developers to embed JavaScript in Java applications via JSR-223 and to develop freestanding JavaScript applications using the jrunscript command-line tool. In the article on Nashorn, I’ll describe how to run Nashorn code from the command line, but also how to write JavaScript in separate files and then execute those files from your Java code. Concurrency API enhancements There are also enhancements to the concurrency framework, which let you manage and accumulate values in multiple threads. There are lots of smaller changes as well. New tools for strings and numbers There are new tools for creating delimited lists in the String class and other new classes, and there are tools for aggregating numbers including integers, longs, doubles, and so on. Miscellaneous new features There are also tools for doing a better job of detecting null situations, and I’ll describe all of these during the series. I’ll also describe how to work with files using new convenience methods. So, when is Java SE 8 available? The answer is: now. It was released on March 18, 2014. For developers who use Java to build client-side applications, the JavaFX rich internet application framework supports Java 8 now, and most of the Java Enterprise Edition vendors support Java 8 too. Whether you move to Java SE 8 right away depends on the kind of projects you’re working on. For many server and client-side applications, it’s available immediately. Not for Android yet, though. Android developers beware: Java SE 8 syntax and APIs are not supported in Android at this point. It’s only very recently that Android moved to some of the newer Java 7 syntax, and so it might take some time before Android supports this newest syntax or the newest APIs. But for all other Java developers, it’s worth taking a look at these new capabilities. What about IDEs? Java SE 8 is supported by all of the major Java development environments, including Oracle’s NetBeans, IntelliJ IDEA, and Eclipse. 
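To give a small taste of two of the additions mentioned above (a sketch using the standard Java 8 APIs, not code from this article), here is the new Date/Time API calculating a timespan, and the new String.join method building a delimited list:

```java
import java.time.LocalDate;
import java.time.Month;
import java.time.Period;
import java.util.Arrays;
import java.util.List;

public class Java8Taste {
    public static void main(String[] args) {
        // JSR 310: calculate the timespan between two dates
        LocalDate newYear = LocalDate.of(2014, Month.JANUARY, 1);
        LocalDate release = LocalDate.of(2014, Month.MARCH, 18);
        Period span = Period.between(newYear, release);
        System.out.println(span.getMonths() + " months, " + span.getDays() + " days");
        // prints: 2 months, 17 days

        // New String tools: build a delimited list without an explicit loop
        List<String> features = Arrays.asList("Lambda", "Streams", "DateTime", "Nashorn");
        System.out.println(String.join(", ", features));
        // prints: Lambda, Streams, DateTime, Nashorn
    }
}
```

Both APIs are covered in detail later in the series.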
For this series I’ll be doing all of my demos in NetBeans, using NetBeans version 8, which is available to download from https://netbeans.org/downloads/. Before we dive into this series, let’s first check that we have installed Java SE 8 properly and start a new project under NetBeans, which will contain all the code we are going to write. Then we will develop some lambda code to test whether our project works properly with Java SE 8. Alternatively, you can download the series source code from my GitHub account, open it with NetBeans and follow what I am showing next and in upcoming articles. Project on GitHub: https://github.com/mohamed-taman/JavaSE8-Features Hello world application on Java SE 8 with a lambda expression. Steps (not required if you are navigating my code):Open NetBeans 8 –> from File –> New Project –> on the left, choose Maven –> on the right, choose Java Application –> click Next. Follow the following screenshot’s variable definitions, or change to your favorite names and values –> then click Finish.    If everything is okay you should have the following structure in the project navigator:    Click on the project “Java8Features” –> click File in the upper menu –> then Project Properties. Under Category –> on the left choose Sources, then check that “Source/Binary Format” is 1.8. –> On the left open Build and choose Compiler, then check that “Java Platform” is pointing to your current JDK 8 installation –> click OK. If JDK 8 is not present, go to Tools –> choose Java Platforms –> Add Platform –> then choose Java Standard Edition –> then point to your installed JDK 8. Now our project is configured to work with Java 8, so let’s add some lambda code. On the package “eg.com.tm.java8.features”, right click and select New from the menu –> Java Interface –> name it Printable, under the overview package “eg.com.tm.java8.features.overview” –> click Finish. 
Implement Printable interface as the following: /* * Copyright (C) 2014 mohamed_taman * * This program is free software: you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation, either version 3 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program. If not, see <http://www.gnu.org/licenses/>. */ package eg.com.tm.java8.features.overview; /** * * @author mohamed_taman */ @FunctionalInterface public interface Printable { public void print(); }On the same package add the following class named “Print”, with main method as the following: /* * Copyright (C) 2014 mohamed_taman * * This program is free software: you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation, either version 3 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program. If not, see <http://www.gnu.org/licenses/>. */ package eg.com.tm.java8.features.overview; import static java.lang.System.out; /** * * @author mohamed_taman */ public class Print { public static void main(String[] args) { Printable job = ()-> out.println("Java SE 8 is working " + "and Lambda Expression too."); job.print(); } }Right click on Print class and choose Run. 
If everything is okay then you should see the following output. ------------------------------------------------------------------------ Building Java8Features 1.0-SNAPSHOT ------------------------------------------------------------------------ --- exec-maven-plugin:1.2.1:exec (default-cli) @ Java8Features --- Java SE 8 is working and Lambda Expression too. ------------------------------------------------------------------------ BUILD SUCCESSCongratulations, your Java SE 8 project works fine, so let’s explain what we have written. Most of this code would work on Java 7, but there’s an annotation here that was added in Java SE 8: FunctionalInterface. If your NetBeans environment isn’t correctly configured for Java 8, this annotation will cause an error because it won’t be recognized as valid Java code. I don’t see an error, so that’s a good sign that NetBeans is working as I hoped. Next I’ll open the class definition named Print.java. This is a class with a main method, so I can run it as a console application, and it has a critical line of new Java 8 syntax: it’s creating an instance of the functional interface I just showed you using a lambda expression, a style of syntax that didn’t exist in Java prior to Java 8. I’ll explain what this syntax is doing early in the next article, but all you need to know right now is that if this code isn’t causing any errors, then once again NetBeans is recognizing it as valid Java syntax. I’m creating an instance of that interface and then calling that interface’s print method. And so, I’ll run the code. I’ll click the Run button on my toolbar and in my console I see a successful result. I’ve created an object which is an instance of that interface using a lambda expression, and I’ve called its method, and it’s outputting a string to the console. So, if this is all working, you’re in great shape. You’re ready to get started programming with Java SE 8 in NetBeans. 
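As a side note, the same Printable instance could also be created with a method reference, the other new piece of Java 8 syntax mentioned earlier. This is my own sketch, not part of the original project:

```java
@FunctionalInterface
interface Printable {
    void print();
}

public class PrintRef {

    static void sayIt() {
        System.out.println("Java SE 8 is working and Lambda Expression too.");
    }

    public static void main(String[] args) {
        // A reference to the sayIt method serves as the implementation of print()
        Printable job = PrintRef::sayIt;
        job.print(); // prints: Java SE 8 is working and Lambda Expression too.
    }
}
```

The compiler matches the signature of sayIt (no arguments, no return value) against the single abstract method of Printable, just as it does for the lambda version.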
If you had any problems along the way, go back and walk through the earlier steps, one step at a time.Resources:The Java Tutorials, Lambda Expressions JSR 310: Date and Time API JSR 337: Java SE 8 Release Contents OpenJDK website Java Platform, Standard Edition 8, API SpecificationReference: Java SE 8 new features tour: The Big change, in Java Development world from our JCG partner Mohamed Taman at the Improve your life Through Science and Art blog....
career-logo

Oracle Certified Associate and Professional, Java SE 7 Programmer

This certification was one of the first exams I considered after I was done with my college courses on Java and object-oriented programming. This was a time when I started working in programming and needed to improve my rather basic knowledge in this area. However, it took me almost two years to decide to go for it (in the meantime came the change to Java SE 7 and also the revamp of the certification path by Oracle). This had both positive and negative effects. The upsides include more recent language knowledge being tested, as well as a great way to prepare for both the certification and my thesis. On the other hand, the older SCJP exam for Java 6 was split into two exams, increasing the overall price, and the new exams also covered far more ground because of the additions in the Java 7 release.   About the certification Let’s start with a basic description of both exams. Neither exam requires any training, course or additional activity other than taking the exam itself. Based on the lists of topics for each exam and also my experience, these exams do not overlap in the areas being tested. However, you might get a question that also tests an objective from OCAJP, so remember that OCPJP expects you to know the material from OCAJP and will use examples including syntax covered there. The exam is administered at the test center of your choosing using the standard Oracle/Pearson VUE testing software. When it comes to ordering and taking these exams, the process is pretty automated and there were no problems at all. 
The following table presents the most important information regarding the exams.

Basic information     Associate (OCAJP 7)   Professional (OCPJP 7)   Upgrade (OCPJP 7)
Exam Number           1Z0-803               1Z0-804                  1Z0-805
Prerequisites         none                  1Z0-803                  SCJP 6.0 (by Sun as CX-310-065)
Exam Topics           associate topics      professional topics      upgrade topics
Exam format           Multiple Choice       Multiple Choice          Multiple Choice
Duration              150 minutes           150 minutes              150 minutes
Number of Questions   90                    90                       80
Passing Score         63% (57 questions)    65% (59 questions)       60% (48 questions)
Price                 US$ 245               US$ 245                  US$ 245

* You may have seen a different passing score for the OCAJP 7 exam. This was caused by a few changes made to the exam; the values above are the official ones as of writing this post. There is an upgrade exam for those of you who already own the SCJP 6 certification. This exam tests your knowledge in areas missing from SCJP 6, and if you pass it, you will earn the OCPJP 7 certificate. If you are new to this and have no experience with the testing process, it might come as a surprise that the conditions during the exam are pretty strict. You will be recorded by several cameras, and regulations prohibit any items other than an ID card and the pen and blank sheet given to you by the test center representative. The quality of test centers varies widely, so be sure to ask people who have already been tested for their opinions and advice. The exam Both exams give you 150 minutes to complete a set of 90 questions. All questions I encountered in both exams were in the form of either “select the correct answer” or “choose all that apply”. Even though some people have mentioned drag-and-drop questions in the exam, neither I nor my colleagues have seen any. When it comes to the questions, please be careful. As always, read each question carefully, and if in doubt go word by word until you get the point. OCAJP 7 has questions pretty evenly distributed across the exam objectives. However, the OCPJP 7 exam presented a little twist in the form of ever-present threads. 
I’m not saying every question included threads, but most of the questions did. Another thing that made this exam more interesting were the questions regarding patterns and design principles. You have to be able to identify good and bad design (loose/tight coupling, high/low cohesion, …) and also be able to tell what is true of a given example. The aspects of time and comfort during the exam change drastically when you move on to OCPJP 7. Let me give an example from my exams. The OCAJP 7 exam took me about an hour to complete, and I took another hour to thoroughly check all of my answers. After doing so, I decided to turn the test in early, since I felt there was nothing more to do (please note that this was the case in 2012 when I took the exam). However, when I was doing the OCPJP 7 exam, it took me almost the whole 150 minutes to complete, leaving me with time to check only the first 4 questions! Having said that, please don’t get stuck on one question for too long (unless you are a skilled veteran). You can always mark a question for review and come back to it after you are done with all the questions you can answer without further analysis. In the case of OCPJP 7, getting stuck on a few questions can leave you with unanswered questions, so manage your time carefully. The complexity of the questions rises dramatically, and you need to take that into account during your preparation. Preparation OCAJP 7 My primary resource for this exam was the so-called K&B 6 book (check out the resources below). As you might have noticed, this book was published for Java 6, so it is missing all the additions in Java 7: mainly Project Coin (syntax changes), the new frameworks for concurrency and IO, as well as other improvements. However, the style of this book is suited for beginners, and it will prepare you for the exam in its respective areas. I spent several weeks preparing due to the length of this book and my workload at the time. 
This, combined with a handful of mock tests, self-study of Project Coin and playing around with code, was enough to prepare me for the exam. OCPJP 7 In the case of OCPJP 7, I didn’t bet on a single book, because K&B 6 covered only topics relevant to Java 6, and based on reviews and the titles available at the time of purchase, I decided to go with Oracle Certified Professional Java SE 7 Programmer Exams 1Z0-804 and 1Z0-805: A Comprehensive OCPJP 7 Certification Guide. These two books provided enough ground for me to start playing around with code. After 4 or 5 weeks I started with Enthuware’s lab and took the exam in the 7th week. Unlike OCAJP 7, this exam really requires some coding experience, due to the parts that test more than your knowledge of code structure, compilation or program behavior. So keep in mind: the best way to prepare for both exams is to code. Helpful notes from fellow bloggers: OCAJP 7jeanne’s oca/ocajp java programmer I experiences How To Prepare for OCAJP 7 Certification Exam Passed My OCAJP 7 Certification ExamOCPJP 7jeanne’s ocpjp java programmer II experiences Top 5 myths and misconceptions about OCPJP 7 exam OCPJP 7 1Z0-804 Oracle Certified Professional Java SE 7 Programmer Success StoryResources Books OCAJP 7SCJP Sun Certified Programmer for Java 6 Exam 310-065 by Bert Bates and Katherine Sierra (known as the K&B book)A really great book, especially for beginners. The only downside is that when you already know enough about certain topics, reading becomes long and kind of boring (since the book is suitable even for people just learning Java). However, it is a really good resource for anyone, and it might even present information you had no idea was true about Java and compilation. The book also contains a handful of mock questions and whole tests. I was able to complete my preparation for OCAJP 7 almost solely using this book. 
The only downside is the fact that it was written for Java 6 and does not incorporate the syntax changes introduced in Java 7, like try-with-resources, strings in switch, multi-catch, exception rethrow and others. One of the authors published a summary of the topics covered by the book. You might also consider getting the newer version, OCA/OCP Java SE 7 Programmer I & II Study Guide.OCP Java SE 6 Programmer Practice Exams by Bert Bates and Katherine SierraIf you find yourself in doubt whether you are ready for the exam, you can check out this book from the same duo as the previous one. However, the tests do include topics now in OCPJP 7, so bear that in mind. I tried several of those tests and I can recommend this book as well. It complements the first one pretty well, and with these two at hand you have a pretty solid foundation for your exam preparations.OCPJP 7Oracle Certified Professional Java SE 7 Programmer Exams 1Z0-804 and 1Z0-805: A Comprehensive OCPJP 7 Certification Guide by S.G. Ganesh and Tushar SharmaWhen I started preparing for OCPJP 7 there were not many books or guides available; this guide was one of those available at the time. Having read it, I can say it explains the exam objectives pretty clearly, with nice examples. It is shorter than K&B and more focused, which means it is no longer targeted at beginners and expects knowledge of the concepts from OCAJP 7. One thing I really liked were all the examples throughout the book, especially in the area of concurrency and threads. In spite of a few grammatical errors, I can recommend this book, since I used it as my primary resource.Pro Java 7 NIO.2 by Anghel LeonardThis book is not directly related to the certification, but I happened to read it during my thesis preparations. It is safe to say that this book covers the NIO.2 objectives quite nicely, but its scope is way broader than what is required for the exam. 
But it served me well, so I decided to include it in this list.Labs OCAJP 7NoneOCPJP 7Enthuware Labs for Oracle Certified Professional – Java SE 7 Programmer ExamThere are several companies producing labs with mock tests. Based on reviews online, I decided to try out the Enthuware lab. The lab itself works well: you can track your progress, focus on exam areas, benchmark your time, and do the usual stuff you would expect of this kind of software. All questions are marked based on their difficulty, and this markup can be hidden in the settings. I found the questions marked very easy and easy not worth my time, so I did not bother with those. The higher difficulties provide interesting questions and the opportunity to solidify your knowledge in the respective areas. I would say it is a good product for the price.My own The last thing I am going to highlight are some of my own articles that you may find useful. OCAJP 7 None so far. OCPJP 7Beauty and strangeness of genericsA short article about almost all the gotchas present on OCPJP 7 regarding generics (except interoperability between pre-generic and current collection code).NIO.2 series(Ongoing) A series of posts going into more detail than required on the actual exam. However, you will gain solid knowledge of the covered areas.Certificate Oracle uses its Oracle University CertView application to manage your interaction with them, so if you have not registered there yet, you will have to. When you are all done and have received the confirmation emails from Oracle, you should be able to see a table similar to the following in your CertView profile under Review My Exam History and Exam Results.

Oracle Exam Status
Test Start Date   Exam Number   Exam Title                Grade Indicator   Score Report
04-MAR-14         1Z0-804       Java SE 7 Programmer II   PASS              View (link)
12-SEP-12         1Z0-803       Java SE 7 Programmer I    PASS              View (link)

You will be asked to fill in the address where a hard copy will be sent (you are not required to do this). In my experience it took one or two weeks to get the mail. 
The PDF version is always available in CertView under Review My Certification History. The envelope contains the certificate along with a card that proves your accomplishment (though I have not yet found any use for it).  The last thing you are entitled to is using the Oracle Certified Associate and Professional logos. They are available in CertView, so you can download them and use them in your CV or on your web page. This is my first time using them, and they look as follows:   Conclusion Well, it was a rather long way (as you might have noticed, it took me a bit more than two and a half years to complete these exams), but also a rewarding one. Preparation for these exams is a long journey that offers a lot of new insights into Java and the compiler. It is quite possible that you will develop a certain love-hate relationship with the compiler itself (and will be able to replace it in many cases!). There were many areas I knew only from my college years that needed improvement, since I wasn’t using them in my work. After all the studying and playing around with little code snippets, I could honestly feel the improvement in certain areas. You might learn things that will allow you to produce fewer lines of code that are more readable and easier to understand. And this is why I would recommend these exams to you: your general understanding of the code will increase, among other positive things. The only downside is the rather big scope of the exams and the time required for preparation. All in all, a great learning experience and a great way to discuss the things you do and like with your friends and colleagues. So if you are considering these exams, I invite you to try them and wish you the best of luck on your way to becoming an Oracle Certified Professional.Reference: Oracle Certified Associate and Professional, Java SE 7 Programmer from our JCG partner Jakub Stas at the Jakub Stas blog....
enterprise-java-logo

A Tour Through elasticsearch-kopf

When I needed a plugin to display the cluster state of Elasticsearch or needed some insight into the indices, I normally reached for the classic plugin elasticsearch-head. As it is recommended a lot and seems to be the unofficial successor, I recently took a more detailed look at elasticsearch-kopf. And I liked it. I am not sure why elasticsearch-kopf came into existence, but it seems to be a clone of elasticsearch-head (kopf means head in German, so it is even the same name).       Installation elasticsearch-kopf can be installed like most plugins, using the script in the Elasticsearch installation. This is the command that installs version 1.1, which is suitable for the 1.1.x branch of Elasticsearch. bin/plugin --install lmenezes/elasticsearch-kopf/1.1 elasticsearch-kopf is then available at the URL http://localhost:9200/_plugin/kopf/. Cluster On the front page you will see a diagram similar to the one elasticsearch-head provides: the overview of your cluster with all the shards and their distribution across the nodes. The page is refreshed automatically, so you will see joining or leaving nodes immediately. You can adjust the refresh rate in the settings dropdown just next to the kopf logo (by the way, the header reflects the state of the cluster, so it might change its color from green to yellow to red).Also, there are lots of different settings that can be reached via this page. On top of the node list there are 4 icons: for creating a new index, deactivating shard allocation, the cluster settings and the cluster diagnosis options. Creating a new index brings up a form for entering the index data. You can also load the settings from an existing index or just paste the settings JSON in the field on the right side.The icon for disabling the shard allocation just toggles it; disabling shard allocation can be useful during a cluster restart. 
Using the cluster settings you can reach a form where you can adjust lots of values regarding your cluster, routing and recovery. The cluster health button finally lets you load different JSON documents containing more details on the cluster health, e.g. the node stats and the hot threads. Using the little dropdown just next to the index name you can execute some operations on the index: you can view the settings, open and close the index, optimize and refresh the index, clear the caches, adjust the settings or delete the index.When opening the form for the index settings you will be overwhelmed at first; I didn’t know there are so many settings. What is really useful is that there is an info icon next to each field that will tell you what the field is about. A great opportunity to learn about some of the settings.What I find really useful is that you can adjust the slow index log settings directly. The slow log can also be used to log any incoming queries, so it is sometimes useful for diagnostic purposes. Finally, back on the cluster page, you can get more detailed information on the nodes or shards by clicking on them. This will open a lightbox with more details.REST The REST menu entry on top brings you to another view, similar to the one Sense provided. You can enter queries and have them executed for you. There is a request history, you have highlighting and you can format the request document, but unfortunately the interface is missing autocompletion. Nevertheless, I suppose this can be useful if you don’t like to fiddle with curl.Aliases The aliases tab gives you a convenient form for managing your index aliases and all the relevant additional information. You can add filter queries for your alias or influence the index or search routing. On the right side you can see the existing aliases and remove them if they are not needed.Analysis The analysis tab will bring you to a feature that is also very popular in the Solr administration view. 
You can test the analyzers for different values and different fields. This is a very valuable tool while building a more complex search application.Unfortunately the information you can get from Elasticsearch is not as detailed as what you can get from Solr: it will only contain the end result, so you can’t really see which tokenizer or filter caused a certain change. Percolator On the percolator tab you can use a form to register new percolator queries and view existing ones. There doesn’t seem to be a way to do the actual percolation, but this page can be useful if you use the percolator extensively.Warmers The warmers tab can be used to register index warmer queries.Repository The final tab is for the snapshot and restore feature. You can create repositories and snapshots and restore them. Though I imagine that most people automate snapshot creation, this can be a very useful form.Conclusion I hope you could see in this post that elasticsearch-kopf can be really useful. It is very unlikely that you will ever need all of the forms, but it is good to have them available. The cluster view and the REST interface can be very valuable for your daily work, and I guess there will be new features coming in the future.Reference: A Tour Through elasticsearch-kopf from our JCG partner Florian Hopf at the Dev Time blog....
java-logo

Java 8 Friday: 10 Subtle Mistakes When Using the Streams API

At Data Geekery, we love Java. And as we're really into jOOQ's fluent API and query DSL, we're absolutely thrilled about what Java 8 will bring to our ecosystem.

Java 8 Friday

Every Friday, we're showing you a couple of nice new tutorial-style Java 8 features, which take advantage of lambda expressions, extension methods, and other great stuff. You'll find the source code on GitHub.

10 Subtle Mistakes When Using the Streams API

We've done all the SQL mistakes lists:

  • 10 Common Mistakes Java Developers Make when Writing SQL
  • 10 More Common Mistakes Java Developers Make when Writing SQL
  • Yet Another 10 Common Mistakes Java Developers Make When Writing SQL (You Won't BELIEVE the Last One)

But we haven't done a top 10 mistakes list with Java 8 yet! For today's occasion (it's Friday the 13th), we'll catch up with what will go wrong in YOUR application when you're working with Java 8 (it won't happen to us, as we're stuck with Java 6 for another while).

1. Accidentally reusing streams

Wanna bet? This will happen to everyone at least once. Like the existing "streams" (e.g. InputStream), you can consume a stream only once. The following code won't work:

IntStream stream = IntStream.of(1, 2);
stream.forEach(System.out::println);

// That was fun! Let's do it again!
stream.forEach(System.out::println);

You'll get a:

java.lang.IllegalStateException: stream has already been operated upon or closed

So be careful when consuming your stream. It can be done only once.

2. Accidentally creating "infinite" streams

You can create infinite streams quite easily without noticing. Take the following example:

// Will run indefinitely
IntStream.iterate(0, i -> i + 1)
         .forEach(System.out::println);

The whole point of streams is that they can be infinite, if you design them to be. The only problem is that you might not have wanted that. So, be sure to always put proper limits:

// That's better
IntStream.iterate(0, i -> i + 1)
         .limit(10)
         .forEach(System.out::println);

3. Accidentally creating "subtle" infinite streams

We can't say this enough. You WILL eventually create an infinite stream, accidentally. Take the following stream, for instance:

IntStream.iterate(0, i -> (i + 1) % 2)
         .distinct()
         .limit(10)
         .forEach(System.out::println);

So…

  • we generate alternating 0's and 1's
  • then we keep only distinct values, i.e. a single 0 and a single 1
  • then we limit the stream to a size of 10
  • then we consume it

Well… the distinct() operation doesn't know that the function supplied to the iterate() method will produce only two distinct values. It might expect more than that. So it'll forever consume new values from the stream, and the limit(10) will never be reached. Tough luck, your application stalls.

4. Accidentally creating "subtle" parallel infinite streams

We really need to insist that you might accidentally try to consume an infinite stream. Let's assume you believe that the distinct() operation should be performed in parallel. You might be writing this:

IntStream.iterate(0, i -> (i + 1) % 2)
         .parallel()
         .distinct()
         .limit(10)
         .forEach(System.out::println);

Now, we've already seen that this will run forever. But previously, at least, you only consumed one CPU on your machine. Now, you'll probably consume four of them, potentially occupying pretty much all of your system with an accidental infinite stream consumption. That's pretty bad. You can probably hard-reboot your server / development machine after that.

5. Mixing up the order of operations

So, why did we insist on your definitely accidentally creating infinite streams? It's simple. Because you may just accidentally do it. The above stream can be perfectly consumed if you switch the order of limit() and distinct():

IntStream.iterate(0, i -> (i + 1) % 2)
         .limit(10)
         .distinct()
         .forEach(System.out::println);

This now yields:

0
1

Why?
Because we first limit the infinite stream to 10 values (0 1 0 1 0 1 0 1 0 1), before we reduce the limited stream to the distinct values contained in it (0 1). Of course, this may no longer be semantically correct, because you really wanted the first 10 distinct values from a set of data (you just happened to have "forgotten" that the data is infinite). No one really wants 10 arbitrary values, only to then reduce them to the distinct ones.

If you're coming from a SQL background, you might not expect such differences. Take SQL Server 2012, for instance. The following two SQL statements are the same:

-- Using TOP
SELECT DISTINCT TOP 10 *
FROM i
ORDER BY ..

-- Using FETCH
SELECT *
FROM i
ORDER BY ..
OFFSET 0 ROWS
FETCH NEXT 10 ROWS ONLY

So, as a SQL person, you might not be as aware of the importance of the order of streams operations.

6. Mixing up the order of operations (again)

Speaking of SQL, if you're a MySQL or PostgreSQL person, you might be used to the LIMIT .. OFFSET clause. SQL is full of subtle quirks, and this is one of them. The OFFSET clause is applied FIRST, as suggested in SQL Server 2012's (i.e. the SQL:2008 standard's) syntax. If you translate MySQL / PostgreSQL's dialect directly to streams, you'll probably get it wrong:

IntStream.iterate(0, i -> i + 1)
         .limit(10) // LIMIT
         .skip(5)   // OFFSET
         .forEach(System.out::println);

The above yields:

5
6
7
8
9

Yes. It doesn't continue after 9, because the limit() is now applied first, producing (0 1 2 3 4 5 6 7 8 9). skip() is applied afterwards, reducing the stream to (5 6 7 8 9). Not what you may have intended. BEWARE of the LIMIT .. OFFSET vs. "OFFSET .. LIMIT" trap!

7. Walking the file system with filters

We've blogged about this before. What appears to be a good idea is to walk the file system using filters:

Files.walk(Paths.get("."))
     .filter(p -> !p.toFile().getName().startsWith("."))
     .forEach(System.out::println);

The above stream appears to be walking only through non-hidden directories, i.e. directories that do not start with a dot. Unfortunately, you've again made mistakes #5 and #6. walk() has already produced the whole stream of subdirectories of the current directory. Lazily, though, but logically containing all sub-paths. Now, the filter will correctly filter out paths whose names start with a dot ".". E.g. .git or .idea will not be part of the resulting stream. But these paths will be: .\.git\refs, or .\.idea\libraries. Not what you intended.

Now, don't fix this by writing the following:

Files.walk(Paths.get("."))
     .filter(p -> !p.toString().contains(File.separator + "."))
     .forEach(System.out::println);

While that will produce the correct output, it will still do so by traversing the complete directory subtree, recursing into all subdirectories of "hidden" directories. I guess you'll have to resort to good old JDK 1.0 File.list() again. The good news is, FilenameFilter and FileFilter are both functional interfaces.

8. Modifying the backing collection of a stream

While you're iterating a List, you must not modify that same list in the iteration body. That was true before Java 8, but it might become more tricky with Java 8 streams. Consider the following list from 0..9:

// Of course, we create this list using streams:
List<Integer> list = IntStream.range(0, 10)
                              .boxed()
                              .collect(toCollection(ArrayList::new));

Now, let's assume that we want to remove each element while consuming it:

list.stream()
    // remove(Object), not remove(int)!
    .peek(list::remove)
    .forEach(System.out::println);

Interestingly enough, this will work for some of the elements! The output you might get is this one:

0
2
4
6
8
null
null
null
null
null
java.util.ConcurrentModificationException

If we introspect the list after catching that exception, there's a funny finding. We'll get:

[1, 3, 5, 7, 9]

Heh, it "worked" for all the odd numbers. Is this a bug? No, it looks like a feature.
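As an aside, if the actual goal is simply to remove the elements matching some condition, Java 8's Collection.removeIf() does that safely, without a stream iterating over the very collection being mutated. A minimal sketch (class and variable names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class RemoveIfDemo {
    public static void main(String[] args) {
        // The same 0..9 list as above
        List<Integer> list = IntStream.range(0, 10)
            .boxed()
            .collect(Collectors.toCollection(ArrayList::new));

        // removeIf mutates the list in place; no stream is
        // traversing the collection while it shrinks, so no
        // ConcurrentModificationException can occur
        list.removeIf(i -> i % 2 == 0);

        System.out.println(list); // [1, 3, 5, 7, 9]
    }
}
```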
If you're delving into the JDK code, you'll find this comment in ArrayList.ArrayListSpliterator:

/*
 * If ArrayLists were immutable, or structurally immutable (no
 * adds, removes, etc), we could implement their spliterators
 * with Arrays.spliterator. Instead we detect as much
 * interference during traversal as practical without
 * sacrificing much performance. We rely primarily on
 * modCounts. These are not guaranteed to detect concurrency
 * violations, and are sometimes overly conservative about
 * within-thread interference, but detect enough problems to
 * be worthwhile in practice. To carry this out, we (1) lazily
 * initialize fence and expectedModCount until the latest
 * point that we need to commit to the state we are checking
 * against; thus improving precision. (This doesn't apply to
 * SubLists, that create spliterators with current non-lazy
 * values). (2) We perform only a single
 * ConcurrentModificationException check at the end of forEach
 * (the most performance-sensitive method). When using forEach
 * (as opposed to iterators), we can normally only detect
 * interference after actions, not before. Further
 * CME-triggering checks apply to all other possible
 * violations of assumptions for example null or too-small
 * elementData array given its size(), that could only have
 * occurred due to interference. This allows the inner loop
 * of forEach to run without any further checks, and
 * simplifies lambda-resolution. While this does entail a
 * number of checks, note that in the common case of
 * list.stream().forEach(a), no checks or other computation
 * occur anywhere other than inside forEach itself. The other
 * less-often-used methods cannot take advantage of most of
 * these streamlinings.
 */

Now, check out what happens when we tell the stream to produce sorted() results:

list.stream()
    .sorted()
    .peek(list::remove)
    .forEach(System.out::println);

This will now produce the following, "expected" output:

0 1 2 3 4 5 6 7 8 9

And the list after stream consumption? It is empty:

[]

So, all elements are consumed, and removed correctly. The sorted() operation is a "stateful intermediate operation", which means that subsequent operations no longer operate on the backing collection, but on an internal state. It is now "safe" to remove elements from the list! Well… can we really? Let's proceed with parallel(), sorted() removal:

list.stream()
    .sorted()
    .parallel()
    .peek(list::remove)
    .forEach(System.out::println);

This now yields:

7 6 2 5 8 4 1 0 9 3

And the list contains:

[8]

Eek. We didn't remove all elements!? Free beers (and jOOQ stickers) go to anyone who solves this streams puzzler! This all appears quite random and subtle, so we can only suggest that you never actually modify a backing collection while consuming a stream. It just doesn't work.

9. Forgetting to actually consume the stream

What do you think the following stream does?

IntStream.range(1, 5)
         .peek(System.out::println)
         .peek(i -> {
             if (i == 5)
                 throw new RuntimeException("bang");
         });

When you read this, you might think that it will print (1 2 3 4 5) and then throw an exception. But that's not correct. It won't do anything. The stream just sits there, never having been consumed. As with any fluent API or DSL, you might actually forget to call the "terminal" operation. This might be particularly true when you use peek(), as peek() is awfully similar to forEach(). This can happen with jOOQ just the same, when you forget to call execute() or fetch():

DSL.using(configuration)
   .update(TABLE)
   .set(TABLE.COL1, 1)
   .set(TABLE.COL2, "abc")
   .where(TABLE.ID.eq(3));

Oops. No execute().

10. Parallel stream deadlock

This is now a real goodie for the end! All concurrent systems can run into deadlocks, if you don't properly synchronise things. While finding a real-world example isn't obvious, finding a forced example is.
The following parallel() stream is guaranteed to run into a deadlock:

Object[] locks = { new Object(), new Object() };

IntStream
    .range(1, 5)
    .parallel()
    .peek(Unchecked.intConsumer(i -> {
        synchronized (locks[i % locks.length]) {
            Thread.sleep(100);

            synchronized (locks[(i + 1) % locks.length]) {
                Thread.sleep(50);
            }
        }
    }))
    .forEach(System.out::println);

Note the use of Unchecked.intConsumer(), which wraps an org.jooq.lambda.fi.util.function.CheckedIntConsumer (which is allowed to throw checked exceptions) into a plain, functional IntConsumer. Well. Tough luck for your machine. Those threads will be blocked forever! The good news is, it has never been easier to produce a schoolbook example of a deadlock in Java! For more details, see also Brian Goetz's answer to this question on Stack Overflow.

Conclusion

With streams and functional thinking, we'll run into a massive amount of new, subtle bugs. Few of these bugs can be prevented, except through practice and staying focused. You have to think about how to order your operations. You have to think about whether your streams may be infinite. Streams (and lambdas) are a very powerful tool. But a tool which we need to get a hang of, first.

Reference: Java 8 Friday: 10 Subtle Mistakes When Using the Streams API from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
