
An Alternative to the Twitter River – Index Tweets in Elasticsearch with Logstash

For some time now I've been using the Elasticsearch Twitter river for streaming conference tweets to Elasticsearch. The river runs on an Elasticsearch node, tracks the Twitter streaming API for keywords and directly indexes the documents in Elasticsearch. As rivers are about to be deprecated it is time to move on to the recommended replacement: Logstash. With Logstash the retrieval of the Twitter data is executed in a different process, possibly even on a different machine. This helps in scaling Logstash and Elasticsearch separately.

Installation

The installation of Logstash is nearly as easy as the one for Elasticsearch, though you can't start it without a configuration that tells it what you want it to do. You download it, unpack the archive, and there are scripts to start it. If you are fine with using the embedded Elasticsearch instance you don't even need to install Elasticsearch separately. But you need to have a configuration file in place that tells Logstash what to do exactly.

Configuration

The configuration for Logstash normally consists of three sections: the input, optional filters and the output section. There is a multitude of existing components available for each of those. The structure of a config file looks like this (taken from the documentation):

# This is a comment. You should use comments to describe
# parts of your configuration.
input {
  ...
}

filter {
  ...
}

output {
  ...
}

We are using the Twitter input, the elasticsearch_http output and no filters.

Twitter

As with any Twitter API interaction you need to have an account and configure the access tokens.

input {
  twitter {
    # add your data
    consumer_key => ""
    consumer_secret => ""
    oauth_token => ""
    oauth_token_secret => ""
    keywords => ["elasticsearch"]
    full_tweet => true
  }
}

You need to pass in all the credentials as well as the keywords to track. By enabling the full_tweet option you can index a lot more data; by default there are only a few fields, and interesting information like hashtags or mentions is missing. The Twitter river seems to use different field names than the ones that are sent with the raw tweets, so it doesn't seem to be possible to easily index Logstash Twitter data along with data created by the Twitter river. But it should be no big deal to change the Logstash field names as well with a filter.

Elasticsearch

There are three plugins that provide an output to Elasticsearch: elasticsearch, elasticsearch_http and elasticsearch_river. elasticsearch provides the opportunity to bind to an Elasticsearch cluster as a node or via transport, elasticsearch_http uses the HTTP API and elasticsearch_river communicates via the RabbitMQ river. The HTTP version lets you use different Elasticsearch versions for Logstash and Elasticsearch; this is the one I am using. Note that the elasticsearch plugin also provides an option for setting the protocol to http that also seems to work.

output {
  elasticsearch_http {
    host => "localhost"
    index => "conf"
    index_type => "tweet"
  }
}

In contrast to the Twitter river, the Logstash plugin does not create a special mapping for the tweets. I didn't go through all the fields, but for example the coordinates don't seem to be mapped correctly to geo_point, and some fields are analyzed that probably shouldn't be (urls, usernames). If you are using those you might want to prepare your index by supplying it with a custom mapping. By default tweets will be pushed to Elasticsearch every second, which should be enough for any analysis. You can even think about reducing this with the property idle_flush_time.
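Since the mapping problem comes up anyway, here is a sketch of what such a preparatory mapping could look like. The index and type names are taken from the output configuration above; the exact field paths depend on the tweet structure you index, so treat them as assumptions to verify against your data:

curl -XPUT 'http://localhost:9200/conf' -d '
{
  "mappings": {
    "tweet": {
      "properties": {
        "coordinates": {
          "properties": {
            "coordinates": { "type": "geo_point" }
          }
        },
        "user": {
          "properties": {
            "screen_name": { "type": "string", "index": "not_analyzed" }
          }
        }
      }
    }
  }
}'

Creating the index with a mapping like this before starting Logstash gives you geo queries on the coordinates and exact-match lookups on the usernames.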
Running

Finally, when all of the configuration is in place, you can execute Logstash using the following command (assuming the configuration is in a file twitter.conf):

bin/logstash agent -f twitter.conf

Nothing left to do but wait for the first tweets to arrive in your local instance at http://localhost:9200/conf/tweet/_search?q=*:*&pretty=true. For the future it would be really useful to prepare a mapping for the fields and a filter that removes some of the unused data. For now you have to check what you would like to use of the data and prepare a mapping in advance.

Reference: An Alternative to the Twitter River – Index Tweets in Elasticsearch with Logstash from our JCG partner Florian Hopf at the Dev Time blog.

Database primary key flavors

Types of primary keys

All database tables must have one primary key column. The primary key uniquely identifies a row within a table, therefore it's bound by the following constraints:

- UNIQUE
- NOT NULL
- IMMUTABLE

When choosing a primary key we must take into consideration the following aspects:

- the primary key may be used for joining other tables through a foreign key relationship
- the primary key usually has an associated default index, so the more compact the data type the less space the index will take
- a simple key performs better than a compound one
- the primary key assignment must ensure uniqueness even in highly concurrent environments

When choosing a primary key generator strategy the options are:

- natural keys, using a column combination that guarantees individual row uniqueness
- surrogate keys, that are generated independently of the current row data

Natural keys

Natural keys' uniqueness is enforced by external factors (e.g. person unique identifiers, social security numbers, vehicle identification numbers). Natural keys are convenient because they have an outside world equivalent and they don't require any extra database processing. We can therefore know the primary key even before inserting the actual row into the database, which simplifies batch inserts. If the natural key is a single numeric value the performance is comparable to that of surrogate keys. For compound keys we must be aware of possible performance penalties:

- compound key joins are slower than single key ones
- compound key indexes require more space than their single key counterparts

Non-numerical keys are less efficient than numeric ones (integer, bigint), for both indexing and joining. A CHAR(17) natural key (e.g. vehicle identification number) occupies 17 bytes as opposed to 4 bytes (32-bit integer) or 8 bytes (64-bit bigint). The initial schema design uniqueness assumptions may not hold true forever. Let's say we had used one specific country's citizen numeric code for identifying all application users. If we now need to support other countries that don't have such a citizen numeric code, or the code clashes with existing entries, then we can conclude that schema evolution is hindered. If the natural key uniqueness constraints change, it's going to be very difficult to update both the primary keys (if we manage to drop the primary key constraints at all) and all associated foreign key relationships.

Surrogate keys

Surrogate keys are generated independently of the current row data, so the other column constraints may freely evolve according to the application business requirements. The database system may manage the surrogate key generation, and most often the key is of a numeric type (e.g. integer or bigint), being incremented whenever there is a need for a new key. If we want to control the surrogate key generation we can employ a 128-bit GUID or UUID. This simplifies batching and may improve the insert performance, since the additional database key generation processing is no longer required. Even if this strategy is not so widely adopted, it's worth considering when designing the database model.
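As a minimal sketch of such application-side key generation (plain JDBC; the table and column names are made up for illustration):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.UUID;

public class UuidKeyExample {

    public static String insertUser(Connection connection, String name) throws SQLException {
        // The key is generated in the application, before the row ever hits
        // the database, so it is immediately usable for batch inserts and
        // for wiring up foreign key references.
        String id = UUID.randomUUID().toString();
        try (PreparedStatement ps = connection.prepareStatement(
                "INSERT INTO users (id, name) VALUES (?, ?)")) {
            ps.setString(1, id);
            ps.setString(2, name);
            ps.executeUpdate();
        }
        return id;
    }
}

The trade-off is index size: a 36-character UUID string (or even its 16-byte binary form) is larger than a bigint, which is one reason this strategy is not more widely adopted.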
When the database identifier generation responsibility falls to the database system, there are several strategies for auto incrementing surrogate keys:

Database engine | Auto incrementing strategy
Oracle          | SEQUENCE
MSSQL           | IDENTITY, SEQUENCE
PostgreSQL      | SEQUENCE, SERIAL TYPE
MySQL           | AUTO_INCREMENT
DB2             | IDENTITY, SEQUENCE
HSQLDB          | IDENTITY, SEQUENCE

Design aspects

Because sequences may be called concurrently from different transactions, they are usually transaction-less.

Database engine | Quote
Oracle          | "When a sequence number is generated, the sequence is incremented, independent of the transaction committing or rolling back"
MSSQL           | "Sequence numbers are generated outside the scope of the current transaction. They are consumed whether the transaction using the sequence number is committed or rolled back"
PostgreSQL      | "Because sequences are non-transactional, changes made by setval are not undone if the transaction rolls back"

The IDENTITY type is defined by the SQL:2003 standard, so it's the standard primary key generator strategy. Some database engines allow you to choose between IDENTITY and SEQUENCE, so you have to decide which one better suits your current schema requirements. Note that Hibernate disables JDBC insert batching when using the IDENTITY generator strategy.

Reference: Database primary key flavors from our JCG partner Vlad Mihalcea at the Vlad Mihalcea's Blog blog.

Interfacing Salesforce with Android

In this article we are going to explore building a simple native Android application that utilizes the Chatter REST API within the Salesforce Platform. To accomplish this, we will use the Salesforce Mobile SDK 2.1, which acts as a wrapper for low-level HTTP functions, allowing us to easily handle OAuth and subsequent REST API calls. The TemplateApp provided in the SDK is really going to be your cleanest starting point. My tutorial essentially uses the structure of the TemplateApp and builds upon it by borrowing and modifying from the REST Explorer sample application; this helps to ensure things are as straightforward as possible. We aren't going to touch on every aspect of building this application, but instead cover the salient points, giving the reader a good starting point and trying to expand on the salesforce.com documentation. This tutorial attempts to serve as a bit of a shim for developers who are not overly familiar with the platform, letting them use the API in a way that is presumably more familiar. A lot of what we'll cover will complement the Salesforce Mobile SDK Developer Guide; throughout this tutorial I will reference the relevant page numbers from that document instead of reproducing that information here in its entirety.

Getting Set Up

I'm using IntelliJ IDEA for this tutorial; this is the IDE that Android Studio is based on. If you're already using Android Studio, there will be no appreciable difference in workflow as we proceed; Eclipse users are good to go. Once you have your IDE set up, we can go about installing the Salesforce Mobile SDK 2.1 (see link in the paragraph above). Salesforce.com recommends a Node.js based installation using the node package manager. We will go an alternate route; instead we are going to clone the repo from GitHub [Page 16]. Once you have your basic environment set up, go to https://developer.salesforce.com/signup and sign up for your Developer Edition (DE) account. For the purposes of this example, I recommend signing up for a Developer Edition even if you already have an account. This ensures you get a clean environment with the latest features enabled. Then, navigate to http://login.salesforce.com to log into your developer account. After you've completed your registration, follow the instructions in the Mobile SDK Guide for creating a Connected App [Page 13]. For the purposes of this tutorial you only need to fill out the required fields.

The Callback URL provided for OAuth does not have to be a valid URL; it only has to match what the app expects in this field. You can use any custom prefix, such as sfdc://. Important: for a native app you MUST put "Perform requests on your behalf at any time (refresh_token)" in your selected OAuth scopes or the server will reject you, and nobody likes rejection. The Mobile SDK Guide kind of glosses over this point; for more details see http://github.com/forcedotcom/SalesforceMobileSDK-iOS/issues/211#issuecomment-23953366. When you're done, you should be shown a page that contains your Consumer Key and Secret among other things.

Now that we've taken care of things on the server side, let's shift our focus over to setting up our phone app. First, we're going to start a new Project in IntelliJ; make sure you choose Application Module and not Gradle: Android Application Module, as the way the project will be structured doesn't play nice with the Gradle build system. Name it whatever you want, but be sure to uncheck the box that says Create "Hello World!" Activity, as we won't be needing that.
Now that you've created your project, go to File -> Import Module... Navigate to the directory where you cloned the Mobile SDK repo, expand the native directory and you should see a project named "SalesforceSDK" with an IntelliJ logo next to it. Select it and hit OK. On the next screen, make sure the option to import from an external model is selected, and that the Eclipse list item is highlighted. Click next, and then click next again on the following screen without making any changes. When you reach the final screen, check the box next to SalesforceSDK and then click finish. IntelliJ will now import the Eclipse project (Salesforce SDK) into your project as a module.

The Salesforce Mobile SDK is now yours to command... almost. Go to File -> Project Structure... Select 'Facets' under 'Project Settings', then choose the one that has Salesforce SDK in parentheses; make sure the Library module box is checked. Now, select the other one, then select the Packaging tab, and make sure the Enable manifest merging box is checked. Next, select 'Modules' from the 'Project Settings' list, then select the SalesforceSDK module. Under the dependencies tab there should be an item with red text; right-click on it and remove it. From there, click on <your module name>; under the dependencies tab click the green '+', select 'Module Dependency...', Salesforce SDK should be your only option, click 'Ok'. Now select 'Apply' in the Project Structure window and then click 'Ok'.

Making the calls

Create a file named bootconfig.xml in res/values/; the content of that file should be as follows:

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <string name="remoteAccessConsumerKey">
        YOUR CONSUMER KEY
    </string>
    <string name="oauthRedirectURI">
        YOUR REDIRECT URI
    </string>
    <string-array name="oauthScopes">
        <item>chatter_api</item>
    </string-array>
    <string name="androidPushNotificationClientId"></string>
</resources>

Remember the connected app we created earlier? That's where you will find the consumer key and redirect (callback) URI. For the curious: despite the fact that we specified refresh_token in our OAuth scopes server-side, we don't need to define it here. The reasoning behind this is that this scope is always required to access the platform from a native app, so the Mobile SDK includes it automatically.

Next, make sure your String.xml file looks something like this:

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <string name="account_type">com.salesforce.samples.templateapp.login</string>
    <string name="app_name"><b>Template</b></string>
    <string name="app_package">com.salesforce.samples.templateapp</string>
    <string name="api_version">v30.0</string>
</resources>

The above values should be unique to your app. Now create another class named KeyImpl:

public class KeyImpl implements KeyInterface {

    @Override
    public String getKey(String name) {
        return Encryptor.hash(name + "12s9adpahk;n12-97sdainkasd=012",
                name + "12kl0dsakj4-cxh1qewkjasdol8");
    }
}

Once you have done this, create an arbitrary activity with a corresponding layout that extends SalesforceActivity, and populate it as follows: ...

Fabric8 HTTP Gateway

I recently put together a quick GitHub project to show the Fabric8 HTTP gateway in action. It contains a sample project that you can use to test out the HTTP Gateway. The example/camel/cxf profile that comes with Fabric8 basically does the same thing now.

Fabric8 Gateway

The Fabric8 project — pronounced "fabricate" — is a practical DevOps framework for services running on the JVM. Things like automated deployment and centralized configuration management come out of the box and are consistent regardless of which JVM container (or no container — microservices) you use. One of the other cool features that Fabric8 gives you out of the box is the ability to dynamically look up, load balance, and version your services (MQ, REST/http, SOAP/http, etc.). Clients that live within a "fabric" created by Fabric8 can automatically take advantage of this. Your external clients can too, with the Fabric8 Gateway feature. When combined with Apache Camel routes that expose CXF endpoints, you can get very powerful service discovery using Fabric8. The sample project comes with three simple REST implementations and deployments that you can use to exercise and test out the Gateway for yourself.

How To

First, start by grabbing Fabric8 or its downstream cousin, supported by Red Hat: JBoss Fuse. Start it up:

fabric8-home$ ./bin/fabric8

Or on JBoss Fuse:

fuse-home$ ./bin/fuse

Next, you'll need to build this project:

project-home$ mvn clean install

And navigate to one of the sub-projects in the sample distro (example: beer-service). Now you'll have to invoke the fabric8-maven-plugin to install the profile into Fabric8/JBoss Fuse. See the fabric8-maven-plugin documentation for more details on what it does and how to set it up:

beer-service$ mvn fabric8:deploy

Now navigate to the web console (http://localhost:8181) and go to the Wiki tab. You should see your profile there under the loadbalancer group. These profiles are the declarative description of what resources need to be deployed to a JVM container. You can read more about Fabric8 Profiles to get a more thorough understanding. In this case, we're deploying some Camel routes and describing their dependencies on some features that automatically register the CXF endpoints into the API registry.

Now create a new container with that profile. This new container will host the Camel routes that implement the REST service functionality, and you should end up with a new beer container. Then add another container and give it the http gateway profile, so that you have both your beer container and your http gateway container. Now you can ping the beer service through the gateway at http://localhost:9000/cxf/beer.

If you have any questions about this that the screen shots don't capture, please let me know in the comments. The HTTP Gateway is a very powerful feature of Fabric8. For JBoss Fuse this feature is in tech preview.

Reference: Fabric8 HTTP Gateway from our JCG partner Christian Posta at the Christian Posta – Software Blog blog.

Java SE 8 new features tour: The Big change, in Java Development world

I am proudly one of the Adopt-OpenJDK members, like other professional team members, though I joined only 8 months ago. We went through all stages of Java SE 8 development — compilation, coding, discussions, etc. — until it was brought to life, and it was released on March 18th, 2014 and is now available for you. I am happy to announce this series, "Java SE 8 new features tour", which I am going to write with examples to streamline your Java SE 8 knowledge gain, development experience, and grasp of the new features and APIs. It will leverage your knowledge, enhance the way you code, and increase your productivity. I hope you enjoy it as much as I do writing it.

We will take a tour of the new major and important features in Java SE 8 (projects and APIs), the platform designed to support faster and easier Java development. We will learn about Project Lambda, a new syntax to support lambda expressions in Java code; check the new Stream API for processing collections and managing parallel processing; calculate timespans with the DateTime API for representing, managing and calculating date and time values; and look at Nashorn, a new engine to better support the use of JavaScript code with the Java Virtual Machine. Finally, I will also cover some lesser-known features, such as new methods for joining strings into lists and other features that will help you in daily tasks. For more about Java SE 8 features and tutorials, I advise you to consult the Java Tutorials on the official site and the Java SE 8 API documentation too.

The topics we are going to cover during this series will include:

- Installing Java SE 8, notes and advice.
- Introducing Java SE 8 main features, the big change.
- Working with lambda expressions and method references.
- Traversing collections with streams.
- Calculating timespans with the new DateTime API.
- Running JavaScript from Java with Nashorn.
- Miscellaneous new features and API changes.

Installing Java SE 8, notes and advice

Installing Java SE 8 on Windows

In order to run Java SE 8 on Microsoft Windows, first check which version you have. Java SE 8 is supported on Windows 8, 7, Vista, and XP. Specifically: for Windows 8 or 8.1, you'll need the desktop version of Windows; Windows RT is not supported. You can run Java SE 8 on any version of Windows 7, and on the most recent versions of Windows Vista and Windows XP. On server-based versions of Windows, you can run 2008 and the 64-bit version of 2012.

If you want to work with Java applets you'll need a 64-bit browser; these include Internet Explorer 7.0 and above, Firefox 3.6 and above, and Google Chrome, which is supported on Windows but not on Mac.

You can download the Java Developer Kit for Java SE 8 from java.oracle.com. That will take you to the current Java home page. Click Java SE under Top Downloads, then click the Download link for Java 8.

Installing Java SE 8 on Mac

In order to work with Java SE 8 on Mac OS X, you must have an Intel-based Mac running Mac OS X 10.7.3 (that's Lion) or later. If you have an older version of Mac OS, you won't be able to program or run Java 8 applications. In order to install Java SE 8 you'll need administrative privileges on your Mac. And in order to run Java applets within a browser you'll need to use a 64-bit browser, such as Safari or Firefox. Google Chrome is a 32-bit browser, and won't work for this purpose.

As described earlier for installing Java SE 8 on Windows, the same website has the Mac OS .dmg version to download and install.
It actually contains versions for all operating systems; however, our focus here is on Windows and Mac. Now you're ready to start programming with Java SE 8 on both Windows and Mac OS X platforms. After we have installed Java SE 8 properly, let's dive into the first point and have a look at the Java SE 8 main features in a nutshell, to begin our coding tour in our favorite IDE.

Introducing Java SE 8 main features, the big change

An overview of JSR 337: Java SE 8 Release Contents

Java SE 8 is a major release for the Java programming language and the Java virtual machine. It includes many changes. Some have gotten more coverage than others, like lambda expressions, but I'm going to talk about both the major changes and a few of the minor ones.

JSR 335: Lambda Expressions

Probably the most attention has gone to Project Lambda, a set of new syntactical capabilities that let Java developers work as functional programmers. This includes lambda expressions, method references and a few other capabilities.

JSR 310: Date and Time API

There is a new API for managing dates and times, replacing the older classes. Those older classes are still in the Java Runtime, but as you build new applications you might want to move to this new set of capabilities, which lets you streamline your code and be a little more intuitive in how you program. There are new classes to manage local dates, times and time zones, and for calculating differences between different times.

The Stream API

The Stream API adds new tools for managing collections, including lists, maps, sets, and so on. A stream allows you to deal with each item in a collection without having to write explicit looping code. It also lets you spread your processing over multiple CPUs, so for large, complex data sets you can see significant performance improvements.

Project Nashorn

The Nashorn JavaScript engine is new to Java SE 8 too. This is a completely new JavaScript engine, written from scratch, that lets you code in JavaScript while integrating Java classes and objects. Nashorn's goal is to implement a lightweight, high-performance JavaScript runtime in Java on the native JVM. This project intends to enable Java developers to embed JavaScript in Java applications via JSR-223 and to develop freestanding JavaScript applications using the jrunscript command-line tool. In the article on Nashorn, I'll describe how to run Nashorn code from the command line, but also how to write JavaScript in separate files and then execute those files from your Java code.

Concurrency API enhancements

There are also enhancements to the concurrency framework, which lets you manage and accumulate values in multiple threads.

New tools for strings and numbers

There are lots of smaller changes as well. There are new tools for creating delimited lists in the String class and other new classes, and there are tools for aggregating numbers, including integers, longs, doubles, and so on.

Miscellaneous New Features

There are also tools for doing a better job of detecting null situations, and I'll describe all of these during the series. And I'll describe how to work with files, using new convenience methods.
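To make the overview above a little more concrete, here is a minimal sketch that combines the Stream API and the new Date and Time API (the class name is made up for illustration):

import java.time.LocalDate;
import java.time.Period;
import java.util.Arrays;
import java.util.List;

public class Java8Taster {

    public static void main(String[] args) {
        // Stream API: process a collection without explicit looping code
        List<String> names = Arrays.asList("Anna", "Bob", "Alice");
        names.stream()
             .filter(name -> name.startsWith("A"))
             .forEach(System.out::println);

        // Date and Time API: calculate the timespan since the release date
        LocalDate release = LocalDate.of(2014, 3, 18);
        Period elapsed = Period.between(release, LocalDate.now());
        System.out.println("Java 8 has been out for " + elapsed.getYears()
                + " years, " + elapsed.getMonths() + " months and "
                + elapsed.getDays() + " days.");
    }
}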
So, when is Java SE 8 available? The answer is: now. It was released on March 18, 2014. For developers who use Java to build client-side applications, the JavaFX rich internet application framework supports Java 8 now, and most of the Java Enterprise Edition vendors support Java 8 too. Whether you move to Java SE 8 right away depends on the kinds of projects you're working on. For many server and client-side applications, it's available immediately. Not for Android yet, though. Android developers beware: Java SE 8 syntax and APIs are not supported in Android at this point. It's only very recently that Android moved to some of the newest Java 7 syntax, so it might take some time before Android supports this newest syntax or the newest APIs. But for all other Java developers, it's worth taking a look at these new capabilities.

What about IDEs? Java SE 8 is supported by all of the major Java development environments, including Oracle's NetBeans, IntelliJ IDEA, and Eclipse. For this series I'll be doing all of my demos in NetBeans, using NetBeans version 8, which is available to download from https://netbeans.org/downloads/.

However, before we start diving into this series, let's first check that we have installed Java SE 8 properly and start a new project under NetBeans, which will contain all the code we are going to write. Then we'll develop some lambda code to test whether our project is working properly with Java SE 8. Alternatively, you can download the series source code from my GitHub account, open it with NetBeans and follow what I am showing next and in upcoming series code. Project on GitHub: https://github.com/mohamed-taman/JavaSE8-Features

Hello world application on Java SE 8 with a lambda expression

Steps (not required if you are navigating my code):

1. Open NetBeans 8 -> from File -> New Project -> on the left choose Maven -> on the right choose Java Application -> click Next.
2. Follow the screenshot's variable definitions, or change to your favorite names and values -> then click Finish.
3. If everything is okay you should have the corresponding structure in the project navigator.
4. Click on project "Java8Features" -> click File in the upper menu -> then Project Properties.
5. Under Categories -> on the left choose Sources, then check that "Source/Binary Format" is 1.8 -> on the left open Build and choose Compiler, then check that "Java Platform" is pointing to your current JDK 8 installation -> click Ok.
6. If JDK 8 is not present, go to Tools -> choose Java Platforms -> Add Platform -> then choose Java Standard Edition -> then point to your installed JDK 8.
7. Now our project is configured to work with Java 8, so let's add some lambda code.
8. On package "eg.com.tm.java8.features", right click and select New from the menu -> Java Interface -> name it Printable, under the package "eg.com.tm.java8.features.overview" -> click Finish.
9. Implement the Printable interface as follows:
/*
 * Copyright (C) 2014 mohamed_taman
 *
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <http://www.gnu.org/licenses/>.
 */
package eg.com.tm.java8.features.overview;

/**
 * @author mohamed_taman
 */
@FunctionalInterface
public interface Printable {
    public void print();
}

In the same package add the following class named "Print", with a main method, as follows:

/*
 * Copyright (C) 2014 mohamed_taman
 * (same GNU General Public License header as above)
 */
package eg.com.tm.java8.features.overview;

import static java.lang.System.out;

/**
 * @author mohamed_taman
 */
public class Print {
    public static void main(String[] args) {
        Printable job = () -> out.println("Java SE 8 is working "
                + "and Lambda Expression too.");
        job.print();
    }
}

Right click on the Print class and choose Run. If everything is okay then you should see the following output:

------------------------------------------------------------------------
Building Java8Features 1.0-SNAPSHOT
------------------------------------------------------------------------
--- exec-maven-plugin:1.2.1:exec (default-cli) @ Java8Features ---
Java SE 8 is working and Lambda Expression too.
------------------------------------------------------------------------
BUILD SUCCESS

Congratulations, your Java SE 8 project works fine, so let's explain what we have written. Most of this code would work on Java 7, but there's an annotation here that was added in Java SE 8: FunctionalInterface. If your NetBeans environment isn't correctly configured for Java 8, this annotation will cause an error because it won't be recognized as valid Java code. I don't see an error, so that's a good sign that NetBeans is working as I hoped. Next I'll open the class definition named Print.java. This is a class with a main method, so I can run it as a console application, and it has a critical line of new Java 8 syntax. It's creating an instance of that functional interface I just showed you using a lambda expression, a style of syntax that didn't exist in Java prior to Java 8. I'll explain what this syntax is doing early in the next article. But all you need to know right now is that if this code isn't causing any errors, then once again NetBeans is recognizing it as valid Java syntax. I'm creating an instance of that interface and then calling that interface's print method. And so, I'll run the code. I'll click the Run button on my toolbar and in my console I see a successful result. I've created an object which is an instance of that interface using a lambda expression, I've called its method, and it's outputting a string to the console. So, if this is all working, you're in great shape. You're ready to get started programming with Java SE 8 in NetBeans. If you had any problems along the way, go back to the earlier steps and walk through them.
One step at a time.

Resources:

- The Java Tutorials, Lambda Expressions
- JSR 310: Date and Time API
- JSR 337: Java SE 8 Release Contents
- OpenJDK website
- Java Platform, Standard Edition 8, API Specification

Reference: Java SE 8 new features tour: The Big change, in Java Development world from our JCG partner Mohamed Taman at the Improve your life Through Science and Art blog.

Oracle Certified Associate and Professional, Java SE 7 Programmer

This certification was one of the first exams I considered after I was done with my college courses on Java and object-oriented programming. This was a time when I started working in programming and sort of needed to improve my rather basic knowledge in this area. However, it took me almost two years to make the decision to go for it (in the meantime came the change to Java SE 7 and also the revamp of the certification path by Oracle). This had both positive and negative effects. Upsides include more recent language knowledge being tested, as well as a great way to prepare for both the certification and my thesis. On the other hand, the older SCJP exam for Java 6 was split into two exams, increasing the overall price, and the new path also covers far more ground because of the additions in the Java 7 release.

About certification

Let's start with a basic description of both exams. Neither exam requires any training, course or additional activity other than taking the exam itself. Based on the lists of topics for each exam and also my experience, these exams do not overlap when it comes to areas being tested. However, you might get a question that also tests some objective from OCAJP, so remember that OCPJP expects you to know the OCAJP material and will use examples including syntax covered there. The exam is administered at the test center of your choosing using standard Oracle/Pearson VUE testing software. When it comes to ordering and taking these exams, it is a pretty automated process and there were no problems at all. The following table presents the most important information regarding the exams.

Basic information    | Associate (OCAJP 7)    | Professional (OCPJP 7)  | Upgrade (OCPJP 7)
Exam Number          | 1Z0-803                | 1Z0-804                 | 1Z0-805
Prerequisites        | none                   | 1Z0-803                 | SCJP 6.0 (by Sun as CX-310-065)
Exam Topics          | associate topics       | professional topics     | upgrade topics
Exam format          | Multiple Choice        | Multiple Choice         | Multiple Choice
Duration             | 150 minutes            | 150 minutes             | 150 minutes
Number of Questions  | 90                     | 90                      | 80
Passing Score        | 63% (57 questions)     | 65% (59 questions)      | 60% (48 questions)
Price                | US$ 245                | US$ 245                 | US$ 245

* You may have seen a different passing score for the OCAJP 7 exam. This was caused by a few changes made to the exam; these are the official values as of writing this post.

There is an upgrade exam for those of you who already own the SCJP 6 certification. This exam tests your knowledge in areas missing from SCJP 6, and if you pass it you will earn the OCPJP 7 certificate. If you are new to this and have no experience with the testing process, it might come as a surprise to you that conditions during the exam are pretty strict. You are going to be recorded by several cameras. Another regulation prohibits any items other than an ID card and the pen with a blank sheet given to you by the test center representative. The quality of test centers varies widely, so be sure to ask people who have already been tested for their opinions and advice.

The exam

Both exams give you 150 minutes to complete a set of 90 questions. All questions I encountered in both exams were in the form of either "select the correct answer" or "choose all that apply". Even though some people have mentioned drag-and-drop questions in the exam, neither I nor my colleagues have seen any. When it comes to the questions, please be careful. As always, read each question carefully and, if in doubt, go word for word until you get the point. OCAJP 7 has pretty evenly distributed questions throughout the exam objectives. However, the OCPJP 7 exam presented a little twist in the form of ever-present threads.
I'm not saying every question included threads, but most of the questions did. Another thing that made this exam more interesting were the questions regarding patterns and design principles. You have to be able to identify good and bad design (loose/tight coupling, high/low cohesion, ...) and also be able to tell which statements are true regarding a given example. The aspect of time and comfort during the exam changes drastically when you move on to OCPJP 7. Let me give an example from my own exams. The OCAJP 7 exam took me about an hour to complete, and I took another hour to thoroughly check all of my answers. After doing so, I decided to turn the test in early since I felt there was nothing more to do (please note that this was the case in 2012 when I took the exam). However, when I was doing the OCPJP 7 exam, it took me almost the whole 150 minutes to complete, leaving me with time to check only the first 4 questions! Having said that, please don't get stuck on one question for too long (unless you are a skilled veteran). You can always mark a question for review and come back to it after you are done with all the questions you can answer without further analysis. In the case of OCPJP 7, getting stuck on a few questions can cost you some unanswered questions, so manage your time carefully. The complexity of the questions rises dramatically, and you need to take that into account during your preparation.

Preparation

OCAJP 7

My primary resource for this exam was the so-called K&B 6 book (check out the resources below). As you might have noticed, this book was published for Java 6, so it is missing all the additions in Java 7 — mainly Project Coin (syntax changes), the new frameworks for concurrency and IO, as well as other improvements. However, the style of this book is suited for beginners and will prepare you for the exam in its respective areas. I spent several weeks preparing, due to the length of this book and my workload at the time. This, combined with a handful of mock tests, self-study of Project Coin and playing around with code, was enough to prepare me for the exam.

OCPJP 7

In the case of OCPJP 7, I didn't bet on a single book, because K&B 6 covered only topics relevant to Java 6, and based on reviews and the titles available at the time of purchase, I decided to go with Oracle Certified Professional Java SE 7 Programmer Exams 1Z0-804 and 1Z0-805: A Comprehensive OCPJP 7 Certification Guide. These books provided enough ground for me to start playing around with code. After 4 or 5 weeks I started with Enthuware's lab and took the exam in the 7th week. Unlike OCAJP 7, this exam really requires some coding experience, due to parts that do not merely test your knowledge of code structure, compilation or program behavior. So keep in mind: the best way to prepare for both exams is to code.

Helpful notes from fellow bloggers:

OCAJP 7
- jeanne's oca/ocajp java programmer I experiences
- How To Prepare for OCAJP 7 Certification Exam
- Passed My OCAJP 7 Certification Exam

OCPJP 7
- jeanne's ocpjp java programmer II experiences
- Top 5 myths and misconceptions about OCPJP 7 exam
- OCPJP 7 1Z0-804 Oracle Certified Professional Java SE 7 Programmer Success Story

Resources

Books

OCAJP 7

SCJP Sun Certified Programmer for Java 6 Exam 310-065 by Bert Bates and Katherine Sierra (known as the K&B book)

A really great book, especially for beginners. The only downside is that when you already know enough about certain topics, reading becomes too long and kind of boring (since the book is suitable even for people just learning Java).
However, it is a really good resource for anyone, and it might even present information you had no idea was true about Java and compilation. The book also contains a handful of mock questions and whole tests. I was able to complete my preparation for OCAJP 7 almost solely using this book. Its only downside is the fact that it was written for Java 6 and does not incorporate the syntax changes introduced in Java 7, like try-with-resources, strings in switch, multi-catch, exception rethrow and others. One of the authors published a summary of topics covered by the book. You might also consider getting the newer version, OCA/OCP Java SE 7 Programmer I & II Study Guide.

OCP Java SE 6 Programmer Practice Exams by Bert Bates and Katherine Sierra

If you find yourself in doubt whether you are ready for the exam, you can check out this book from the same duo as the previous one. However, the tests do include topics now in OCPJP 7, so bear that in mind. I tried several of those tests and I can recommend this book as well. It complements the first one pretty well. With these two at hand you have a pretty solid foundation for your exam preparations.

OCPJP 7

Oracle Certified Professional Java SE 7 Programmer Exams 1Z0-804 and 1Z0-805: A Comprehensive OCPJP 7 Certification Guide by S.G. Ganesh and Tushar Sharma

When I started preparing for OCPJP 7 there were not that many books or guides available. One of those available at the time was this guide. Having read it, I can say it explains the exam objectives pretty clearly, with nice examples. It is shorter than K&B and more focused. This means it is not targeted at beginners any more and expects knowledge of the concepts from OCAJP 7. One thing I really liked were all the examples throughout the book, especially in the area of concurrency and threads. In spite of a few grammatical errors, I can recommend this book, since I used it as my primary resource.

Pro Java 7 NIO.2 by Anghel Leonard

This book is not directly related to the certification, but I happened to read it during my thesis preparations. It is safe to say that this book covers the NIO.2 objectives quite nicely, but its scope is way broader than what is required for the exam. It served me well, though, so I decided to include it in this list.

Labs

OCAJP 7

None.

OCPJP 7

Enthuware Labs for Oracle Certified Professional – Java SE 7 Programmer Exam

There are several companies producing labs with mock tests. Based on reviews online I decided to try out Enthuware's lab. The lab itself works well; you can track your progress, focus on exam areas, benchmark your time and do the usual stuff you would expect of this kind of software. All questions are marked based on their difficulty, and this markup can be hidden in the settings. I found questions marked "very easy" and "easy" not worth my time, so I did not bother with these. The higher difficulties provide interesting questions and the opportunity to solidify your knowledge in the respective areas. I would say it is a good product for the price.

My own

The last thing I am going to highlight are some of my own articles that you may find useful.

OCAJP 7

None so far.

OCPJP 7

Beauty and strangeness of generics

A short article about almost all the gotchas present on OCPJP 7 regarding generics (except interoperability between pre-generic and current collection code).

NIO.2 series

(Ongoing) A series of posts going into more detail than required on the actual exam.
However, you will gain solid knowledge in the covered areas.

Certificate

Oracle uses its Oracle University CertView application to manage your interaction with them, so if you have not registered there yet, you will have to. When you are all done and have received the confirmation emails from Oracle, you should be able to see a table similar to the following in your CertView profile under Review My Exam History and Exam Results.

Oracle Exam Status

Test Start Date | Exam Number | Exam Title              | Grade Indicator | Score Report
04-MAR-14       | 1Z0-804     | Java SE 7 Programmer II | PASS            | View (link)
12-SEP-12       | 1Z0-803     | Java SE 7 Programmer I  | PASS            | View (link)

You will be asked to fill in the address where a hard copy will be sent (you are not required to do this). In my experience it took one or two weeks to get the mail. A PDF version is always available in CertView under Review My Certification History. The envelope contains the certificate along with a card that proves your accomplishment (but I have not yet found any application for it). The last thing that you are entitled to is using the Oracle Certified Associate and Professional logos. They will be available in CertView, so you can download them and use them in your CV or on your web page. This is my first time using them.

Conclusion

Well, it was a rather long way (as you might have noticed, it took me a little more than two and a half years to complete these exams) but also a rewarding one. Preparation for these exams is a long journey that offers a lot of new insights into Java and the compiler. It is quite possible that you will develop a certain love-hate relationship with the compiler itself (and will be able to replace it in many cases!). There were many areas I only knew from my college years that needed improvement, since I wasn't using them in my work. After all the studying and playing around with little code snippets, I could honestly feel improvements in certain areas. You might learn things that will allow you to produce fewer lines of more readable and easily understandable code. And this is why I would recommend these exams to you: your general understanding of the code will increase (among other positive things). The only downside is the rather big scope of the exams and the time required for preparation. All in all, a great learning experience and a great way to discuss things you do and like with your friends and colleagues. So if you are considering these exams I would invite you to try them and wish you the best of luck on your way to becoming an Oracle Certified Professional.

Reference: Oracle Certified Associate and Professional, Java SE 7 Programmer from our JCG partner Jakub Stas at the Jakub Stas blog.

A Tour Through elasticsearch-kopf

When I needed a plugin to display the cluster state of Elasticsearch or needed some insight into the indices, I normally reached for the classic plugin elasticsearch-head. As it is recommended a lot and seems to be the unofficial successor, I recently took a more detailed look at elasticsearch-kopf. And I liked it. I am not sure why elasticsearch-kopf came into existence, but it seems to be a clone of elasticsearch-head (kopf means head in German, so it is even the same name).

Installation

elasticsearch-kopf can be installed like most of the plugins, using the script in the Elasticsearch installation. This is the command that installs version 1.1, which is suitable for the 1.1.x branch of Elasticsearch:

bin/plugin --install lmenezes/elasticsearch-kopf/1.1

elasticsearch-kopf is then available at the URL http://localhost:9200/_plugin/kopf/.

Cluster

On the front page you will see a diagram similar to the one elasticsearch-head provides: the overview of your cluster with all the shards and their distribution across the nodes. The page is refreshed automatically, so you will see joining or leaving nodes immediately. You can adjust the refresh rate in the settings dropdown just next to the kopf logo (by the way, the header reflects the state of the cluster, so it might change its color from green to yellow to red).

Also, there are lots of different settings that can be reached via this page. On top of the node list there are 4 icons for creating a new index, deactivating shard allocation, the cluster settings and the cluster diagnosis options. Creating a new index brings up a form for entering the index data. You can also load the settings from an existing index or just paste the settings JSON in the field on the right side.

The icon for disabling the shard allocation just toggles it; disabling shard allocation can be useful during a cluster restart. Using the cluster settings you can reach a form where you can adjust lots of values regarding your cluster, the routing and recovery. The cluster health button finally lets you load different JSON documents containing more details on the cluster health, e.g. the node stats and the hot threads. Using the little dropdown just next to the index name you can execute some operations on the index: you can view the settings, open and close the index, optimize and refresh the index, clear the caches, adjust the settings or delete the index.

When opening the form for the index settings you will be overwhelmed at first; I didn't know there are so many settings. What is really useful is that there is an info icon next to each field that will tell you what the field is about — a great opportunity to learn about some of the settings. What I find really useful is that you can adjust the slow index log settings directly. The slow log can also be used to log any incoming queries, so it is sometimes useful for diagnostic purposes. Finally, back on the cluster page, you can get more detailed information on the nodes or shards by clicking on them. This will open a lightbox with more details.

REST

The REST menu entry on top brings you to another view, which is similar to the one Sense provided. You can enter queries and let them be executed for you. There is a request history, you have highlighting and you can format the request document, but unfortunately the interface is missing autocompletion. Nevertheless I suppose this can be useful if you don't like to fiddle with curl.
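If you want something to paste into the REST view right away, even a trivial query will do. This sketch searches across all indices; substitute an index name in the path to narrow it down:

POST /_search
{
  "query": {
    "match_all": {}
  }
}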
Aliases

The aliases tab gives you a convenient form for managing your index aliases and all the relevant additional information. You can add filter queries for your alias or influence the index or search routing. On the right side you can see the existing aliases and remove them if they are not needed.

Analysis

The analysis tab brings you to a feature that is also very popular in the Solr administration view: you can test the analyzers for different values and different fields. This is a very valuable tool while building a more complex search application. Unfortunately the information you can get from Elasticsearch is not as detailed as what you can get from Solr: it will only contain the end result, so you can't really see which tokenizer or filter caused a certain change.

Percolator

On the percolator tab you can use a form to register new percolator queries and view existing ones. There doesn't seem to be a way to do the actual percolation, but this page can be useful if you are using the percolator extensively.

Warmers

The warmers tab can be used to register index warmer queries.

Repository

The final tab is for the snapshot and restore feature. You can create repositories and snapshots and restore them. Though I can imagine that most people are automating the snapshot creation, this can be a very useful form.

Conclusion

I hope you could see in this post that elasticsearch-kopf can be really useful. It is very unlikely that you will ever need all of the forms, but it is good to have them available. The cluster view and the REST interface can be very valuable for your daily work, and I guess there will be new features coming in the future.

Reference: A Tour Through elasticsearch-kopf from our JCG partner Florian Hopf at the Dev Time blog.

Java 8 Friday: 10 Subtle Mistakes When Using the Streams API

At Data Geekery, we love Java. And as we're really into jOOQ's fluent API and query DSL, we're absolutely thrilled about what Java 8 will bring to our ecosystem.

Java 8 Friday

Every Friday, we're showing you a couple of nice new tutorial-style Java 8 features, which take advantage of lambda expressions, extension methods, and other great stuff. You'll find the source code on GitHub.

10 Subtle Mistakes When Using the Streams API

We've done all the SQL mistakes lists:

- 10 Common Mistakes Java Developers Make when Writing SQL
- 10 More Common Mistakes Java Developers Make when Writing SQL
- Yet Another 10 Common Mistakes Java Developers Make When Writing SQL (You Won't BELIEVE the Last One)

But we haven't done a top 10 mistakes list with Java 8 yet! For today's occasion (it's Friday the 13th), we'll catch up with what will go wrong in YOUR application when you're working with Java 8 (it won't happen to us, as we're stuck with Java 6 for another while).

1. Accidentally reusing streams

Wanna bet, this will happen to everyone at least once. Like the existing "streams" (e.g. InputStream), you can consume streams only once. The following code won't work:

IntStream stream = IntStream.of(1, 2);
stream.forEach(System.out::println);

// That was fun! Let's do it again!
stream.forEach(System.out::println);

You'll get a:

java.lang.IllegalStateException: stream has already been operated upon or closed

So be careful when consuming your stream. It can be done only once.

2. Accidentally creating "infinite" streams

You can create infinite streams quite easily without noticing. Take the following example:

// Will run indefinitely
IntStream.iterate(0, i -> i + 1)
         .forEach(System.out::println);

The whole point of streams is the fact that they can be infinite, if you design them to be. The only problem is that you might not have wanted that. So, be sure to always put proper limits:

// That's better
IntStream.iterate(0, i -> i + 1)
         .limit(10)
         .forEach(System.out::println);

3. Accidentally creating "subtle" infinite streams

We can't say this enough. You WILL eventually create an infinite stream, accidentally. Take the following stream, for instance:

IntStream.iterate(0, i -> ( i + 1 ) % 2)
         .distinct()
         .limit(10)
         .forEach(System.out::println);

So...

- we generate alternating 0's and 1's
- then we keep only distinct values, i.e. a single 0 and a single 1
- then we limit the stream to a size of 10
- then we consume it

Well... the distinct() operation doesn't know that the function supplied to the iterate() method will produce only two distinct values. It might expect more than that. So it'll forever consume new values from the stream, and the limit(10) will never be reached. Tough luck, your application stalls.

4. Accidentally creating "subtle" parallel infinite streams

We really need to insist that you might accidentally try to consume an infinite stream. Let's assume you believe that the distinct() operation should be performed in parallel. You might be writing this:

IntStream.iterate(0, i -> ( i + 1 ) % 2)
         .parallel()
         .distinct()
         .limit(10)
         .forEach(System.out::println);

Now, we've already seen that this will turn forever. But previously, at least, you only consumed one CPU on your machine. Now, you'll probably consume four of them, potentially occupying pretty much all of your system with an accidental infinite stream consumption. That's pretty bad. You can probably hard-reboot your server / development machine after that. Have a last look at what my laptop looked like prior to exploding.
5. Mixing up the order of operations

So, why did we insist on you definitely accidentally creating infinite streams? It's simple. Because you may just accidentally do it. The above stream can be perfectly consumed if you switch the order of limit() and distinct():

IntStream.iterate(0, i -> ( i + 1 ) % 2)
         .limit(10)
         .distinct()
         .forEach(System.out::println);

This now yields:

0
1

Why? Because we first limit the infinite stream to 10 values (0 1 0 1 0 1 0 1 0 1), before we reduce the limited stream to the distinct values contained in it (0 1). Of course, this may no longer be semantically correct, because you really wanted the first 10 distinct values from a set of data (you just happened to have "forgotten" that the data is infinite). No one really wants 10 random values, and only then to reduce them to be distinct. If you're coming from a SQL background, you might not expect such differences. Take SQL Server 2012, for instance. The following two SQL statements are the same:

-- Using TOP
SELECT DISTINCT TOP 10 *
FROM i
ORDER BY ..

-- Using FETCH
SELECT *
FROM i
ORDER BY ..
OFFSET 0 ROWS
FETCH NEXT 10 ROWS ONLY

So, as a SQL person, you might not be as aware of the importance of the order of streams operations.

6. Mixing up the order of operations (again)

Speaking of SQL, if you're a MySQL or PostgreSQL person, you might be used to the LIMIT .. OFFSET clause. SQL is full of subtle quirks, and this is one of them. The OFFSET clause is applied FIRST, as suggested in SQL Server 2012's (i.e. the SQL:2008 standard's) syntax. If you translate MySQL / PostgreSQL's dialect directly to streams, you'll probably get it wrong:

IntStream.iterate(0, i -> i + 1)
         .limit(10) // LIMIT
         .skip(5)   // OFFSET
         .forEach(System.out::println);

The above yields:

5
6
7
8
9

Yes. It doesn't continue after 9, because the limit() is now applied first, producing (0 1 2 3 4 5 6 7 8 9). skip() is applied after, reducing the stream to (5 6 7 8 9). Not what you may have intended. BEWARE of the LIMIT .. OFFSET vs. "OFFSET .. LIMIT" trap!

7. Walking the file system with filters

We've blogged about this before. What appears to be a good idea is to walk the file system using filters:

Files.walk(Paths.get("."))
     .filter(p -> !p.toFile().getName().startsWith("."))
     .forEach(System.out::println);

The above stream appears to be walking only through non-hidden directories, i.e. directories that do not start with a dot. Unfortunately, you've again made mistakes #5 and #6. walk() has already produced the whole stream of subdirectories of the current directory. Lazily, though, but logically containing all sub-paths. Now, the filter will correctly filter out paths whose names start with a dot ".". E.g. .git or .idea will not be part of the resulting stream. But these paths will be: .\.git\refs, or .\.idea\libraries. Not what you intended. Now, don't fix this by writing the following:

Files.walk(Paths.get("."))
     .filter(p -> !p.toString().contains(File.separator + "."))
     .forEach(System.out::println);

While that will produce the correct output, it will still do so by traversing the complete directory subtree, recursing into all subdirectories of "hidden" directories. I guess you'll have to resort to good old JDK 1.0 File.list() again. The good news is, FilenameFilter and FileFilter are both functional interfaces.
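As a minimal sketch of that old-school alternative (the class name is made up; note that this lists direct children only and does not recurse):

import java.io.File;

public class HiddenFileFilterSketch {

    public static void main(String[] args) {
        // FilenameFilter is a functional interface, so a lambda works here.
        // Unlike the Files.walk() attempts above, the hidden directories are
        // never descended into in the first place.
        String[] visible = new File(".").list(
                (dir, name) -> !name.startsWith("."));
        if (visible != null) {
            for (String name : visible) {
                System.out.println(name);
            }
        }
    }
}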
Consider the following list from 0..9:

// Of course, we create this list using streams:
List<Integer> list = IntStream.range(0, 10)
                              .boxed()
                              .collect(toCollection(ArrayList::new));

Now, let’s assume that we want to remove each element while consuming it:

list.stream()
    // remove(Object), not remove(int)!
    .peek(list::remove)
    .forEach(System.out::println);

Interestingly enough, this will work for some of the elements! The output you might get is this one:

0 2 4 6 8 null null null null null
java.util.ConcurrentModificationException

If we introspect the list after catching that exception, there’s a funny finding. We’ll get:

[1, 3, 5, 7, 9]

Heh, it “worked” for all the odd numbers. Is this a bug? No, it looks like a feature. If you’re delving into the JDK code, you’ll find this comment in ArrayList.ArrayListSpliterator:

/*
 * If ArrayLists were immutable, or structurally immutable (no
 * adds, removes, etc), we could implement their spliterators
 * with Arrays.spliterator. Instead we detect as much
 * interference during traversal as practical without
 * sacrificing much performance. We rely primarily on
 * modCounts. These are not guaranteed to detect concurrency
 * violations, and are sometimes overly conservative about
 * within-thread interference, but detect enough problems to
 * be worthwhile in practice. To carry this out, we (1) lazily
 * initialize fence and expectedModCount until the latest
 * point that we need to commit to the state we are checking
 * against; thus improving precision. (This doesn't apply to
 * SubLists, that create spliterators with current non-lazy
 * values). (2) We perform only a single
 * ConcurrentModificationException check at the end of forEach
 * (the most performance-sensitive method). When using forEach
 * (as opposed to iterators), we can normally only detect
 * interference after actions, not before. Further
 * CME-triggering checks apply to all other possible
 * violations of assumptions for example null or too-small
 * elementData array given its size(), that could only have
 * occurred due to interference. This allows the inner loop
 * of forEach to run without any further checks, and
 * simplifies lambda-resolution. While this does entail a
 * number of checks, note that in the common case of
 * list.stream().forEach(a), no checks or other computation
 * occur anywhere other than inside forEach itself. The other
 * less-often-used methods cannot take advantage of most of
 * these streamlinings.
 */

Now, check out what happens when we tell the stream to produce sorted() results:

list.stream()
    .sorted()
    .peek(list::remove)
    .forEach(System.out::println);

This will now produce the following, “expected” output:

0 1 2 3 4 5 6 7 8 9

And the list after stream consumption? It is empty:

[]

So, all elements are consumed, and removed correctly. The sorted() operation is a “stateful intermediate operation”, which means that subsequent operations no longer operate on the backing collection, but on an internal state. It is now “safe” to remove elements from the list! Well… can we really? Let’s proceed with a parallel(), sorted() removal:

list.stream()
    .sorted()
    .parallel()
    .peek(list::remove)
    .forEach(System.out::println);

This now yields:

7 6 2 5 8 4 1 0 9 3

And the list contains:

[8]

Eek. We didn’t remove all elements!? Free beers (and jOOQ stickers) go to anyone who solves this streams puzzler! This all appears quite random and subtle; we can only suggest that you never actually modify a backing collection while consuming a stream. It just doesn’t work.
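If what you actually want is to remove elements matching a predicate, a safer route (our suggestion; the original article doesn’t cover it) is Java 8’s Collection.removeIf(), which mutates the list in one internal pass without any stream pipeline reading it at the same time:

import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class SafeRemoval {
    public static void main(String[] args) {
        List<Integer> list = IntStream.range(0, 10)
                                      .boxed()
                                      .collect(Collectors.toCollection(ArrayList::new));

        // removeIf iterates and removes internally, so no external
        // stream observes the mutation while it happens.
        list.removeIf(i -> i % 2 == 0);

        System.out.println(list); // [1, 3, 5, 7, 9]
    }
}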
9. Forgetting to actually consume the stream

What do you think the following stream does?

IntStream.range(1, 5)
         .peek(System.out::println)
         .peek(i -> {
             if (i == 5)
                 throw new RuntimeException("bang");
         });

When you read this, you might think that it will print (1 2 3 4 5) and then throw an exception. But that’s not correct. It won’t do anything. The stream just sits there, never having been consumed. (And even with a terminal operation, it would only print 1 through 4 and never throw, as IntStream.range(1, 5) excludes its upper bound.)

As with any fluent API or DSL, you might actually forget to call the “terminal” operation. This might be particularly true when you use peek(), as peek() looks an awful lot like forEach(). This can happen with jOOQ just the same, when you forget to call execute() or fetch():

DSL.using(configuration)
   .update(TABLE)
   .set(TABLE.COL1, 1)
   .set(TABLE.COL2, "abc")
   .where(TABLE.ID.eq(3));

Oops. No execute().

10. Parallel stream deadlock

This is now a real goodie for the end! All concurrent systems can run into deadlocks if you don’t properly synchronise things. While finding a real-world example isn’t obvious, finding a forced example is. The following parallel() stream is almost guaranteed to run into a deadlock:

Object[] locks = { new Object(), new Object() };

IntStream
    .range(1, 5)
    .parallel()
    .peek(Unchecked.intConsumer(i -> {
        synchronized (locks[i % locks.length]) {
            Thread.sleep(100);

            synchronized (locks[(i + 1) % locks.length]) {
                Thread.sleep(50);
            }
        }
    }))
    .forEach(System.out::println);

Note the use of Unchecked.intConsumer(), which accepts a org.jooq.lambda.fi.util.function.CheckedIntConsumer (a variant of IntConsumer that is allowed to throw checked exceptions, such as Thread.sleep()’s InterruptedException) and wraps it into an ordinary IntConsumer. Well. Tough luck for your machine. Those threads will be blocked forever! The good news is, it has never been easier to produce a schoolbook example of a deadlock in Java! For more details, see also Brian Goetz’s answer to this question on Stack Overflow.

Conclusion

With streams and functional thinking, we’ll run into a massive amount of new, subtle bugs. Few of these bugs can be prevented, except through practice and staying focused. You have to think about how to order your operations. You have to think about whether your streams may be infinite. Streams (and lambdas) are a very powerful tool. But a tool which we need to get the hang of, first.

Reference: Java 8 Friday: 10 Subtle Mistakes When Using the Streams API from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....

#NoEstimates

The main difficulty with forecasting the future is that it hasn’t yet happened. – James Burke

When I first heard about #NoEstimates, I thought it was not only provocative, but also potentially damaging. The idea of working without estimates seems preposterous to many people. It did to me. I mean, how can you plan anything without estimates?

How we use estimates

When I started my career as a software developer, there was a running joke in the company: for each level of management, multiply an estimate by 1.3. If a developer said a task would take 3 months, the team leader would “refine” the estimate to 5 months, and at the program level it grew even further. A joke, but not remote from what I did later as a manager. Let’s first get this out of the way: modifying an estimate is downright disrespectful. As a manager, I’m implying that I know better than the people who are actually doing the work. Another thing that happens a lot is that, in the eyes of management, estimates turn into commitments, which the team now needs to meet. This post is not about these abusive things, although they do exist. Why did the estimation process work like that? A couple of assumptions:

- Work is comprised of learning and development, and these are not linear or sequential. Estimating the work is complex.
- Developers are an optimistic bunch. They don’t think about things going wrong.
- They under-promise, so they can over-deliver.
- They cannot foresee all surprises, and we have more experience, so we’ll introduce some buffers.
- When were they right on the estimates last time?

The results were task lengths in the project plan. So now we “know” the estimate is 6 months, instead of the original 3-month estimate. Of course, we don’t know, and we’re aware of that. After all, we know that plans change over time. The difference is that now we have more confidence in the estimate, so we can plan ahead with dependent work.

Why we need estimates

The idea of estimates is to provide enough confidence in the organization in order to make decisions about the future. To answer questions like:

- Do we have enough capacity to take on more work after the project?
- Should we do this project at all?
- When should marketing and sales be ready for the launch?
- What should the people do until then?

These are very good business questions. The problem is our track record: we’re horrible estimators (I point you to the last bullet). We don’t know much about the future. The whole process of massaging estimates so we can feel better about them seems like we’re relying on a set of crystal balls. And we use these balls to make business decisions. There should be a better way. So what are the alternatives? That is the interesting question. Once we understand that estimates are just one way of making business decisions, and a crappy one at that, we can have an open discussion. The alternative can be cost-of-delay. It could be empirical evidence to forecast against. It can be limited safe-to-fail experiments. It can be any combination or modification of these things, and it can be things we haven’t discovered yet. #NoEstimates is not really about estimates. It’s about making confident, rational, trustworthy decisions. I know what results estimates give. Let’s seek out better ones. For more information about #NoEstimates, you can read more on the blogs of Woody Zuill, Neil Killick and Vasco Duarte.

Reference: #NoEstimates from our JCG partner Gil Zilberfeld at the Geek Out of Water blog....

The Simple Story Paradox

I’ve recently been following the #isTDDDead debate between Kent Beck (@kentbeck), David Heinemeier Hansson (@dhh), and Martin Fowler (@martinfowler) with some interest. I think that it’s particularly beneficial that ideas, which are often taken for granted, can be challenged in a constructive manner. That way you can figure out if they stand up to scrutiny or fall down flat on their faces. The discussion began with @dhh making the following points on TDD and test technique, which I hope I’ve got right. Firstly, the strict definition of TDD includes the following:

- TDD is used to drive unit tests
- You can’t have collaborators
- You can’t touch the database
- You can’t touch the file system
- Fast unit tests, complete in the blink of an eye

He went on to say that you therefore drive your system’s architecture from the use of mocks, and in that way the architecture suffers damage from the drive to isolate and mock everything, whilst the mandatory enforcement of the ‘red, green, refactor’ cycle is too prescriptive. He also stated that a lot of people mistakenly believe that you can’t have confidence in your code, or deliver incremental functionality with tests, unless you go through this mandated, well-paved road of TDD. @kentbeck said that TDD didn’t necessarily include heavy mocking, and the discussion continued…

I’ve paraphrased a little here; however, it was the difference in the interpretation and experience of using TDD that got me thinking. Was it really a problem with TDD, or was it with @dhh’s experience of other developers’ interpretation of TDD? I don’t want to put words into @dhh’s mouth, but it seems like the problem is the dogmatic application of the TDD technique even when it isn’t applicable. I came away with the impression that, in certain development houses, TDD had degenerated into little more than Cargo Cult Programming.

The term Cargo Cult Programming seems to derive from a paper written by someone whom I found truly inspirational, the late Professor Richard Feynman. He presented a paper entitled Cargo Cult Science – Some remarks on science, pseudoscience and learning how not to fool yourself as part of Caltech’s 1974 commencement address. This later became part of his autobiography: Surely You’re Joking, Mr. Feynman!, a book that I implore you to read. In it, Feynman highlights experiments from several pseudosciences, such as educational science, psychology, parapsychology and physics, where the scientific approach of keeping an open mind, questioning everything and looking for flaws in your theory has been replaced by belief, ritualism and faith: a willingness to take other people’s results for granted in lieu of an experimental control. Taken from the 1974 paper, Feynman sums up Cargo Cult Science as:

“In the South Seas there is a cargo cult of people. During the war they saw airplanes land with lots of good materials, and they want the same thing to happen now. So they’ve arranged to imitate things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas–he’s the controller–and they wait for the airplanes to land. They’re doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn’t work. No airplanes land.
So I call these things cargo cult science, because they follow all the apparent precepts and forms of scientific investigation, but they’re missing something essential, because the planes don’t land.”

You can apply this idea to programming, where you’ll find teams and individuals carrying out ritualised procedures and using techniques without really understanding the theory behind them, in the hope that they’ll work and because they are the ‘right thing to do’. In the second talk in the series @dhh came up with an example of what he called “test induced design damage”, and at this I got excited, because it’s something I’ve seen a number of times. The only reservation I had about the Gist code was that, to me, it didn’t seem to result from TDD; that argument seems a little limited. I’d say that it was more a result of Cargo Cult Programming, because in the instances where I’ve come across this example TDD wasn’t used. If you’ve seen the Gist, you may know what I’m talking about; however, that code is in Ruby, which is something I’ve little experience of. In order to explore this in more detail, I thought that I’d create a Spring MVC version and go from there. The scenario here is one where we have a very simple story: all the code does is read an object from the database and place it into the model for display. There’s no additional processing, no business logic and no calculations to perform. The agile story would go something like this:

Title: View User Details
As an admin user
I want to click on a link
So that I can verify a user's details

In this ‘Proper’ N-tier sample, I have a User model object, a controller and service layer and DAO, together with their interfaces and tests. And there’s the paradox: you set out to write the best code you possibly can to implement the story, using the well-known and probably most popular MVC ‘N’ layer pattern, and end up with something that’s total overkill for such a simple scenario. Something, as @dhh would say, is damaged. In my sample code, I’m using the JdbcTemplate class to retrieve a user’s details from a MySQL database, but any DB access API will do.
This is the sample code demonstrating the conventional, ‘right’ way of implementing the story; prepare to do a lot of scrolling…

public class User {

  public static User NULL_USER = new User(-1, "Not Available", "", new Date());

  private final long id;
  private final String name;
  private final String email;
  private final Date createDate;

  public User(long id, String name, String email, Date createDate) {
    this.id = id;
    this.name = name;
    this.email = email;
    this.createDate = createDate;
  }

  public long getId() {
    return id;
  }

  public String getName() {
    return name;
  }

  public String getEmail() {
    return email;
  }

  public Date getCreateDate() {
    return createDate;
  }
}

@Controller
public class UserController {

  @Autowired
  private UserService userService;

  @RequestMapping("/find1")
  public String findUser(@RequestParam("user") String name, Model model) {
    User user = userService.findUser(name);
    model.addAttribute("user", user);
    return "user";
  }
}

public interface UserService {

  public abstract User findUser(String name);
}

@Service
public class UserServiceImpl implements UserService {

  @Autowired
  private UserDao userDao;

  /**
   * @see com.captaindebug.cargocult.ntier.UserService#findUser(java.lang.String)
   */
  @Override
  public User findUser(String name) {
    return userDao.findUser(name);
  }
}

public interface UserDao {

  public abstract User findUser(String name);
}

@Repository
public class UserDaoImpl implements UserDao {

  private static final String FIND_USER_BY_NAME = "SELECT id, name,email,createdDate FROM Users WHERE name=?";

  @Autowired
  private JdbcTemplate jdbcTemplate;

  /**
   * @see com.captaindebug.cargocult.ntier.UserDao#findUser(java.lang.String)
   */
  @Override
  public User findUser(String name) {
    User user;
    try {
      FindUserMapper rowMapper = new FindUserMapper();
      user = jdbcTemplate.queryForObject(FIND_USER_BY_NAME, rowMapper, name);
    } catch (EmptyResultDataAccessException e) {
      user = User.NULL_USER;
    }
    return user;
  }
}

If you take a look at this code, paradoxically it looks fine; in fact it looks like a classic textbook example of how to write an ‘N’ tier MVC application. The controller passes responsibility for sorting out the business rules to the service layer, and the service layer retrieves data from the DB using a data access object, which in turn uses a RowMapper<> helper class to retrieve a User object. When the controller has a User object it injects it into the model ready for display. This pattern is clear and extensible; we’re isolating the database from the service and the service from the controller by using interfaces, and we’re testing everything using both JUnit with Mockito, and integration tests. This should be the last word in textbook MVC coding, or is it? Let’s look at the code. Firstly, there’s the unnecessary use of interfaces. Some would argue that it’s easy to switch database implementations, but who ever does that? [1] Plus, modern mocking tools can create their proxies using class definitions, so unless your design specifically requires multiple implementations of the same interface, using interfaces here is pointless. Next, there is the UserServiceImpl, which is a classic example of the lazy class anti-pattern, because it does nothing except pointlessly delegate to the data access object.
Likewise, the controller is also pretty lazy, as it delegates to the lazy UserServiceImpl before adding the resulting User class to the model: in fact, all these classes are examples of the lazy class anti-pattern. Having written some lazy classes, they are now needlessly tested to death, even the non-event UserServiceImpl class. It’s only worth writing tests for classes that actually perform some logic.

public class UserControllerTest {

  private static final String NAME = "Woody Allen";

  private UserController instance;

  @Mock
  private Model model;

  @Mock
  private UserService userService;

  @Before
  public void setUp() throws Exception {
    MockitoAnnotations.initMocks(this);
    instance = new UserController();
    ReflectionTestUtils.setField(instance, "userService", userService);
  }

  @Test
  public void testFindUser_valid_user() {
    User expected = new User(0L, NAME, "aaa@bbb.com", new Date());
    when(userService.findUser(NAME)).thenReturn(expected);
    String result = instance.findUser(NAME, model);
    assertEquals("user", result);
    verify(model).addAttribute("user", expected);
  }

  @Test
  public void testFindUser_null_user() {
    when(userService.findUser(null)).thenReturn(User.NULL_USER);
    String result = instance.findUser(null, model);
    assertEquals("user", result);
    verify(model).addAttribute("user", User.NULL_USER);
  }

  @Test
  public void testFindUser_empty_user() {
    when(userService.findUser("")).thenReturn(User.NULL_USER);
    String result = instance.findUser("", model);
    assertEquals("user", result);
    verify(model).addAttribute("user", User.NULL_USER);
  }
}

public class UserServiceTest {

  private static final String NAME = "Annie Hall";

  private UserService instance;

  @Mock
  private UserDao userDao;

  @Before
  public void setUp() throws Exception {
    MockitoAnnotations.initMocks(this);
    instance = new UserServiceImpl();
    ReflectionTestUtils.setField(instance, "userDao", userDao);
  }

  @Test
  public void testFindUser_valid_user() {
    User expected = new User(0L, NAME, "aaa@bbb.com", new Date());
    when(userDao.findUser(NAME)).thenReturn(expected);
    User result = instance.findUser(NAME);
    assertEquals(expected, result);
  }

  @Test
  public void testFindUser_null_user() {
    when(userDao.findUser(null)).thenReturn(User.NULL_USER);
    User result = instance.findUser(null);
    assertEquals(User.NULL_USER, result);
  }

  @Test
  public void testFindUser_empty_user() {
    when(userDao.findUser("")).thenReturn(User.NULL_USER);
    User result = instance.findUser("");
    assertEquals(User.NULL_USER, result);
  }
}

public class UserDaoTest {

  private static final String NAME = "Woody Allen";

  private UserDao instance;

  @Mock
  private JdbcTemplate jdbcTemplate;

  @Before
  public void setUp() throws Exception {
    MockitoAnnotations.initMocks(this);
    instance = new UserDaoImpl();
    ReflectionTestUtils.setField(instance, "jdbcTemplate", jdbcTemplate);
  }

  @SuppressWarnings({ "unchecked", "rawtypes" })
  @Test
  public void testFindUser_valid_user() {
    User expected = new User(0L, NAME, "aaa@bbb.com", new Date());
    when(jdbcTemplate.queryForObject(anyString(), (RowMapper) anyObject(), eq(NAME))).thenReturn(expected);
    User result = instance.findUser(NAME);
    assertEquals(expected, result);
  }

  @SuppressWarnings({ "unchecked", "rawtypes" })
  @Test
  public void testFindUser_null_user() {
    when(jdbcTemplate.queryForObject(anyString(), (RowMapper) anyObject(), isNull())).thenReturn(User.NULL_USER);
    User result = instance.findUser(null);
    assertEquals(User.NULL_USER, result);
  }

  @SuppressWarnings({ "unchecked", "rawtypes" })
  @Test
  public void testFindUser_empty_user() {
    when(jdbcTemplate.queryForObject(anyString(), (RowMapper) anyObject(), eq(""))).thenReturn(User.NULL_USER);
    User result = instance.findUser("");
    assertEquals(User.NULL_USER, result);
  }
}

@RunWith(SpringJUnit4ClassRunner.class)
@WebAppConfiguration
@ContextConfiguration({ "file:src/main/webapp/WEB-INF/spring/appServlet/servlet-context.xml",
    "file:src/test/resources/test-datasource.xml" })
public class UserControllerIntTest {

  @Autowired
  private WebApplicationContext wac;

  private MockMvc mockMvc;

  @Before
  public void setUp() throws Exception {
    mockMvc = MockMvcBuilders.webAppContextSetup(wac).build();
  }

  @Test
  public void testFindUser_happy_flow() throws Exception {
    ResultActions resultActions = mockMvc.perform(get("/find1").accept(MediaType.ALL).param("user", "Tom"));
    resultActions.andExpect(status().isOk());
    resultActions.andExpect(view().name("user"));
    resultActions.andExpect(model().attributeExists("user"));
    resultActions.andDo(print());
    MvcResult result = resultActions.andReturn();
    ModelAndView modelAndView = result.getModelAndView();
    Map<String, Object> model = modelAndView.getModel();
    User user = (User) model.get("user");
    assertEquals("Tom", user.getName());
    assertEquals("tom@gmail.com", user.getEmail());
  }
}

In writing this sample code, I’ve added everything I could think of into the mix. You may think that this example is ‘over the top’ in its construction, especially with the inclusion of the redundant interfaces, but I have seen code like this. The benefits of this pattern are that it follows a distinct design understood by most developers; it’s clean and extensible. The downside is that there are lots of classes. More classes take more time to write and, if you ever have to maintain or enhance this code, they’re more difficult to get to grips with. So, what’s the solution? That’s difficult to answer. In the #isTDDDead debate @dhh gives the solution as placing all the code in one class, mixing the data access with the population of the model. If you implement this solution for our user story you still get a User class, but the number of classes you need shrinks dramatically.
@Controller
public class UserAccessor {

  private static final String FIND_USER_BY_NAME = "SELECT id, name,email,createdDate FROM Users WHERE name=?";

  @Autowired
  private JdbcTemplate jdbcTemplate;

  @RequestMapping("/find2")
  public String findUser2(@RequestParam("user") String name, Model model) {
    User user;
    try {
      FindUserMapper rowMapper = new FindUserMapper();
      user = jdbcTemplate.queryForObject(FIND_USER_BY_NAME, rowMapper, name);
    } catch (EmptyResultDataAccessException e) {
      user = User.NULL_USER;
    }
    model.addAttribute("user", user);
    return "user";
  }

  private class FindUserMapper implements RowMapper<User>, Serializable {

    private static final long serialVersionUID = 1L;

    @Override
    public User mapRow(ResultSet rs, int rowNum) throws SQLException {
      User user = new User(rs.getLong("id"), //
          rs.getString("name"), //
          rs.getString("email"), //
          rs.getDate("createdDate"));
      return user;
    }
  }
}

@RunWith(SpringJUnit4ClassRunner.class)
@WebAppConfiguration
@ContextConfiguration({ "file:src/main/webapp/WEB-INF/spring/appServlet/servlet-context.xml",
    "file:src/test/resources/test-datasource.xml" })
public class UserAccessorIntTest {

  @Autowired
  private WebApplicationContext wac;

  private MockMvc mockMvc;

  @Before
  public void setUp() throws Exception {
    mockMvc = MockMvcBuilders.webAppContextSetup(wac).build();
  }

  @Test
  public void testFindUser_happy_flow() throws Exception {
    ResultActions resultActions = mockMvc.perform(get("/find2").accept(MediaType.ALL).param("user", "Tom"));
    resultActions.andExpect(status().isOk());
    resultActions.andExpect(view().name("user"));
    resultActions.andExpect(model().attributeExists("user"));
    resultActions.andDo(print());
    MvcResult result = resultActions.andReturn();
    ModelAndView modelAndView = result.getModelAndView();
    Map<String, Object> model = modelAndView.getModel();
    User user = (User) model.get("user");
    assertEquals("Tom", user.getName());
    assertEquals("tom@gmail.com", user.getEmail());
  }

  @Test
  public void testFindUser_empty_user() throws Exception {
    ResultActions resultActions = mockMvc.perform(get("/find2").accept(MediaType.ALL).param("user", ""));
    resultActions.andExpect(status().isOk());
    resultActions.andExpect(view().name("user"));
    resultActions.andExpect(model().attributeExists("user"));
    resultActions.andExpect(model().attribute("user", User.NULL_USER));
    resultActions.andDo(print());
  }
}

The solution above cuts the number of first-level classes to two: an implementation class and a test class. All test scenarios are catered for in a very few end-to-end integration tests. These tests will access the database, but is that so bad in this case? If each trip to the DB takes around 20ms or less then they’ll still complete within a fraction of a second; that should be fast enough. In terms of enhancing or maintaining this code, one small single class is easier to learn than several even smaller classes. If you did have to add in a whole bunch of business rules or other complexity, then changing this code into the ‘N’ layer pattern will not be difficult; however, the problem is that if/when a change is necessary it may be given to an inexperienced developer who’ll not be confident enough to carry out the necessary refactoring.
The upshot is, and you must have seen this lots of times, that the new change could be shoehorned on top of this one-class solution, leading to a mess of spaghetti code. In implementing a solution like this, you may not be very popular, because the code is unconventional. That’s one of the reasons that I think this single-class solution is something a lot of people would see as contentious. It’s this idea of a standard ‘right way’ and ‘wrong way’ of writing code, rigorously applied in every case, that has led to this perfectly good design becoming a problem. I guess that it’s all a matter of horses for courses: choosing the right design for the right situation. If I was implementing a complex story, then I wouldn’t hesitate to split up the various responsibilities, but in the simple case it’s just not worth it. I’ll therefore end by asking: if anyone has a better solution for the Simple Story Paradox shown above, please let me know.

[1] I’ve worked on a project once in umpteen years of programming where the underlying database was changed to meet a customer requirement. That was many years and many thousands of miles away, and the code was written in C++ and Visual Basic.

The code for this blog is available on GitHub at https://github.com/roghughe/captaindebug/tree/master/cargo-cult

Reference: The Simple Story Paradox from our JCG partner Roger Hughes at the Captain Debug’s Blog blog....