
What's New Here?


Continuous Deployment: Implementation with Ansible and Docker

This article is part of the Continuous Integration, Delivery and Deployment series. The previous article described several ways to implement Continuous Deployment. Specifically, it described, among other things, how to implement it using Docker to deploy applications as containers, and nginx as the reverse proxy necessary for the blue-green deployment technique. All of that ran on top of CoreOS, an operating system specifically designed for running Docker containers. In this article we'll try to do the same using Ansible (an open-source platform for configuring and managing computers). Instead of CoreOS, we'll be using Ubuntu. The source code used in this article can be found in the GitHub repo vfarcic/provisioning (directory ansible).

Ansible

Ansible is an open-source software platform for configuring and managing computers. It combines multi-node software deployment, ad hoc task execution, and configuration management. It manages nodes over SSH. Modules work over JSON and standard output and can be written in any programming language. The system uses YAML to express reusable descriptions of systems.

The preferred way to work with Ansible is through roles. A role describes a set of tasks that should be run in order to set something up. In our case, we'll have five roles described in bdd.yml. The first four roles (etcd, confd, docker and nginx) make sure that the tools we need for blue-green deployment are present. The docker role, for example, installs Docker using apt-get, together with Python pip and docker-py. We need Docker to manage our containers; docker-py is a Python library required by the Ansible Docker module that we'll use to run the nginx container. As you can see, Ansible is very easy to use and understand. Just by reading the YML files one can easily see what is going on. Simplicity is one of its main advantages over similar tools like Puppet and Chef.
After this very short introduction to Ansible, all we have to do is look for a module that performs the tasks we need (i.e. apt for installation of Debian packages) and describe it as a YML task.

Deployment

Once we have the tools installed, it's time to take a look at the last Ansible role, bdd. This is where deployment happens. However, before we proceed, let me explain the goals I had in mind before I started working on it.

Deployment should follow the blue-green technique. While the application is running, we deploy a new version in parallel with the old one. The container that will be used has already passed all sorts of unit and functional tests, giving us reasonable confidence that each release works correctly. However, we still need to test it after deployment in order to make the final verification that what we deployed is working correctly. Once all post-deployment tests pass, we are ready to make the new release available to the public. We can do that by changing our nginx reverse proxy to redirect all requests to the newly deployed release. In other words, we should do the following:

1. Pull the latest version of the application container
2. Run the latest application version without stopping the existing one
3. Run post-deployment tests
4. Notify etcd about the new release (port, name, etc.)
5. Change the nginx configuration to point to the new release
6. Stop the old release

If we do all of the above, we should accomplish zero-downtime. At any given moment our application should be available. On top of the procedure described above, deployment should work both with and without Ansible. While using it helps a lot, all essential elements should be located on the server itself. That means that scripts and configurations should be on the machine we're deploying to, and not somewhere on a central server. The role bdd is as follows. The first task makes sure that the template resource bdd.toml is present.
It is used by confd to specify what the template is, what the destination is, and what command should be executed (in our case, a restart of the nginx container). The second task makes sure that the confd template bdd.conf.tmpl is present. This template, together with bdd.toml, will change the nginx proxy to point to a new release every time we deploy one. That way we'll have no interruption to our service. The third task makes sure that the deployment script is present, and the last one that it is run. From the Ansible point of view, that's all there is. The real "magic" is in the deployment script itself. Let's go through it.

We start by discovering whether we should do a blue or a green deployment. If the current one is blue we'll deploy green, and the other way around. Information about the currently deployed "color" is stored in the etcd key /bdd/color.

    BLUE_PORT=9001
    GREEN_PORT=9002
    CURRENT_COLOR=$(etcdctl get /bdd/color)
    if [ "$CURRENT_COLOR" = "" ]; then
      CURRENT_COLOR="green"
    fi
    if [ "$CURRENT_COLOR" = "blue" ]; then
      PORT=$GREEN_PORT
      COLOR="green"
    else
      PORT=$BLUE_PORT
      COLOR="blue"
    fi

Once the decision is made, we stop and remove the existing containers, if there are any. Keep in mind that the current release will continue operating and won't be affected until the very end.

    docker stop bdd-$COLOR
    docker rm bdd-$COLOR

Now we can start the container with the new release and run it in parallel with the existing one. In this particular case, we're deploying the BDD Assistant container vfarcic/bdd.

    docker pull vfarcic/bdd
    docker run -d --name bdd-$COLOR -p $PORT:9000 vfarcic/bdd

Once the new release is up and running, we can run the final set of tests. This assumes that all tests that do not require deployment have already been executed. In the case of BDD Assistant, unit tests (Scala and JavaScript) and functional tests (BDD scenarios) are run as part of the container build process described in the Dockerfile.
In other words, the container is pushed to the repository only if all tests run as part of the build process pass. However, tests run before deployment are usually not enough. We should verify that the deployed application is working as expected. At this stage we usually run integration and stress tests. The tests themselves are also a container, which is run and automatically removed (argument --rm) once it finishes executing. An important thing to notice is that localhost on the host is, by default, reached through the Docker bridge address from within a container. In this particular case, a set of BDD scenarios is run using the PhantomJS headless browser.

    docker pull vfarcic/bdd-runner-phantomjs
    docker run -t --rm --name bdd-runner-phantomjs vfarcic/bdd-runner-phantomjs \
        --story_path data/stories/tcbdd/stories/storyEditorForm.story \
        --composites_path /opt/bdd/composites/TcBddComposites.groovy \
        -P url=$PORT -P widthHeight=1024,768

If all tests pass, we should store information about the new release in etcd and run confd, which will update our nginx configuration. Until this moment, nginx was redirecting all requests to the old release. If nothing has failed in the process so far, only from this point on will users of the application be redirected to the version we just deployed.

    etcdctl set /bdd/color $COLOR
    etcdctl set /bdd/port $PORT
    etcdctl set /bdd/$COLOR/port $PORT
    etcdctl set /bdd/$COLOR/status running
    confd -onetime -backend etcd -node

Finally, since we have the new release deployed, tested and made available to the general public through the nginx reverse proxy, we're ready to stop and remove the old version.

    docker stop bdd-$CURRENT_COLOR
    etcdctl set /bdd/$CURRENT_COLOR/status stopped

The source code of the script can be found in the GitHub repo vfarcic/provisioning.

Running it all together

Let's see it in action. I prepared a Vagrantfile that will create an Ubuntu virtual machine and run the Ansible playbook that will install and configure everything and, finally, deploy the application.
Assuming that Git, Vagrant and VirtualBox are installed, run the following:

    git clone
    cd provisioning/ansible
    vagrant up

The first run might take a while, since Vagrant and Ansible will need to download a lot of stuff (OS, packages, containers…). Please be patient, especially on slower bandwidth. The good news is that each consecutive run will be much faster. To simulate deployment of a new version, run the following:

    vagrant provision

If you SSH into the VM, you can see that the running version changes from blue (port 9001) to green (port 9002) and the other way around each time we run vagrant provision.

    vagrant ssh
    sudo docker ps

Before, during and after deployment, the application will be available without any interruption (zero-downtime). You can check it out by opening http://localhost:8000/ in your favorite browser.

Summary

As you could see, deployment is greatly simplified with Docker and containers. While, in a more traditional setting, Ansible would need to install a bunch of stuff (JDK, web server, etc.) and make sure that an ever-increasing number of configuration files are properly set, with containers the major role of Ansible is to make sure that the OS is configured, that Docker is installed, and that a few other things are properly set. In other words, Ansible continues being useful, while an important part of its work is greatly simplified by containers and the concept of immutable deployments (what we deploy is unchangeable). We're not updating our applications with new versions. Instead, we're deploying completely new containers and removing the old ones. All the code used in this article can be found in the directory ansible inside the GitHub repo vfarcic/provisioning.

The next articles will explore how to do the same procedure with other provisioning tools. We'll explore Chef, Puppet and Salt. Finally, we'll try to compare all four of them.
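As a recap of the deployment script's core step, the blue/green toggle is what makes zero-downtime possible: whichever color is live, the next deployment goes to the other one. Purely as an illustration (not part of the original setup), the same decision logic can be sketched in Java:

```java
public class BlueGreen {

    static final int BLUE_PORT = 9001;
    static final int GREEN_PORT = 9002;

    // Given the currently deployed color (possibly empty on a first run),
    // decide the color and port of the next deployment, mirroring the
    // shell script: an empty value is treated as "green", so blue:9001
    // is deployed first.
    static String[] next(String currentColor) {
        if (currentColor == null || currentColor.isEmpty()) {
            currentColor = "green";
        }
        if (currentColor.equals("blue")) {
            return new String[] { "green", String.valueOf(GREEN_PORT) };
        }
        return new String[] { "blue", String.valueOf(BLUE_PORT) };
    }

    public static void main(String[] args) {
        System.out.println(String.join(":", next("")));     // blue:9001
        System.out.println(String.join(":", next("blue"))); // green:9002
    }
}
```

Each `vagrant provision` run effectively calls this toggle once, which is why the running version alternates between ports 9001 and 9002.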
Stay tuned!

Reference: Continuous Deployment: Implementation with Ansible and Docker from our JCG partner Viktor Farcic at the Technology Conversations blog.

A persistent KeyValue Server in 40 lines and a sad fact

Advent time again… Picking up Peter's well written overview on the uses of Unsafe, I'll have a short fly-by on how low-level techniques in Java can save development effort by enabling a higher level of abstraction, or allow for Java performance levels probably unknown to many.

My major point is to show that conversion of objects to bytes and vice versa is an important fundamental, affecting virtually any modern Java application. Hardware enjoys processing streams of bytes, not object graphs connected by pointers, as "All memory is tape" (M. Thompson, if I remember correctly…). Many basic technologies are therefore hard to use with vanilla Java heap objects:

- Memory mapped files – a great and simple technology to persist application data safely, fast and easily
- Network communication is based on sending packets of bytes
- Interprocess communication (shared memory)
- Large main memory of today's servers (64GB to 256GB) (GC issues)
- CPU caches work best on data stored as a continuous stream of bytes in memory

So use of the Unsafe class in most cases boils down to helping transform a Java object graph into a continuous memory region and vice versa, either using [performance enhanced] object serialization or wrapper classes to ease access to data stored in a continuous memory region. (Code & examples of this post can be found here.)

Serialization based Off-Heap

Consider a retail web application where there might be millions of registered users. We are actually not interested in representing data in a relational database, as all that's needed is a quick retrieval of user-related data once a user logs in. Additionally, one would like to traverse the social graph quickly.

Let's take a simple user class holding some attributes and a list of 'friends' making up a social graph. The easiest way to store this on heap is a simple huge HashMap. Alternatively, one can use off-heap maps to store large amounts of data.
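A minimal sketch of such a user class might look like this (the field names are illustrative assumptions, not the original code):

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

// Hypothetical user record: a few attributes plus a list of friend ids
// forming the social graph. Serializable so it can be converted to bytes.
public class User implements Serializable {

    public long id;
    public String name;
    public String email;
    public List<Long> friends = new ArrayList<>();

    public User(long id, String name, String email) {
        this.id = id;
        this.name = name;
        this.email = email;
    }
}
```

Storing this on heap is then just `Map<Long, User> users = new HashMap<>();` with millions of entries — which is exactly what puts pressure on the garbage collector.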
An off-heap map stores its keys and values inside the native heap, so garbage collection does not need to track this memory. In addition, the native heap can be told to automagically get synchronized to disk (memory mapped files). This even works in case your application crashes, as the OS manages the write-back of changed memory regions.

There are some open source off-heap map implementations out there with various feature sets (e.g. ChronicleMap). For this example I'll use a plain and simple implementation featuring fast iteration (optional full scan search) and ease of use. Serialization is used to store objects; deserialization is used in order to pull them to the Java heap again. Pleasantly, I have written the (afaik) fastest fully JDK compliant object serialization on the planet, so I'll make use of that.

Done:

- Persistence by memory mapping a file (the map will reload upon creation)
- Java heap still empty to serve real application processing with Full GC < 100ms
- Significantly less overall memory consumption. A serialized user record is ~60 bytes, so in theory 300 million records fit into 180GB of server memory. No need to raise the big data flag and run 4096 Hadoop nodes on AWS.

Comparing a regular in-memory Java HashMap and a fast-serialization based persistent off-heap map holding 15 million user records shows the following results (on a 3GHz older XEON 2×6):

                                  consumed Java Heap (MB)  Full GC (s)  Native Heap (MB)  get/put ops per s  required VM size (MB)
HashMap                           6,865.00                 26.039       0                 3,800,000.00       12,000.00
OffheapMap (Serialization based)  63.00                    0.026        3,050             750,000.00         500.00

[test source / blog project] Note: You'll need at least 16GB of RAM to execute them.

As one can see, even with fast serialization there is a heavy penalty (~factor 5) in access performance. Anyway: compared to other persistence alternatives, it's still superior (1-3 microseconds per "get" operation, "put()" very similar).
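The put/get pattern measured above — serialize on write, deserialize on read — can be sketched in a few lines. This is an illustrative stand-in only: it uses JDK serialization (which the article shows is far slower than fast-serialization) and an ordinary HashMap in place of native memory, but the shape of the API is the same:

```java
import java.io.*;
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for a serialization-based off-heap map: values are
// stored as byte[] (the form they would take in native memory or a memory
// mapped file), not as live Java object graphs. A real implementation would
// place these bytes outside the Java heap and use a faster serializer.
public class SerializedMap<K, V extends Serializable> {

    private final Map<K, byte[]> store = new HashMap<>();

    // put: encode the object graph into a byte sequence
    public void put(K key, V value) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(value);
            }
            store.put(key, bos.toByteArray());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // get: decode the byte sequence back into a heap object
    @SuppressWarnings("unchecked")
    public V get(K key) {
        byte[] bytes = store.get(key);
        if (bytes == null) return null;
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (V) ois.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Because values live as flat byte sequences, the GC only sees the map's key index, not millions of value object graphs — which is what keeps Full GC pauses small in the off-heap column of the table above.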
Use of JDK serialization would perform at least 5 to 10 times slower (direct comparison below) and therefore render this approach useless.

Trading performance gains against a higher level of abstraction: "Serverize me"

A single server won't be able to serve (hundreds of) thousands of users, so we somehow need to share data amongst processes, even better: across machines. Using a fast implementation, it's possible to generously use (fast-) serialization for over-the-network messaging. Again: if this ran like 5 to 10 times slower, it just wouldn't be viable. Alternative approaches require an order of magnitude more work to achieve similar results.

By wrapping the persistent off-heap hash map in an Actor implementation (async ftw!), a few lines of code make up a persistent KeyValue server with a TCP-based and an HTTP interface (uses kontraktor actors). Of course, the Actor can still be used in-process if one decides so later on. Now that's a micro service. Given it lacks any attempt at optimization and is single threaded, it's reasonably fast [same XEON machine as above]:

- 280_000 successful remote lookups per second
- 800_000 in case of failed lookups (key not found)
- serialization based TCP interface (1 liner)
- a stringy webservice for the REST-of-us (1 liner)

[source: KVServer, KVClient] Note: You'll need at least 16GB of RAM to execute the test.

A real world implementation might want to double performance by directly putting the received serialized object byte[] into the map instead of encoding it twice (encode/decode once for transmission over the wire, then decode/encode for the off-heap map). "RestActorServer.Publish(..);" is a one liner to also expose the KVActor as a webservice in addition to raw TCP.

C like performance using flyweight wrappers / structs

With serialization, regular Java objects are transformed into a byte sequence.
One can do the opposite: create wrapper classes which read data from fixed or computed positions of an underlying byte array or native memory address (e.g. see this blog post). By moving the base pointer it's possible to access different records by just moving the wrapper's offset. Copying such a "packed object" boils down to a memory copy. In addition, it's pretty easy to write allocation-free code this way. One downside is that reading/writing single fields has a performance penalty compared to regular Java objects. This can be made up for by using the Unsafe class.

"Flyweight" wrapper classes can be implemented manually as shown in the blog post cited, however as code grows this starts getting unmaintainable. Fast-serialization provides a byproduct, "struct emulation", supporting creation of flyweight wrapper classes from regular Java classes at runtime. Low level byte fiddling in application code can be avoided for the most part this way.

How a regular Java class can be mapped to flat memory (fst-structs):

Of course there are simpler tools out there to help reduce manual programming of encoding (e.g. Slab) which might be more appropriate for many cases and use less "magic".

What kind of performance can be expected using the different approaches (sad fact incoming)? Let's take the following struct-class consisting of a price update and an embedded struct denoting a tradable instrument (e.g.
stock) and encode it using various methods: a 'struct' in code.

Pure encoding performance:

Structs        fast-Ser (no shared refs)  fast-Ser      JDK Ser (no shared)  JDK Ser
26,315,000.00  7,757,000.00               5,102,000.00  649,000.00           644,000.00

Real world test with messaging throughput: In order to get a basic estimation of differences in a real application, I did an experiment on how different encodings perform when used to send and receive messages at a high rate via reliable UDP messaging. The test: a sender encodes messages as fast as possible and publishes them using reliable multicast; a subscriber receives and decodes them.

Structs       fast-Ser (no shared refs)  fast-Ser      JDK Ser (no shared)  JDK Ser
6,644,107.00  4,385,118.00               3,615,584.00  81,582.00            79,073.00

(Tests done on i7/Win8; XEON/Linux scores slightly higher; msg size ~70 bytes for structs, ~60 bytes for serialization.) Slowest compared to fastest: a factor of 82.

The test highlights an issue not covered by micro-benchmarking: encoding and decoding should perform similarly, as factual throughput is determined by min(encoding performance, decoding performance). For unknown reasons, JDK serialization manages to encode the message tested like 500_000 times per second, but decoding performance is only 80_000 per second, so in the test the receiver gets dropped quickly:

    " … ***** Stats for receive rate: 80351 per second *********
    ***** Stats for receive rate: 78769 per second *********
    SUB-ud4q has been dropped by PUB-9afs on service 1 fatal, could not keep up. exiting "

(Creating backpressure here probably isn't the right way to address the issue!)

Conclusion

Fast serialization allows for a level of abstraction in distributed applications that is impossible if the serialization implementation is either:

- too slow
- incomplete, e.g. cannot handle any serializable object graph
- requires manual coding/adaptions
(This would put many restrictions on actor message types, Futures, Spores; a maintenance nightmare.)

Low level utilities like Unsafe enable different representations of data, resulting in extraordinary throughput or guaranteed latency boundaries (allocation-free main path) for particular workloads. These are impossible to achieve by a large margin with the JDK's public tool set. In distributed systems, communication performance is of fundamental importance. Removing Unsafe is not the biggest fish to fry looking at the numbers above… JSON or XML won't fix this. While the HotSpot VM has reached an extraordinary level of performance and reliability, CPU is wasted in some parts of the JDK like there's no tomorrow. Given we are living in the age of distributed applications and data, moving stuff over the wire should be easy to achieve (not manually coded) and as fast as possible.

Addendum: bounded latency

A quick ping-pong RTT latency benchmark shows that Java can compete with C solutions easily, as long as the main path is allocation free and techniques like those described above are employed. [Credits: charts + measurement done with HdrHistogram.] This is an "experiment" rather than a benchmark (so do not read: 'Proven: Java faster than C'); it shows that low-level Java can compete with C in at least this low-level domain. Of course it's not exactly idiomatic Java code, however it's still easier to handle, port and maintain compared to a JNI or pure C(++) solution. Low latency C(++) code won't be that idiomatic either!

Reference: A persistent KeyValue Server in 40 lines and a sad fact from our JCG partner Rüdiger Möller at the Java Advent Calendar blog.

The Awesome PostgreSQL 9.4 / SQL:2003 FILTER Clause for Aggregate Functions

Sometimes when aggregating data with SQL, we'd love to add some additional filters. For instance, consider the following World Bank data:

GDP per capita (current US$)

        2009     2010     2011     2012
CA    40,764   47,465   51,791   52,409
DE    40,270   40,408   44,355   42,598
FR    40,488   39,448   42,578   39,759
GB    35,455   36,573   38,927   38,649
IT    35,724   34,673   36,988   33,814
JP    39,473   43,118   46,204   46,548
RU     8,616   10,710   13,324   14,091
US    46,999   48,358   49,855   51,755

And the table structure:

CREATE TABLE countries (
  code           CHAR(2)        NOT NULL,
  year           INT            NOT NULL,
  gdp_per_capita DECIMAL(10, 2) NOT NULL
);

Now, let's assume we'd like to find the number of countries with a GDP per capita higher than 40,000 for each year. With standard SQL:2003, and now also with the newly released PostgreSQL 9.4, we can take advantage of the new FILTER clause, which allows us to write the following query:

SELECT
  year,
  count(*) FILTER (WHERE gdp_per_capita >= 40000)
FROM countries
GROUP BY year

The above query will now yield:

year   count
------------
2012       4
2011       5
2010       4
2009       4

And that's not it! As always, you can use any aggregate function also as a window function simply by adding an OVER() clause at the end:

SELECT
  year,
  code,
  gdp_per_capita,
  count(*) FILTER (WHERE gdp_per_capita >= 40000)
           OVER   (PARTITION BY year)
FROM countries

The result would then look something like this:

year   code   gdp_per_capita   count
------------------------------------
2009   CA     40764.00             4
2009   DE     40270.00             4
2009   FR     40488.00             4
2009   GB     35455.00             4

jOOQ 3.6 will also support the new FILTER clause for aggregate functions

Good news for jOOQ users.
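Before looking at the jOOQ version, it may help to see the FILTER semantics mirrored in plain Java. This is only an illustration over hypothetical in-memory data shaped like the table above; the filtered count corresponds to a filtering downstream collector per group:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class FilterClauseDemo {

    record Country(String code, int year, double gdpPerCapita) {}

    // Equivalent of: SELECT year, count(*) FILTER (WHERE gdp_per_capita >= 40000)
    //                FROM countries GROUP BY year
    static Map<Integer, Long> richCountriesPerYear(List<Country> countries) {
        return countries.stream()
            .collect(Collectors.groupingBy(
                Country::year,
                // the FILTER clause maps to a filtering downstream count
                Collectors.filtering(c -> c.gdpPerCapita() >= 40000,
                                     Collectors.counting())));
    }

    public static void main(String[] args) {
        List<Country> data = List.of(
            new Country("CA", 2012, 52409), new Country("DE", 2012, 42598),
            new Country("RU", 2012, 14091), new Country("US", 2012, 51755));
        System.out.println(richCountriesPerYear(data)); // {2012=3}
    }
}
```

Note that the grouping still happens over all rows; only the counting is restricted by the predicate, which is exactly how FILTER differs from a WHERE clause.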
You can write the same query with jOOQ intuitively as such:

DSL.using(configuration)
   .select(
       COUNTRIES.YEAR,
       count().filterWhere(
           COUNTRIES.GDP_PER_CAPITA.ge(new BigDecimal("40000"))))
   .from(COUNTRIES)
   .groupBy(COUNTRIES.YEAR)
   .fetch();

… and

DSL.using(configuration)
   .select(
       COUNTRIES.YEAR,
       COUNTRIES.CODE,
       COUNTRIES.GDP_PER_CAPITA,
       count().filterWhere(
                  COUNTRIES.GDP_PER_CAPITA.ge(new BigDecimal("40000")))
              .over(partitionBy(COUNTRIES.YEAR)))
   .from(COUNTRIES)
   .fetch();

And the best thing is that jOOQ (as usual) emulates the above clause for you if you're not using PostgreSQL. The equivalent query would be:

SELECT
  year,
  count(CASE WHEN gdp_per_capita >= 40000 THEN 1 END)
FROM countries
GROUP BY year

Read more about what's new in PostgreSQL 9.4 here.

Reference: The Awesome PostgreSQL 9.4 / SQL:2003 FILTER Clause for Aggregate Functions from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.

Microservices and DevOps with TIBCO Products

Everybody is talking about Microservices these days. You can read a lot about Microservices in hundreds of articles and blog posts. A good starting point is Martin Fowler's article, which initiated the huge discussion about this new architecture concept. Another great resource is a free on-demand webinar by vendor-independent analyst Gartner: "Time to Get Off the Enterprise Service Bus". It does not even mention the term "Microservices", but explains its basic motivation and concepts.

Definition of the Term "Microservices"

Here is my short definition of the term "Microservices" and how it differs from a "classical" Service-oriented Architecture (SOA):

- No commitment to a unique technology
- Greater flexibility of architecture
- Services managed as products, with their own lifecycle
- Industrialized deployment

That is the beginning of the Microservices era: services implementing a limited set of functions, developed, deployed and scaled independently. This way you get shorter time to results and increased flexibility.

Microservices and TIBCO

The funny thing is that several TIBCO customers have already been implementing Microservices for years. This blog post explains how you can use TIBCO products to create Microservices. The key products are TIBCO ActiveMatrix BusinessWorks for creating Microservices, TIBCO Enterprise Administrator (TEA) for administration and monitoring, TIBCO Silver Fabric for Continuous Integration and Continuous Delivery (DevOps), and TIBCO API Exchange as service gateway and self-service portal. The following shows the basic idea of how to create Microservices with TIBCO in combination with any other technology (e.g. Java, Python, Scala), product (e.g. Oracle, SAP, Salesforce), build tool (e.g. Chef, Puppet, Docker) or infrastructure (e.g. Amazon cloud, VMware, OpenStack).

Side note: Be aware that the product is only one part of the story. Organizational changes are required, too.
Adrian Cockcroft (former architect at Netflix) did a great talk about organizational changes: "State of the Art in Microservices". Now, let's take a look at the products which help you build, deploy, run and monitor Microservices in a fast and flexible way.

TIBCO ActiveMatrix BusinessWorks for Creating a Microservice

TIBCO ActiveMatrix BusinessWorks is an enterprise integration and service delivery platform. Build your own Microservices using your choice of technology (e.g. Java, scripting, a BusinessWorks process, or anything else) or expose an existing implementation as a Microservice. BusinessWorks is the best choice if you need to implement complex integration scenarios including orchestration, routing or B2B integration (e.g. SAP or Salesforce). The exposition of a Microservice is usually done with REST or SOAP standard interfaces. JMS might be used in an event-enabled environment. You can also use BusinessWorks to assemble your logic from several Microservices into composites, or extend your existing (Micro)Services to mobile applications.

TIBCO API Exchange for Exposing your Microservice via APIs

TIBCO API Exchange is used to expose Microservices via REST, SOAP or JMS, including policy-based API management features such as security, throttling, routing and caching. Besides, a portal is available for easy self-service consumption of Microservices. In the context of Microservices, API Exchange is used to enforce consumption contracts, ensure Y-scaling and reliability of Microservices, and to reuse Microservices in multiple contexts without change. "A New Front for SOA – Open API and API Management" explains the term "Open API" in more detail and gives a technical overview of the components of an API Management solution: Gateway, Portal and Analytics.

TIBCO Silver Fabric for Continuous Integration and Continuous Delivery (DevOps)

Automation is key for agile, flexible and productive Microservices development.
Without continuous integration / continuous delivery (DevOps), you cannot realize the Microservices concept efficiently. TIBCO Silver Fabric is used to continuously deploy, configure and manage your applications and middleware, on premise or in the cloud. It offers end-to-end scripting, automation and visibility via dashboards, monitoring of the quality of deployed applications, port management and elastic load balancing. TIBCO Silver Fabric offers several out-of-the-box features to run a project in a DevOps style. Besides, it supports tools such as Chef, Puppet and Docker. You can deploy Microservices everywhere, including private data centers, virtual machines and cloud environments – supporting environments such as Amazon Web Services, VMware or OpenStack. Important to understand is that every Microservice is built and deployed independently of the others.

TIBCO Enterprise Administrator (TEA) for Unified Administration

Unified administration and monitoring are another key success factor for Microservices – no matter which technologies are used to implement the different Microservices. TIBCO Enterprise Administrator (TEA) is a unified graphical user interface (plus shell and scripting API) for administration, monitoring, governance, diagnostics and analytics of most TIBCO products, such as BusinessWorks, EMS, Silver Fabric, Hawk or PolicyDirector. TEA can also be used for other non-TIBCO technologies and products, such as Apache Tomcat, out-of-the-box. If something is not supported yet, you can use TEA's API to integrate it quickly. BusinessWorks 6 and TEA are very open products, encouraging the TIBCO community to develop additional features.

TIBCO Complex Event Processing and Streaming Analytics for Visibility across Microservices

Finally, after deploying and running your Microservices in production, you can use tools such as TIBCO StreamBase CEP to combine events, context and big data insights for instant awareness and reaction.
Correlation of different events is the real power – ask people from Google, Amazon or Facebook about this topic… As this is a little bit off-topic, I just forward you to an article which explains Event Processing and Streaming Analytics in more detail and discusses several real world use cases: Real-Time Stream Processing as Game Changer in a Big Data World with Hadoop and Data Warehouse.

TIBCO and Microservices are Friends and Profiteers, not Enemies!

As you can see, TIBCO products are ready for creating, deploying, running and monitoring Microservices. Products such as ActiveMatrix BusinessWorks, API Exchange and Silver Fabric are designed for the Microservices era. Actually, several TIBCO customers have been using this approach for years, though this concept did not have a specific name other than SOA in the past. So, is Microservices a new name for SOA, or is it something new? Who knows… No matter what, you should start to think about using the Microservices approach, too!

Reference: Microservices and DevOps with TIBCO Products from our JCG partner Kai Waehner at the Blog about Java EE / SOA / Cloud Computing blog.

Leaky Abstractions, or How to Bind Oracle DATE Correctly with Hibernate

We've recently published an article about how to bind the Oracle DATE type correctly in SQL / JDBC, and jOOQ. This article got a bit of traction on reddit with an interesting remark by Vlad Mihalcea, who frequently blogs about Hibernate, JPA, transaction management and connection pooling on his blog. Vlad pointed out that this problem can also be solved with Hibernate, and we're going to look into this shortly.

What is the problem with Oracle DATE?

The problem presented in the previous article concerns the fact that if a query uses filters on Oracle DATE columns:

// execute_at is of type DATE and there's an index
PreparedStatement stmt = connection.prepareStatement(
    "SELECT * " +
    "FROM rentals " +
    "WHERE rental_date > ? AND rental_date < ?");

… and we're using java.sql.Timestamp for our bind values:

stmt.setTimestamp(1, start);
stmt.setTimestamp(2, end);

… then the execution plan will turn very bad with a FULL TABLE SCAN or perhaps an INDEX FULL SCAN, even though we should have gotten a regular INDEX RANGE SCAN.

-------------------------------------
| Id  | Operation          | Name   |
-------------------------------------
|   0 | SELECT STATEMENT   |        |
|*  1 |  FILTER            |        |
|*  2 |   TABLE ACCESS FULL| RENTAL |
-------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(:1<=:2)
   2 - filter((INTERNAL_FUNCTION("RENTAL_DATE")>=:1 AND
               INTERNAL_FUNCTION("RENTAL_DATE")<=:2))

This is because the database column is widened from Oracle DATE to Oracle TIMESTAMP via this INTERNAL_FUNCTION(), rather than truncating the java.sql.Timestamp value to Oracle DATE. More details about the problem itself can be seen in the previous article.

Preventing this INTERNAL_FUNCTION() with Hibernate

You can fix this with Hibernate's proprietary API, using an org.hibernate.usertype.UserType.
Assuming that we have the following entity:

@Entity
public class Rental {

    @Id
    @Column(name = "rental_id")
    public Long rentalId;

    @Column(name = "rental_date")
    public Timestamp rentalDate;
}

And now, let’s run this query here (I’m using the Hibernate API, not JPA, for the example):

List<Rental> rentals =
session.createQuery("from Rental r where r.rentalDate between :from and :to")
       .setParameter("from", Timestamp.valueOf("2000-01-01 00:00:00.0"))
       .setParameter("to", Timestamp.valueOf("2000-10-01 00:00:00.0"))
       .list();

The execution plan that we’re now getting is again inefficient:

-------------------------------------
| Id  | Operation          | Name   |
-------------------------------------
|   0 | SELECT STATEMENT   |        |
|*  1 |  FILTER            |        |
|*  2 |   TABLE ACCESS FULL| RENTAL |
-------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(:1<=:2)
   2 - filter((INTERNAL_FUNCTION("RENTAL0_"."RENTAL_DATE")>=:1 AND
               INTERNAL_FUNCTION("RENTAL0_"."RENTAL_DATE")<=:2))

The solution is to add this @Type annotation to all relevant columns…

@Entity
@TypeDefs(
    value = @TypeDef(
        name = "oracle_date",
        typeClass = OracleDate.class
    )
)
public class Rental {

    @Id
    @Column(name = "rental_id")
    public Long rentalId;

    @Column(name = "rental_date")
    @Type(type = "oracle_date")
    public Timestamp rentalDate;
}

… and register the following, simplified UserType:

import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.sql.Types;
import java.util.Objects;

import oracle.sql.DATE;

import org.hibernate.engine.spi.SessionImplementor;
import org.hibernate.usertype.UserType;

public class OracleDate implements UserType {

    @Override
    public int[] sqlTypes() {
        return new int[] { Types.TIMESTAMP };
    }

    @Override
    public Class<?> returnedClass() {
        return Timestamp.class;
    }

    @Override
    public Object nullSafeGet(
        ResultSet rs,
        String[] names,
        SessionImplementor session,
        Object owner
    )
    throws SQLException {
        return rs.getTimestamp(names[0]);
    }

    @Override
    public void nullSafeSet(
        PreparedStatement st,
        Object value,
        int index,
        SessionImplementor session
    ) throws SQLException {
        // The magic is here: oracle.sql.DATE!
        st.setObject(index, new DATE(value));
    }

    // The other method implementations are omitted
}

This will work because using the vendor-specific oracle.sql.DATE type will have the same effect on your execution plan as explicitly casting the bind variable in your SQL statement, as shown in the previous article: CAST(? AS DATE). The execution plan is now the desired one:

------------------------------------------------------
| Id  | Operation                    | Name          |
------------------------------------------------------
|   0 | SELECT STATEMENT             |               |
|*  1 |  FILTER                      |               |
|   2 |   TABLE ACCESS BY INDEX ROWID| RENTAL        |
|*  3 |    INDEX RANGE SCAN          | IDX_RENTAL_UQ |
------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(:1<=:2)
   3 - access("RENTAL0_"."RENTAL_DATE">=:1 AND
              "RENTAL0_"."RENTAL_DATE"<=:2)

If you want to reproduce this issue, just query any Oracle DATE column with a java.sql.Timestamp bind value through JPA / Hibernate, and get the execution plan as indicated here. Don’t forget to flush shared pools and buffer caches to enforce the calculation of new plans between executions, because the generated SQL is the same each time.

Can I do it with JPA 2.1?

At first sight, it looks like the new converter feature in JPA 2.1 (which works just like jOOQ’s converter feature) should be able to do the trick. We should be able to write:

import java.sql.Timestamp;

import javax.persistence.AttributeConverter;
import javax.persistence.Converter;

import oracle.sql.DATE;

@Converter
public class OracleDateConverter
implements AttributeConverter<Timestamp, DATE> {

    @Override
    public DATE convertToDatabaseColumn(Timestamp attribute) {
        return attribute == null
            ?
            null
            : new DATE(attribute);
    }

    @Override
    public Timestamp convertToEntityAttribute(DATE dbData) {
        return dbData == null
            ? null
            : dbData.timestampValue();
    }
}

This converter can then be used with our entity:

import java.sql.Timestamp;

import javax.persistence.Column;
import javax.persistence.Convert;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Rental {

    @Id
    @Column(name = "rental_id")
    public Long rentalId;

    @Column(name = "rental_date")
    @Convert(converter = OracleDateConverter.class)
    public Timestamp rentalDate;
}

But unfortunately, this doesn’t work out of the box, as Hibernate 4.3.7 will think that you’re about to bind a variable of type VARBINARY:

// From org.hibernate.type.descriptor.sql.SqlTypeDescriptorRegistry

public <X> ValueBinder<X> getBinder(JavaTypeDescriptor<X> javaTypeDescriptor) {
    if (Serializable.class.isAssignableFrom(javaTypeDescriptor.getJavaTypeClass())) {
        return VarbinaryTypeDescriptor.INSTANCE.getBinder(javaTypeDescriptor);
    }

    return new BasicBinder<X>(javaTypeDescriptor, this) {
        @Override
        protected void doBind(PreparedStatement st, X value, int index, WrapperOptions options)
        throws SQLException {
            st.setObject(index, value, jdbcTypeCode);
        }
    };
}

Of course, we can probably somehow tweak this SqlTypeDescriptorRegistry to create our own “binder”, but then we’re back to Hibernate-specific API. This particular implementation is probably a “bug” on the Hibernate side, which has been registered here, for the record.

Conclusion

Abstractions are leaky on all levels, even if they are deemed a “standard” by the JCP. Standards are often a means of justifying an industry de-facto standard in hindsight (with some politics involved, of course). Let’s not forget that Hibernate didn’t start as a standard and massively revolutionised the way the standard-ish J2EE folks tended to think about persistence, 14 years ago.
In this case we have:

- Oracle SQL, the actual implementation
- The SQL standard, which specifies DATE quite differently from Oracle
- ojdbc, which extends JDBC to allow for accessing Oracle features
- JDBC, which follows the SQL standard with respect to temporal types
- Hibernate, which offers proprietary API in order to access Oracle SQL and ojdbc features when binding variables
- JPA, which again follows the SQL standard and JDBC with respect to temporal types
- Your entity model

As you can see, the actual implementation (Oracle SQL) leaked up right into your own entity model, either via Hibernate’s UserType, or via JPA’s Converter. From then on, it will hopefully be shielded off from your application (until it won’t), allowing you to forget about this nasty little Oracle SQL detail. Any way you turn it, if you want to solve real customer problems (i.e. the significant performance issue at hand), then you will need to resort to vendor-specific API from Oracle SQL, ojdbc, and Hibernate – instead of pretending that the SQL, JDBC, and JPA standards are the bottom line. But that’s probably alright. For most projects, the resulting implementation lock-in is totally acceptable.

Reference: Leaky Abstractions, or How to Bind Oracle DATE Correctly with Hibernate from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.

Tuple and entry destructuring

The next release of Ceylon features an interesting range of new language features, including constructors, if and switch expressions, let and object expressions, and destructuring of tuples and entries. In this post, I’m going to describe our new syntax for destructuring.

A destructuring statement looks a lot like a normal value declaration, except that where we would expect to see the value name, a pattern occurs instead. An entry pattern is indicated using the skinny arrow -> we use to construct entries:

String->Integer entry = "one"->1;
value key->item = entry; //destructure the Entry

A tuple pattern is indicated with brackets:

[String,Integer] pair = ["one",1];
value [first,second] = pair; //destructure the Tuple

The pattern variables, key, item, first, and second, are just regular local values. We can nest tuple and entry patterns:

String->[String,Integer] entry = "one"->["one",1];
value key->[first,second] = entry;

A tuple pattern may have a tail variable, indicated with a *:

[Integer+] ints = 1..100;
value [first,*rest] = ints; //destructure the Sequence

(This syntax resembles the spread operator.) Patterns may optionally indicate an explicit type:

value String key->[String first, Integer second] = entry;

Pattern-based destructuring can occur in a number of other places in the language. A pattern can occur in a for loop:

for (key->item in map) { ... }

Or in an exists or nonempty condition:

if (exists index->item = stream.indexed.first) { ... }

if (nonempty [first,*rest] = sequence) { ... }

Or in a let expression:

value dist = let ([x,y] = point) sqrt(x^2+y^2);

You might wonder why we decided to introduce this syntax, or at least, why we decided to do it now. Well, I suppose the simple answer is that it always felt a bit incomplete or unfinished to have a language with tuples but no convenient destructuring syntax for them.
Especially since we already had destructuring for entries, but only in for, as a special case. But looking into the future, you could also choose to see this as us dipping our toes in the water of eventual support for pattern matching. I remain ambivalent about pattern matching, and it’s certainly not something we find the language is missing or needs, but lots of folks tell us they like it in other languages, so we’re keeping our options open. Fortunately, the syntax described above will scale nicely to more complex patterns in a full pattern matching facility. This functionality is already implemented and available on GitHub.

Reference: Tuple and entry destructuring from our JCG partner Gavin King at the Ceylon Team blog.

Top 10 JavaCodeGeeks posts for 2014

2014 is coming to its end and all I have to say is “Wow!”. This has been an absolutely massive year for Java Code Geeks. I am proud and moved to see how much our platform and community have advanced in only a year’s time. I am also proud to see that we have cracked the limit of 1 million visitors per month. This is insane just to contemplate. Thank you for your support in achieving this great milestone! During this year we have delivered an enormous amount of articles and tutorials, both on our main site and our Examples section. Major contributors were our JCG Partners. Our JCG Program partners list has grown immensely, now topping over 500 participants. A great part of the material were Ultimate Tutorials, which are tutorials “on steroids” that cover a specific programming topic. Most of them come with a downloadable PDF ebook version, so make sure to check them out. We have also “opened” our content by providing numerous programming books for FREE with our JCG Newsletter. Speaking of the newsletter, our insiders list now counts over 66,000 subscribers. Make sure to hop on here to enjoy the latest news in the Java world and more. 2014 was also the year we launched JCG Academy, our premium content, subscription-based portal. The Academy features amazing courses like the one on Design Patterns and the Advanced Java one. You can learn more about the launch here and read why you should join here. You can also have a taste of it by checking our FREE course, Java Concurrency Essentials. On our quest to give even more value back to the community, we sponsored the Philadelphia Java Users’ Group November 2014 Meeting. It was great seeing the impact our efforts have on the Java community; we were really happy to contribute. On a side note, we also launched Web Code Geeks! This is our sister site, targeted at Web programming developers.
Come on, admit it, there is a web developer inside you too, so make sure to check it out and get access to two FREE ebooks by joining our WCG Newsletter. All in all, I am proud to say that Java Code Geeks now offers the best way to learn Java programming! Now, keeping with tradition, we are compiling the top Java Code Geeks posts for the year that just passed. As with the Top 10 JavaCodeGeeks posts for 2010, the Top 10 JavaCodeGeeks posts for 2011, the Top 10 JavaCodeGeeks posts for 2012 and the Top 10 JavaCodeGeeks posts for 2013, we have created a compilation of the most popular posts of this year for your eyes only. The ranking of the posts was based on the absolute number of page views per post, not necessarily unique. It includes only articles published in 2014. So, let’s see, in ascending order, the top posts for 2014.

10) Abstraction in Java – The ULTIMATE Tutorial
In this tutorial the concept of Abstraction in Java is examined. A full tutorial is provided, covering a simple Payroll System using Interfaces, Abstract Classes and Concrete Classes.

9) Single Page Application with Angular.js, Node.js and MongoDB
The new JavaScript-based libraries like Angular.js and Node.js are really getting some traction. This tutorial is a proof of concept of a web application with a JavaScript-based web server.

8) Building Java Web Application Using Hibernate With Spring
This post shows how to create a sample application using MySQL DB with Hibernate ORM in a Spring environment. The application aims to collect the input details from the user during signup, save the details in the MySQL DB and provide an authentication mechanism.

7) JUnit Tutorial for Unit Testing – The ULTIMATE Guide
The term unit testing refers to the practice of testing small units of your code, so as to ensure that they work as expected, and JUnit is the basic Java-based framework to achieve that. We have gathered all the JUnit features in one detailed guide for your convenience!
6) Android Service Tutorial
Android development is still a hot topic, and in this post we talk about the Android Service. This is a key component in developing Android apps. Unlike Activities, Services in Android run in the background; they don’t have a user interface and have a life-cycle very different from Activities.

5) JMeter Tutorial for Load Testing – The ULTIMATE Guide
This tutorial discusses JMeter, a Java-based load and performance testing tool with several applications and uses. JMeter is an application that offers several possibilities to configure and execute load, performance and stress tests using different technologies and protocols.

4) 5 most awesome desktop environments for Ubuntu
This article was a surprise to me, but it shows how popular Linux is growing, especially the Ubuntu distribution. In it we have collected five desktop environments that are superbly awesome and that you would want to use on your own machine!

3) Java 8 Features Tutorial – The ULTIMATE Guide
This year was also very important for the Java platform, since version 8 was released. It brings tons of new features to Java as a language, its compiler, libraries, tools and the JVM (Java Virtual Machine) itself. In this tutorial we take a look at all these changes and demonstrate the different usage scenarios with real examples!

2) 69 Spring Interview Questions and Answers – The ULTIMATE List
This is a summary of some of the most important questions concerning the Spring Framework that you may be asked to answer in an interview or in an interview test procedure! There is no need to worry about your next interview test, because Java Code Geeks are here for you!

1) 115 Java Interview Questions and Answers – The ULTIMATE List
The undisputed king of posts for this year. We discovered that Java interview questions are a hot topic for developers, so we delivered a massive guide with over 100 questions and their answers.
We discuss object-oriented programming and its characteristics, general questions regarding Java and its functionality, collections in Java, garbage collectors, exception handling, Java applets, Swing, JDBC, Remote Method Invocation (RMI), Servlets and JSP. Check it out!

Bonus Post: 7 Brain Tips for Software Developers. This was my personal favorite for this year and I believe that it holds “secrets” that could be life-altering for your programming career. I highly recommend it; the time invested in reading it will be totally worth it!

So, that would be all! I hope you enjoyed this, folks: our top posts for 2014. We would love to see you around again and have your support and love in the year to come. Stay tuned for more Java Code Geeks surprises within the new year. It is going to be… legendary! Happy new year everyone! From the whole Java Code Geeks team, our best wishes!

Be well,
Ilias

How Do You Serve Your Organization?

A recent coaching client was concerned about the progress his team was making—or really, the lack of progress his team was making. We spoke about the obstacles he had noticed.

“The team doesn’t have time to write automated tests. As soon as they finish developing or testing a feature, people get yanked to another project.”

“Are people, developers and testers, working together on features?” I wanted to know.

“No, first a developer works on a feature for a few days, then a tester takes it. We don’t have enough testers to pair them with developers. What would a tester do for three or four days, while a developer worked on a story?”

“So, to your managers, it looks as if the testers are hanging around, waiting on developers, right?” I wanted to make sure I understood at least one of his problems.

“Yes, that’s exactly the problem! But the testers aren’t hanging around. They’re still working on test automation for stories we said were done. We have more technical debt than ever.” He actually moaned.

“Would you like some ideas? It sounds as if you are out of ideas here.” I checked with him.

“Yes, I would!” He sounded grateful.

These were the ideas I suggested:

- Don’t mark stories as done, unless they really are done, including all the automated tests.
- You might need a kanban board instead of a Scrum board, to show your workflow to yourselves, inside the team.
- Work as developer-tester pairs, or even better, developer-developer-tester triads. Or, add even more developers, so you have enough developers to complete a story in a day or so. When the developers are done, they can see if the tester needs help with test automation hooks, before they proceed to another story.
- Make sure the product owner ranks all the stories in an iteration, regardless of which product the stories belong to. That way the team always works together, the entire iteration.

I asked him if he could do these things for the team. He said he was sure he could. I’d been coaching him for a while.
He was pretty sure he could coach his team. Now I asked him the big question: could he influence the project portfolio work at the level above him? His managers were too involved in who was doing what on the teams, and were not ranking the projects in the project portfolio. He needed to influence the managers to flow work through the team, and not move people around like chess pieces. Could he do that? He and I started to work through how, and whom, he could influence. Technical leads and first- and middle-level managers may find influence more challenging. You have to build rapport and have a relationship before you can influence people. Had he done that yet? No, not yet. You often need to serve your organization at several levels. It doesn’t matter if you are a technical leader, or someone with “manager” in your title. Rarely can you limit your problem-solving to just your team. If these challenges sound like yours, you should consider joining Gil Broza and me at the Influential Agile Leader in either San Francisco or London next year. It’s an experiential event, where you bring your concerns. We teach a little, and then help you with guided practice. It’s a way to build your servant leadership and learn how to coach up, down, and sideways. It’s a way to learn whom and how to influence. We have more sessions, so you can bring your issues and address them, with us and the other participants.

Reference: How Do You Serve Your Organization? from our JCG partner Johanna Rothman at the Managing Product Development blog.

The New Agile–More, Please!

The current buzz in the agile world is scale. Now that we know that agile looks golden, we want to apply it to everything. Agile started as a development team practice. Extreme Programming, Scrum, Feature Driven Development, and others all originated in software development teams. Since they were successful, it made sense to apply those successes to other teams as well. XP, like most early methodologies, didn’t have that concept (and frankly, didn’t try to answer the question either). If you wanted more capabilities, you had to have another team. The first methodology that at least started to answer it was, of course, Scrum. Like many things Scrum did well, it tried to answer a business need, including the question of how to handle big projects. Scrum suggested the “scrum of scrums” idea, which didn’t happen to stick, but at least it had something. As a few years went by, another question started to come up: not only do we need to manage multiple teams, but some of them are not co-located! The question did not just appear out of the blue. We had dispersed teams before, and they collaborated by email and phone. But now, in the new millennium, the internet helped with that collaboration. There were video conferencing tools, and Skype, and cell phones. We had the technology! Surely, it could help in solving team problems! Don’t call me Shirley. There were (and still are) mixed answers. Some teams work together very well on different continents. Some teams don’t work very well placed on different floors of the same building. We find time and again that tools can be great enablers, but individuals and interactions come first. The current view is that things can be worked out to a certain extent, and besides, moving people around the globe is not possible, even if those agile guys say it’s better for the business. So distributed teams are no longer a problem to be solved, but a situation we should learn to make the best of.
The next scaling step was how to execute big product decisions, and make sure execution stays in alignment with the vision. Out of all the buzzwords, alignment is the one that matters most. It’s easy (sort of) to maintain alignment in a team of 8 people. How can it be done in a team of 200? How can we maintain execution velocity over multiple teams, and still have them all move in the right direction? Note that this is not the first time this question has been asked in the business world. We just expect an “agile” answer now. The truth is we are still learning. Whether it’s SAFe (as it is now, or a future version), or something else, if anything – we don’t know yet. SAFe has some good suggestions, but is in fact still too young to actually check long-term effects on big organizations. We’ll see. But wait, there’s more! We always want more of a good thing. Feedback, for instance. More, quicker feedback is better. If before we had continuous integration, which gave us quick feedback on quality, now technology brings us continuous delivery. Now we can deliver to production at the push of a button (or an automatic trigger) and get the feedback directly from the customer. If we do it right, that is. With current technology and tools, we can deliver and deploy, as well as revert and fix problems. Having the tools is one thing, we have learned; we still need to be able to use them correctly. If we’re talking about feedback, we should mention Test Driven Development. TDD gives us feedback that our code is working, and using test-first we specify how its interface should be called. But that was at the function level. To get more, we got Acceptance Test Driven Development. ATDD gives us feedback that our code is now working for the customer, and by doing it test-first, we get that alignment we sought earlier: instead of developing things that the customer doesn’t need, ATDD shows us the way, and we develop just that. As TDD gave us less YAGNI to worry about, ATDD scales that YAGNI.
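That function-level feedback loop can be sketched in a few lines. This is a hypothetical, minimal illustration (all names invented for the example): the test is conceptually written first, fixing how the interface is called and what it must return, and the production code exists only to make it pass.

```java
// Hypothetical TDD illustration. The test method below stands in for a
// JUnit test that was written before the production method existed: it
// specifies the interface (name, parameters, return value) test-first.
class PriceCalculator {

    // Production code, written only to satisfy the test below
    static int totalCents(int unitCents, int quantity) {
        return unitCents * quantity;
    }

    // A hand-rolled check standing in for a JUnit test
    static void testTotalIsUnitPriceTimesQuantity() {
        if (totalCents(250, 4) != 1000) {
            throw new AssertionError("expected 1000 cents for 4 items at 2.50");
        }
    }

    public static void main(String[] args) {
        testTotalIsUnitPriceTimesQuantity();
        System.out.println("test passed");
    }
}
```

ATDD does the same thing one level up: the specification is a customer-facing acceptance test rather than a function-level unit test.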
Again, if we do it correctly. Over the last 15 years, agile did not stand still. It moved forward and sideways, tried a few things and experimented a lot. Mixing the available technologies was part of agile growing up.  In the next chapters, we’ll dive into specific areas and see what kind of progress we made.Reference: The New Agile–More, Please! from our JCG partner Gil Zilberfeld at the Geek Out of Water blog....

Three Common Methods Generated in Three Java IDEs

In this post, I look at the differences in three “common” methods [equals(Object), hashCode(), and toString()] as generated by NetBeans 8.0.2, IntelliJ IDEA 14.0.2, and Eclipse Luna 4.4.1. The objective is not to determine which is best, but to show the different approaches one can use for implementing these common methods. Along the way, some interesting insights can be picked up regarding the creation of these common methods, based on what the IDEs assume and prompt the developer to set.

NetBeans 8.0.2

NetBeans 8.0.2 allows the Project Properties to be configured to support the JDK 8 platform and to expect JDK 8 source formatting, as shown in the next two screen snapshots. Code is generated in NetBeans 8.0.2 by clicking on Source | Insert Code (or keystrokes Alt+Insert). When generating the methods equals(Object), hashCode(), and toString(), NetBeans 8.0.2 asks for the attributes to be used in each of these generated methods, as depicted in the next two screen snapshots. The NetBeans-generated methods take advantage of the JDK 7-introduced Objects class.
NetBeans-Generated hashCode() Method for Class

@Override
public int hashCode() {
    int hash = 5;
    hash = 29 * hash + Objects.hashCode(this.someString);
    hash = 29 * hash + Objects.hashCode(this.timeUnit);
    hash = 29 * hash + this.integer;
    hash = 29 * hash + Objects.hashCode(this.longValue);
    return hash;
}

NetBeans-Generated equals(Object) Method for Class

@Override
public boolean equals(Object obj) {
    if (obj == null) {
        return false;
    }
    if (getClass() != obj.getClass()) {
        return false;
    }
    final NetBeans802GeneratedCommonMethods other = (NetBeans802GeneratedCommonMethods) obj;
    if (!Objects.equals(this.someString, other.someString)) {
        return false;
    }
    if (this.timeUnit != other.timeUnit) {
        return false;
    }
    if (this.integer != other.integer) {
        return false;
    }
    if (!Objects.equals(this.longValue, other.longValue)) {
        return false;
    }
    return true;
}

NetBeans-Generated toString() Method for Class

@Override
public String toString() {
    return "NetBeans802GeneratedCommonMethods{" + "someString=" + someString + ", timeUnit=" + timeUnit + ", integer=" + integer + ", longValue=" + longValue + '}';
}

Some observations can be made regarding the NetBeans-generated common methods:

- All generated code is automatic and does not support customization, with the exception of the fields used in the methods, which the operator selects.
- All of these common methods that extend counterparts in the Object class automatically have the @Override annotation provided.
- No Javadoc documentation is included for generated methods.
- The methods make use of the Objects class to make the generated code more concise, with less need for null checks.
- Only one format is supported for the String generated by toString(), and that output format is a single comma-delimited line.
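To see why the Objects class lets the generated methods skip explicit null checks, consider this small sketch (my own illustration, not NetBeans output):

```java
import java.util.Objects;

// Illustration (not IDE-generated): the JDK 7 java.util.Objects utility
// methods are null-safe, which is what lets the NetBeans-generated
// equals()/hashCode() avoid explicit null checks on reference fields.
class ObjectsNullSafetyDemo {
    public static void main(String[] args) {
        String s = null;
        // Objects.hashCode(null) returns 0 instead of throwing NullPointerException
        System.out.println(Objects.hashCode(s));       // 0
        // Objects.equals treats two nulls as equal, null vs. non-null as unequal
        System.out.println(Objects.equals(s, null));   // true
        System.out.println(Objects.equals(s, "x"));    // false
        System.out.println(Objects.equals("x", "x"));  // true
    }
}
```

Calling s.hashCode() or s.equals(...) directly on a null field would throw a NullPointerException, which is exactly the case the other two IDEs guard against with explicit conditionals.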
I did not show it in the above example, but NetBeans 8.0.2’s method generation does treat arrays differently than references, enums, and primitives in some cases:

- The generated toString() method treats array attributes of the instance like it treats other instance attributes: it relies on the array’s toString(), which leads to often undesirable and typically useless results (the array’s system identity hash code). It’d generally be preferable to have the string contents of array attributes provided by Arrays.toString(Object[]) (or an equivalent overloaded version) or Arrays.deepToString(Object[]).
- The generated hashCode() method uses Arrays.deepHashCode(Object[]) for handling arrays’ hash codes.
- The generated equals(Object) method uses Arrays.deepEquals(Object[], Object[]) for handling arrays’ equality checks.

It is worth highlighting here that NetBeans uses the “deep” versions of the Arrays methods for comparing arrays for equality and computing arrays’ hash codes, while IntelliJ IDEA and Eclipse use the regular (not deep) versions of the Arrays methods for those purposes.

IntelliJ IDEA 14.0.2

For these examples, I’m using IntelliJ IDEA 14.0.2 Community Edition. IntelliJ IDEA 14.0.2 provides the ability to configure the Project Structure to expect a “Language Level” of JDK 8. To generate code in IntelliJ IDEA 14.0.2, one uses the Code | Generate options (or keystrokes Alt+Insert, like NetBeans). IntelliJ IDEA 14.0.2 prompts the operator for which attributes should be included in the generated methods. It also asks which fields are non-null, meaning which fields are assumed to never be null. In the snapshot shown here, they are checked, which would lead to methods not checking those attributes for null before trying to access them.
In the code that I generate with IntelliJ IDEA for this post, however, I won’t have those checked, meaning that IntelliJ IDEA will check for null before accessing them in the generated methods. IntelliJ IDEA 14.0.2’s toString() generation provides a lengthy list of formats (templates) for the generated toString() method. IntelliJ IDEA 14.0.2 also allows the operator to select the attributes to be included in the generated toString() method (selected when the highlighted background is blue).

IDEA-Generated equals(Object) Method for Class

public boolean equals(Object o) {
    if (this == o) return true;
    if (o == null || getClass() != o.getClass()) return false;

    Idea1402GeneratedCommonMethods that = (Idea1402GeneratedCommonMethods) o;

    if (integer != that.integer) return false;
    if (longValue != null ? !longValue.equals(that.longValue) : that.longValue != null) return false;
    if (someString != null ? !someString.equals(that.someString) : that.someString != null) return false;
    if (timeUnit != that.timeUnit) return false;

    return true;
}

IDEA-Generated hashCode() Method for Class

@Override
public int hashCode() {
    int result = someString != null ? someString.hashCode() : 0;
    result = 31 * result + (timeUnit != null ? timeUnit.hashCode() : 0);
    result = 31 * result + integer;
    result = 31 * result + (longValue != null ?
        longValue.hashCode() : 0);
    return result;
}

IDEA-Generated toString() Method for Class

@Override
public String toString() {
    return "Idea1402GeneratedCommonMethods{" + "someString='" + someString + '\'' + ", timeUnit=" + timeUnit + ", integer=" + integer + ", longValue=" + longValue + '}';
}

Some observations can be made regarding the IntelliJ IDEA-generated common methods:

- Most generated code is automatic, with minor customization available: the fields used in the methods, which the operator selects; the specification of which fields are expected to be non-null (so that null checks are not needed in generated code); and the ability to select one of eight built-in toString() formats.
- All of these common methods that extend counterparts in the Object class automatically have the @Override annotation provided.
- No Javadoc documentation is included for generated methods.
- The generated methods do not make use of the Objects class and so require explicit checks for null for all references that could be null.

It’s not shown in the above example, but IntelliJ IDEA 14.0.2 does treat arrays differently in the generation of these three common methods:

- The generated toString() method uses Arrays.toString(Object[]) (or an overloaded version) on the array.
- The generated hashCode() method uses Arrays.hashCode(Object[]) (or an overloaded version) on the array.
- The generated equals(Object) method uses Arrays.equals(Object[], Object[]) (or an overloaded version) on the array.

Eclipse Luna 4.4.1

Eclipse Luna 4.4.1 allows the Java Compiler in Project Properties to be set to JDK 8. In Eclipse Luna, the developer uses the “Source” drop-down to select the specific type of source code generation to be performed. Eclipse Luna allows the operator to select the attributes to be included in the common methods. It also allows the operator to specify a few characteristics of the generated methods.
For example, the operator can choose to have the elements of an array printed individually in the generated toString() method rather than the often meaningless class name and system identity hash code.

Eclipse-Generated hashCode() Method for Class

/* (non-Javadoc)
 * @see java.lang.Object#hashCode()
 */
@Override
public int hashCode()
{
   final int prime = 31;
   int result = 1;
   result = prime * result + this.integer;
   result = prime * result + ((this.longValue == null) ? 0 : this.longValue.hashCode());
   result = prime * result + ((this.someString == null) ? 0 : this.someString.hashCode());
   result = prime * result + ((this.timeUnit == null) ? 0 : this.timeUnit.hashCode());
   return result;
}

Eclipse-Generated equals(Object) Method for Class

/* (non-Javadoc)
 * @see java.lang.Object#equals(java.lang.Object)
 */
@Override
public boolean equals(Object obj)
{
   if (this == obj)
      return true;
   if (obj == null)
      return false;
   if (getClass() != obj.getClass())
      return false;
   Eclipse441GeneratedCommonMethods other = (Eclipse441GeneratedCommonMethods) obj;
   if (this.integer != other.integer)
      return false;
   if (this.longValue == null)
   {
      if (other.longValue != null)
         return false;
   }
   else if (!this.longValue.equals(other.longValue))
      return false;
   if (this.someString == null)
   {
      if (other.someString != null)
         return false;
   }
   else if (!this.someString.equals(other.someString))
      return false;
   if (this.timeUnit != other.timeUnit)
      return false;
   return true;
}

Eclipse-Generated toString() Method for Class

/* (non-Javadoc)
 * @see java.lang.Object#toString()
 */
@Override
public String toString()
{
   return "Eclipse441GeneratedCommonMethods [someString=" + this.someString +
      ", timeUnit=" + this.timeUnit +
      ", integer=" + this.integer +
      ", longValue=" + this.longValue + "]";
}

Some observations can be made regarding the Eclipse-generated common methods:

- Eclipse provides the most points in the generation process at which the generated output can be configured. Here are some of the configurable options:
  - The location in the class (before or after existing methods of the class) can be explicitly specified.
  - "Method comments" can be generated, but they are not Javadoc-style comments (they use /* instead of /** and explicitly state, as part of the generated comment, that they are not Javadoc comments).
  - The option to "list contents of arrays instead of using native toString()" allows the developer to have Arrays.toString(Object[]) be used (same as IntelliJ IDEA's approach; occurs if checked) or to have the system identity hash code be used (same as NetBeans's approach; occurs if not checked).
  - Four toString() styles are supported, plus the ability to specify a custom style.
  - The number of entries of an array, collection, or map that is printed in toString() can be limited.
  - instanceof can be used in the generated equals(Object) implementation.
- All of these common methods that extend counterparts in the Object class automatically have the @Override annotation provided.
- The generated methods do not make use of the Objects class and so require explicit checks for null for all references that could be null.

Eclipse Luna 4.4.1 does treat arrays differently when generating the three common methods highlighted in this post:

- The generated toString() optionally uses Arrays.toString(Object[]) (or an overloaded version) for accessing the contents of an array.
- The generated equals(Object) uses Arrays.equals(Object[], Object[]) (or an overloaded version) for comparing arrays for equality.
- The generated hashCode() uses Arrays.hashCode(Object[]) (or an overloaded version) for computing the hash code of an array.

Conclusion

All three IDEs covered in this post (NetBeans, IntelliJ IDEA, and Eclipse) generate sound implementations of the common methods equals(Object), hashCode(), and toString(), but there are differences in the customizability of these generated methods across the three IDEs.
The different customizations that are available, and the different implementations that are generated, offer lessons for developers new to Java to consider when implementing these methods. The most obvious and significant advantage of having an IDE generate these methods is the time saved, but IDE generation also helps developers learn how these methods should be implemented and makes typos and other errors less likely.

Reference: Three Common Methods Generated in Three Java IDEs from our JCG partner Dustin Marx at the Inspired by Actual Events blog.
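The observations above note that none of the IDE-generated methods use the java.util.Objects class introduced in JDK 7. For comparison, here is a hand-written sketch of the same three methods built on Objects.equals and Objects.hash; the class name ObjectsBasedCommonMethods is hypothetical, but its fields mirror those in the generated examples above:

```java
import java.util.Objects;
import java.util.concurrent.TimeUnit;

// Hypothetical class with the same four fields as the IDE-generated examples,
// but with equals/hashCode delegating null handling to java.util.Objects.
class ObjectsBasedCommonMethods
{
   private final String someString;
   private final TimeUnit timeUnit;
   private final int integer;
   private final Long longValue;

   ObjectsBasedCommonMethods(String someString, TimeUnit timeUnit, int integer, Long longValue)
   {
      this.someString = someString;
      this.timeUnit = timeUnit;
      this.integer = integer;
      this.longValue = longValue;
   }

   @Override
   public boolean equals(Object o)
   {
      if (this == o) return true;
      if (o == null || getClass() != o.getClass()) return false;
      ObjectsBasedCommonMethods that = (ObjectsBasedCommonMethods) o;
      // Objects.equals is null-safe, so no explicit null checks are needed.
      return integer == that.integer
         && Objects.equals(someString, that.someString)
         && timeUnit == that.timeUnit
         && Objects.equals(longValue, that.longValue);
   }

   @Override
   public int hashCode()
   {
      // Objects.hash applies the familiar 31-based formula and tolerates nulls.
      return Objects.hash(someString, timeUnit, integer, longValue);
   }

   @Override
   public String toString()
   {
      return "ObjectsBasedCommonMethods{someString='" + someString + "', timeUnit=" + timeUnit
         + ", integer=" + integer + ", longValue=" + longValue + "}";
   }

   public static void main(String[] args)
   {
      ObjectsBasedCommonMethods a = new ObjectsBasedCommonMethods("abc", TimeUnit.SECONDS, 42, null);
      ObjectsBasedCommonMethods b = new ObjectsBasedCommonMethods("abc", TimeUnit.SECONDS, 42, null);
      System.out.println(a.equals(b));                   // true (null longValue handled safely)
      System.out.println(a.hashCode() == b.hashCode());  // true
   }
}
```

Note that Objects.hash should not be handed an array field directly; as with the generated code discussed above, array contents still call for Arrays.hashCode, Arrays.equals, and Arrays.toString.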
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.