
What's New Here?


My favourite Java puzzler 2 + 1 = 4

Here’s my current favourite Java puzzler. How can you get your code to do this?

```java
Integer b = 2;
Integer c = 1;
System.out.println("b+c : " + (b + c)); // output: 'b+c : 4' !!
```

There are no tricks with System.out.println(), i.e. you would be able to see the same value in a debugger. Clue: you need to add a few lines of code somewhere before this in your program. Scroll down for the solution.

.
.
.

```java
import java.lang.reflect.Field;

// Class wrapper added so the snippet compiles as-is
public class Puzzler {
    public static void main(String... args) throws Exception {
        Integer a = 2;
        // Reach into the boxed Integer and mutate its private 'value' field
        Field valField = a.getClass().getDeclaredField("value");
        valField.setAccessible(true);
        valField.setInt(a, 3);

        Integer b = 2;
        Integer c = 1;
        System.out.println("b+c : " + (b + c)); // b+c : 4
    }
}
```

As you can see (and I would encourage you to go to the source code for Integer), there is a static cache (look for the inner class IntegerCache) where the boxed Integer is mapped to its corresponding int value. The cache will store all numbers from -128 to 127, although you can tune the upper bound using the property java.lang.Integer.IntegerCache.high.

Reference: My favourite Java puzzler 2 + 1 = 4 from our JCG partner Daniel Shaya at the Rational Java blog....
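The mutation works because autoboxing of small values goes through that shared cache, so every later `Integer b = 2` hands back the very instance we tampered with. A quick illustration of the cache boundary (my own sketch, not from the original post):

```java
public class IntegerCacheDemo {
    public static void main(String[] args) {
        Integer a = 127, b = 127;
        System.out.println(a == b); // true: both autoboxed from the shared IntegerCache
        Integer c = 128, d = 128;
        System.out.println(c == d); // false: 128 is outside the default -128..127 cache
    }
}
```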

Docker container linking across multiple hosts

Docker container linking is an important concept to understand, since any application in production will typically run on a cluster of containers across multiple hosts. But simple container linking does not allow cross-host communication.

What’s the issue with Docker container linking? Docker containers can communicate with each other by manually linking, as shown in Tech Tip #66, or orchestrated using Fig, as shown in Tech Tip #68. Both of these use container linking, but that has an inherent disadvantage: it is restricted to a single host. Linking does not work if containers are running across multiple hosts.

What is the solution? This Tech Tip will evolve the sample built in Tech Tips #66 and #68 and show how the containers can be connected if they are running across multiple hosts. Docker container linking across multiple hosts can easily be done by explicitly publishing the host/port and using it from a container on a different host. Let’s get started!

Start the MySQL container as:

```
docker run --name mysqldb \
  -e MYSQL_USER=mysql \
  -e MYSQL_PASSWORD=mysql \
  -e MYSQL_DATABASE=sample \
  -e MYSQL_ROOT_PASSWORD=supersecret \
  -p 5306:3306 \
  -d mysql
```

The MySQL container explicitly forwards container port 3306 to host port 5306. The Git repo has customization/execute.sh, which creates the MySQL data source. The command looks like:

```
data-source add --name=mysqlDS --driver-name=mysql --jndi-name=java:jboss/datasources/ExampleMySQLDS --connection-url=jdbc:mysql://$DB_PORT_3306_TCP_ADDR:$DB_PORT_3306_TCP_PORT/sample?useUnicode=true&characterEncoding=UTF-8 --user-name=mysql --password=mysql --use-ccm=false --max-pool-size=25 --blocking-timeout-wait-millis=5000 --enabled=true
```

This command creates the JDBC resource for WildFly using jboss-cli. It uses the $DB_PORT_3306_TCP_ADDR and $DB_PORT_3306_TCP_PORT variables, which are defined per Container Linking Environment Variables. The scheme by which the environment variables for containers are created is rather weird: it exposes the port number in the variable name itself. I hope this improves in subsequent releases. This command needs to be updated so that an explicit host/port can be used instead. So update the command to:

```
data-source add --name=mysqlDS --driver-name=mysql --jndi-name=java:jboss/datasources/ExampleMySQLDS --connection-url=jdbc:mysql://$MYSQL_HOST:$MYSQL_PORT/sample?useUnicode=true&characterEncoding=UTF-8 --user-name=mysql --password=mysql --use-ccm=false --max-pool-size=25 --blocking-timeout-wait-millis=5000 --enabled=true
```

The only change in the command is to use the $MYSQL_HOST and $MYSQL_PORT variables. This command already exists in the file but is commented out, so just comment out the previous one and uncomment this one. Build the image and run it as:

```
docker build -t arungupta/wildfly-mysql-javaee7 .
docker run --name mywildfly -e MYSQL_HOST=<IP_ADDRESS> -e MYSQL_PORT=5306 -p 8080:8080 -d arungupta/wildfly-mysql-javaee7
```

Make sure to substitute <IP_ADDRESS> with the IP address of your host. For convenience, I ran it on the same host; the IP address in this case can easily be obtained using boot2docker ip.
A quick verification of the deployment can be done by accessing the REST endpoint:

```
curl http://192.168.59.103:8080/employees/resources/employees/
```

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><collection><employee><id>1</id><name>Penny</name></employee><employee><id>2</id><name>Sheldon</name></employee><employee><id>3</id><name>Amy</name></employee><employee><id>4</id><name>Leonard</name></employee><employee><id>5</id><name>Bernadette</name></employee><employee><id>6</id><name>Raj</name></employee><employee><id>7</id><name>Howard</name></employee><employee><id>8</id><name>Priya</name></employee></collection>
```

With this, your WildFly and MySQL can run on two separate hosts, no special configuration required. Enjoy! Docker allows cross-host container linking using Ambassador Containers, but that adds a redundant hop for the service to be accessed. A cleaner solution would be to use Kubernetes or Swarm; more on that later. Marek also blogged about a more elaborate solution in Connecting Docker Containers on Multiple Hosts.

Reference: Docker container linking across multiple hosts from our JCG partner Arun Gupta at the Miles to go 2.0 … blog....
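If you would rather verify the database link itself, any JDBC client on a different host can hit the published port directly, just as the WildFly container does. A minimal sketch (my own, not from the original tip; it assumes the MySQL JDBC driver is on the classpath and reuses the credentials and port from the docker run command above):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CrossHostMySQLCheck {
    public static void main(String[] args) throws Exception {
        // 192.168.59.103 stands in for your Docker host's IP (e.g. from `boot2docker ip`);
        // 5306 is the host port published by `-p 5306:3306` above.
        String url = "jdbc:mysql://192.168.59.103:5306/sample"
                   + "?useUnicode=true&characterEncoding=UTF-8";
        try (Connection con = DriverManager.getConnection(url, "mysql", "mysql");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1")) { // no schema assumptions
            rs.next();
            System.out.println("MySQL reachable across hosts: " + rs.getInt(1));
        }
    }
}
```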

JBoss Data Virtualization 6.1 Beta Now Available

JBoss Data Virtualization (JDV) is a data integration solution that sits in front of multiple data sources and allows them to be treated as a single source. To do that, it offers data abstraction, federation, integration, transformation, and delivery capabilities to combine data from one or multiple sources into reusable and unified logical data models, accessible through standard SQL (JDBC, ODBC, Hibernate) and/or web service (REST, OData, SOAP) interfaces. Yesterday the latest 6.1 Beta was made available for download. It focuses on three major areas: Big Data, Cloud, and development and deployment improvements.

Big Data Improvements

In addition to the Apache Hive support released in 6.0, 6.1 will also offer support for Cloudera Impala for fast SQL query access to data stored in Hadoop. Also new in 6.1 is support for Apache Solr as a data source. With Apache Solr you are able to take advantage of enterprise search capabilities for organized retrieval of structured and unstructured data. Another area of improvement is the updated support for MongoDB as a NoSQL data source. This was already introduced as a Technical Preview in 6.0 and will be fully supported in 6.1. The JBoss Data Grid support has been further expanded and brings the ability to perform writes in addition to reads. With 6.1 it is also possible to take advantage of JDG Library mode as an embedded cache, in addition to the remote cache support that was previously available. Newly introduced in this release is Apache Cassandra support, which is included as an unsupported technical preview.

Cloud Improvements

The OpenShift cartridge for 6.1 will be updated with a new WebUI that focuses on ease of use for web and mobile developers. This lightweight user interface allows users to quickly access a library of existing data services, or create one of their own in a top-down manner. Besides that, support for the Salesforce.com (SFDC) API has been improved. It now supports the Bulk API with a better RESTful interface and better resource handling, and is now able to handle very large data sets. Finally, the 6.1 version brings full support for JBoss Data Virtualization on Amazon EC2 and Google Compute Engine.

Productivity and Deployment Improvements

The consistent, centralized security capabilities across multiple heterogeneous data sources got even better with a security audit log dashboard that can be viewed in the dashboard builder. It works with JDV’s RBAC feature and displays who has been accessing what data, and when. Besides the large set of already supported data sources, JDV already allowed you to create custom integrations, called translators. Those have been reworked, and developer productivity has improved through usability features including archetype templates that can be used to generate a starting Maven project for custom development. When the project is created, it will contain the essential classes and resources to begin adding custom logic. JDV 6.1 will provide support for the Azul Zing JVM. Azul Zing is optimized for Linux server deployments and designed for enterprise applications and workloads that require any combination of large memory, high transaction rates, low latency, consistent response times, or high sustained throughput. Support for MariaDB as a data source has been added. The Excel support has been further extended and allows you to read Microsoft Excel documents on all platforms by using the Apache POI connector.
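Since JDV exposes its unified logical data models over plain JDBC (via the underlying Teiid project's driver), a client looks like any other JDBC program. A minimal sketch, assuming a VDB named "sample" deployed on a local JDV server with the default JDBC port 31000; the view name and credentials are illustrative, not from the announcement:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdvJdbcSketch {
    public static void main(String[] args) throws Exception {
        // Teiid JDBC URL format: jdbc:teiid:<vdb-name>@mm://<host>:<port>
        String url = "jdbc:teiid:sample@mm://localhost:31000";
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement st = con.createStatement();
             // The federated view looks like an ordinary table to the client
             ResultSet rs = st.executeQuery("SELECT * FROM Customers")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```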
Find out more:

- Discuss on the JBoss Data Virtualization Forums
- Get started with one of our many quickstarts
- View the JBoss Data Virtualization Documentation
- Have a look at the 6.1 Beta Documentation

Reference: JBoss Data Virtualization 6.1 Beta Now Available from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog....

Fail-fast validations using Java 8 streams

I’ve lost count of the number of times I’ve seen code which fail-fast validates the state of something, using an approach like:

```java
public class PersonValidator {
    public boolean validate(Person person) {
        boolean valid = person != null;
        if (valid) valid = person.givenName != null;
        if (valid) valid = person.familyName != null;
        if (valid) valid = person.age != null;
        if (valid) valid = person.gender != null;
        // ...and many more
        return valid;
    }
}
```

It works, but it’s a brute-force approach that’s filled with repetition due to the valid check. If your code style enforces braces for if statements (+1 for that), your method is also three times longer, and it grows every time a new check is added to the validator. Using Java 8’s new stream API, we can improve this by taking the guard condition of if (valid) and making a generic validator that handles the plumbing for you.

```java
import java.util.LinkedList;
import java.util.List;
import java.util.function.Function;

public class GenericValidator<T> implements Function<T, Boolean> {

    private final List<Function<T, Boolean>> validators = new LinkedList<>();

    public GenericValidator(List<Function<T, Boolean>> validators) {
        this.validators.addAll(validators);
    }

    @Override
    public Boolean apply(final T toValidate) {
        // a final array allows us to change the boolean value within a lambda
        final boolean[] guard = {true};
        return validators.stream()
                         // only send the validator downstream if
                         // previous validations were successful
                         .filter(validator -> guard[0])
                         .map(validator -> validator.apply(toValidate))
                         // update the guard condition
                         .map(result -> {
                             guard[0] = result;
                             return result;
                         })
                         // logically AND the results of the applied validators
                         .reduce(guard[0], (b1, b2) -> b1 && b2);
    }
}
```

Using this, we can rewrite the Person validator to be a specification of the required validations.

```java
import java.util.LinkedList;
import java.util.List;
import java.util.function.Function;

public class PersonValidator extends GenericValidator<Person> {

    private static final List<Function<Person, Boolean>> VALIDATORS = new LinkedList<>();

    static {
        VALIDATORS.add(person -> person.givenName != null);
        VALIDATORS.add(person -> person.familyName != null);
        VALIDATORS.add(person -> person.age != null);
        VALIDATORS.add(person -> person.gender != null);
        // ...and many more
    }

    public PersonValidator() {
        super(VALIDATORS);
    }
}
```

PersonValidator, and all your other validators, can now focus completely on validation. The behaviour hasn’t changed: the validation still fails fast. There’s no boilerplate, which is A Good Thing. This one’s going in the toolbox.

Reference: Fail-fast validations using Java 8 streams from our JCG partner Steve Chaloner at the Objectify blog....
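For completeness, here is a hypothetical Person bean matching the fields used above, plus a usage sketch (my own, not from the original post):

```java
public class Person {
    public String givenName;
    public String familyName;
    public Integer age;
    public String gender;

    public static void main(String[] args) {
        Person p = new Person();
        p.givenName = "Ada";
        p.familyName = "Lovelace";
        p.age = 36;
        p.gender = "F";

        PersonValidator validator = new PersonValidator();
        System.out.println(validator.apply(p)); // true: all checks pass

        p.age = null;
        System.out.println(validator.apply(p)); // false: fails fast at the age check
    }
}
```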

First rule of performance optimisation

Let’s start with a system with no obvious performance bottlenecks. By that I mean that there are no glaring algorithmic problems grinding your system to a halt, e.g. a tight loop which is reading a property from a file without caching the result. You want your system to run as fast as possible; where do you start? Most profilers (e.g. my current favourite, YourKit) have modules for memory tracing and CPU tracing. Since the aim of the exercise is for your program to run faster, you start by looking at the CPU, right? Wrong! The first place to start is by looking at the memory, and in particular at object allocation.

What you should always try to do, in the first instance, is reduce your object allocations as much as possible. The reason this is not intuitive (at least it wasn’t to me) is that we know object allocation is fast; in fact it’s super fast, even compared to languages like C. (There’s lots of discussion on the web about exactly when and in what circumstances it can be faster, but it’s undeniably fast.) So, if object allocation is so fast, why is it slowing my program down, and why should I start by minimising my object allocation?

1. It puts extra pressure on the garbage collector. Having more objects in the system (especially if they are not short-lived) gives your garbage collector more work and slows down the system that way.
2. It fills up your CPU caches with garbage, forcing them to flush and to keep going further out to L2 and L3 cache, and then to main memory, to retrieve the data. Roughly speaking, each level from which data has to be fetched takes an order of magnitude longer (see the latency graphic in the original post). So even if object allocation is fast, it causes cache misses and thus many wasted CPU cycles, which will slow your program down.
3. Do the easy things first. It’s far easier, in general, to reduce allocation (by caching etc.) than it is to fix algorithms when looking at a CPU performance trace. Changing the allocations may completely change the performance characteristics of your program, and any changes to algorithms carried out before that may turn out to have been a waste of time.
4. Profilers lie (this is a must-watch video). It’s really hard to know, when looking at CPU traces, where exactly the bottlenecks lie. Profilers, however, do not lie about allocations.
5. High object allocation is often a bad smell in the code. Looking for excess object allocation will lead you to algorithmic issues.

Reference: First rule of performance optimisation from our JCG partner Daniel Shaya at the Rational Java blog....
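As a concrete (if tiny) illustration of point 3, hoisting a single reusable buffer out of a hot loop removes a per-iteration allocation without touching the algorithm. The code is my own sketch, not from the original post:

```java
public class AllocationDemo {
    public static void main(String[] args) {
        // One buffer allocated up front instead of a fresh String per iteration
        StringBuilder sb = new StringBuilder();
        long checksum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sb.setLength(0);              // reuse the buffer: no per-iteration garbage
            sb.append("value-").append(i);
            checksum += sb.length();      // stand-in for real work on the message
        }
        System.out.println(checksum);
    }
}
```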

Why Now is the Perfect Time to Upgrade to Java 8

Interested to see how you can get the most out of the new Java 8 features with AppDynamics? Start a FREE trial now!

This past March, Oracle released their most anticipated version in almost a decade: Java 8. The latest version had a growing buzz since it was announced, and companies of all sizes were eager to upgrade. Our partner Typesafe conducted a Java 8 adoption survey of 2,800 developers and found that 65% of companies had already committed to adopting within the first 24 months of the release date. Typesafe’s survey corroborated InfoQ’s survey of developers, in which 61% stated they were devoted to adopting Java 8. Their handy heatmap (shown in the original post) displays how excited developers were to get started with Java 8 and utilize the new features such as lambda expressions, date and time, and the Nashorn JavaScript engine. In my opinion, lambda expressions are by far the most exciting new Java 8 feature.

So, why are folks so excited for Java 8?

Lambda Expressions and Stream Processing

What are they? Lambda expressions are arguably the most exciting and interesting new feature of the Java 8 release. Not only is the feature itself exciting for engineers, the implications will have resounding effects on flexibility and productivity. A lambda expression is essentially an anonymous function which can be invoked as a named function normally would be, or passed as an argument to a higher-order function. The introduction of lambdas opens up aspects of functional programming to the predominantly object-oriented Java environment, enabling your code to be more concise and flexible.

Why is it useful? Consider the task of parsing Twitter data from a given user’s home stream. Specifically, we’ll be creating a map of word length to a list of words of the same length from a user’s home stream. For instance, a handful of statuses should yield:

{ 2=[so, an], 3=[are, for], 4=[wont, here, some, tips], 7=[extreme], 8=[programs, makeover], 9=[sometimes, uninstall], 11=[misbehaving, application] }

And of course, for many tweets this data is aggregated. Using traditional Java loop constructs, this can be solved with nested loops and explicit map bookkeeping (the listing appears as an image in the original post). Let’s break down what’s happening step by step:

- Fetch the Twitter home timeline
- For each status:
  - Extract the text
  - Remove punctuation
  - Gather everything in one big list of words
- For each word:
  - Filter HTTP links and empty words
  - Add the word to a mapping of length to list of words of the same length

Now, let’s consider the solution using stream processing and lambdas; a reconstruction is shown after this list of steps. The lambda solution follows the same logic and is significantly shorter. To boot, this solution can very easily be parallelized; the original post also lists a version performing the same processing in parallel. Though a contrived example for purposes of illustration, the implications here are profound. By adding lambda expressions, code can be developed faster, be clearer, and be more flexible overall.
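Here is a sketch of what that stream-based solution might look like. It is my reconstruction, not the author's listing (which only survives as an image in the original post), and the Twitter API call is replaced by a hard-coded list so the sketch is self-contained:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class WordLengthMap {
    // Groups words from a list of status texts by their length,
    // skipping links and empty tokens.
    static Map<Integer, List<String>> wordsByLength(List<String> statuses) {
        return statuses.stream()                                   // .parallelStream() parallelizes it
                .map(text -> text.replaceAll("[^a-zA-Z0-9 ]", "")) // remove punctuation
                .flatMap(text -> Arrays.stream(text.split("\\s+")))
                .filter(word -> !word.isEmpty() && !word.startsWith("http"))
                .collect(Collectors.groupingBy(String::length));
    }

    public static void main(String[] args) {
        System.out.println(wordsByLength(Arrays.asList(
                "sometimes an application is misbehaving",
                "so here are some tips for an extreme makeover")));
    }
}
```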
Flexible Code

As mentioned earlier, the implications of adding lambda expressions are huge. Flexible code is one of the biggest advantages of this feature. In today’s Agile and rapid-release engineering environment, it’s imperative for your code to be amenable to change. Java has finally begun to close the gap on other, more nimble programming languages. As another example, let’s consider an enhancement request for our Twitter processor. In the abstract, we wish to procure a list of Twitter timeline statuses which are deemed “interesting”. Concretely: the retweet count is greater than 1 and the status text contains the word “awesome”. This is rather straightforward to implement. Now, at some later point in time, suppose product management decides to change what it means for a tweet to be interesting. Specifically, we’ll need to provide a user interface where a user can indicate, based on an available set of criteria, how a tweet is deemed interesting. This poses an interesting set of challenges. First, a user interface should provide some representation of the available set of filter criteria. More importantly, that representation should manifest in the Twitter processor as a formal set of filter criteria applied in code. One approach is to parameterize the filter so that calling code specifies the criteria; a sketch of this strategy appears at the end of this section. This grants calling code the ability to specify arbitrary filter criteria, realized by the UI component. By disambiguating how the timeline is filtered from what criteria are imposed, the code is now flexible enough to accept arbitrary filter criteria. Full code details can be found at the following GitHub repository.

Summary

In short, lambda expressions in Java 8 enable development of clear, concise code while maximizing flexibility to remain responsive to future changes. Engineers, and entire companies, work better when they can spend more time innovating on new products and features rather than spending a majority of their time firefighting existing problems and squashing bugs. With AppDynamics Java 8 support, you’re finally able to gain some of that time back, become more efficient, and start innovating again. After implementing AppDynamics throughout their Java environment, Giri Nathan, VP of Engineering at Priceline.com, stated: “The AppDynamics APM solution increases our agility by letting us instrument any new code on the fly. We can monitor everything from servlets and Enterprise JavaBeans entry points to JDBC exit points, which gives us an end-to-end view of our transactions.” Interested to see how you can get the most out of the new Java 8 features with AppDynamics? Start a FREE trial now! ...
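As referenced in the Flexible Code section above, the parameterized filter might look like this. Again a reconstruction of my own: the Status type, its fields, and all method names are assumptions, not the author's API:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class InterestingTweets {

    // Stand-in for the timeline status type
    static class Status {
        final String text;
        final int retweetCount;
        Status(String text, int retweetCount) {
            this.text = text;
            this.retweetCount = retweetCount;
        }
        @Override public String toString() { return text; }
    }

    // The filter criteria are supplied by the caller, e.g. built from UI selections
    static List<Status> filterTimeline(List<Status> timeline, Predicate<Status> criteria) {
        return timeline.stream().filter(criteria).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Status> timeline = Arrays.asList(
                new Status("this new APM tool is awesome", 3),
                new Status("nothing to see here", 0));
        // "Interesting" per the article: retweeted more than once and mentions "awesome"
        Predicate<Status> interesting =
                s -> s.retweetCount > 1 && s.text.contains("awesome");
        System.out.println(filterTimeline(timeline, interesting)); // [this new APM tool is awesome]
    }
}
```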

Working with GZIP and compressed data

Abstract

We all know what it means to zip a file with zip or gzip. But using zipped files in Java is not quite as straightforward as you might think, especially if you are not working directly with files but rather with compressing streaming data. We’ll go through:

- how to convert a String into a compressed/zipped byte array and vice versa
- creating utility functions for reading and writing files without having to know in advance whether the file or stream is gzip’d or not

The basics

So why would you want to zip anything? Quite simply because it is a great way to cut down the amount of data that you have to ship across a network or store to disk, and therefore to increase the speed of the operation. A typical text file or message can be reduced by a factor of 10 or more, depending on the nature of your document. Of course you will have to factor in the cost of zipping and unzipping, but when you have a large amount of data those costs are unlikely to be significant.

Does Java support this? Yes, Java supports reading and writing gzip files in the java.util.zip package. It also supports zip files, as well as data inflating and deflating via the popular ZLIB compression library.

How do I compress/uncompress a Java String? Here are two methods using the built-in compressor, as well as a method using GZIP. Using the DeflaterOutputStream is the easiest way:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

enum StringCompressor {
    ;
    public static byte[] compress(String text) {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try {
            OutputStream out = new DeflaterOutputStream(baos);
            out.write(text.getBytes("UTF-8"));
            out.close();
        } catch (IOException e) {
            throw new AssertionError(e);
        }
        return baos.toByteArray();
    }

    public static String decompress(byte[] bytes) {
        InputStream in = new InflaterInputStream(new ByteArrayInputStream(bytes));
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try {
            byte[] buffer = new byte[8192];
            int len;
            while ((len = in.read(buffer)) > 0)
                baos.write(buffer, 0, len);
            return new String(baos.toByteArray(), "UTF-8");
        } catch (IOException e) {
            throw new AssertionError(e);
        }
    }
}
```

If you want to use the Deflater/Inflater directly:

```java
import java.util.zip.Deflater;
import java.util.zip.Inflater;

enum StringCompressor2 {
    ;
    public static byte[] compress(String text) throws Exception {
        // Buffer just needs to be large enough for the compressed output
        // (its exact size was lost from the original listing)
        byte[] output = new byte[text.length() * 2 + 64];
        Deflater compresser = new Deflater();
        compresser.setInput(text.getBytes("UTF-8"));
        compresser.finish();
        int compressedDataLength = compresser.deflate(output);
        byte[] dest = new byte[compressedDataLength];
        System.arraycopy(output, 0, dest, 0, compressedDataLength);
        return dest;
    }

    public static String decompress(byte[] bytes) throws Exception {
        Inflater decompresser = new Inflater();
        decompresser.setInput(bytes, 0, bytes.length);
        byte[] result = new byte[bytes.length * 10];
        int resultLength = decompresser.inflate(result);
        decompresser.end();

        // Decode the bytes into a String
        return new String(result, 0, resultLength, "UTF-8");
    }
}
```

Here’s how to do it using GZIP:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.StringWriter;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

enum StringGZipper {
    ;
    private static String ungzip(byte[] bytes) throws Exception {
        InputStreamReader isr = new InputStreamReader(
                new GZIPInputStream(new ByteArrayInputStream(bytes)), StandardCharsets.UTF_8);
        StringWriter sw = new StringWriter();
        char[] chars = new char[1024];
        for (int len; (len = isr.read(chars)) > 0; ) {
            sw.write(chars, 0, len);
        }
        return sw.toString();
    }

    private static byte[] gzip(String s) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        GZIPOutputStream gzip = new GZIPOutputStream(bos);
        OutputStreamWriter osw = new OutputStreamWriter(gzip, StandardCharsets.UTF_8);
        osw.write(s);
        osw.close();
        return bos.toByteArray();
    }
}
```

How to decode a stream of bytes to allow for both GZip and normal streams: the code below will turn a stream of bytes into a String (dump) without having to know in advance whether the stream was zipped or not.

```java
// bytes and length come from the surrounding context
String dump;
if (isGZIPStream(bytes)) {
    InputStreamReader isr = new InputStreamReader(
            new GZIPInputStream(new ByteArrayInputStream(bytes)), StandardCharsets.UTF_8);
    StringWriter sw = new StringWriter();
    char[] chars = new char[1024];
    for (int len; (len = isr.read(chars)) > 0; ) {
        sw.write(chars, 0, len);
    }
    dump = sw.toString();
} else {
    dump = new String(bytes, 0, length, StandardCharsets.UTF_8);
}
```

This is the implementation of the isGZIPStream method. Reveals the truth about what’s behind GZIP_MAGIC!

```java
public static boolean isGZIPStream(byte[] bytes) {
    return bytes[0] == (byte) GZIPInputStream.GZIP_MAGIC
        && bytes[1] == (byte) (GZIPInputStream.GZIP_MAGIC >>> 8);
}
```

This is a simple way to read a file without knowing whether it was zipped or not (relying on the extension .gz):

```java
static Stream<String> getStream(String dir, @NotNull String fileName) throws IOException {
    File file = new File(dir, fileName);
    InputStream in;
    if (file.exists()) {
        in = new FileInputStream(file);
    } else {
        file = new File(dir, fileName + ".gz");
        in = new GZIPInputStream(new FileInputStream(file));
    }
    return new BufferedReader(new InputStreamReader(in)).lines();
}
```

Reference: Working with GZIP and compressed data from our JCG partner Daniel Shaya at the Rational Java blog....
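A quick round-trip check of the helpers above (my own usage sketch; it assumes the StringCompressor enum from this article is on the classpath):

```java
public class RoundTripDemo {
    public static void main(String[] args) throws Exception {
        String text = "Hello, hello, hello: compression loves repetition!";

        // Deflate round trip via the StringCompressor enum above
        byte[] deflated = StringCompressor.compress(text);
        System.out.println(text.length() + " chars -> " + deflated.length + " bytes");
        System.out.println(StringCompressor.decompress(deflated).equals(text)); // true
    }
}
```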

Transform Your SQL Data into Charts Using jOOQ and JavaFX

In the recent past, we’ve shown how Java 8 and functional programming will bring a new perspective to Java developers when it comes to functional data transformation of SQL data using jOOQ and Java 8 lambdas and Streams. Today, we take this a step further and transform the data into JavaFX XYChart.Series to produce nice-looking bar charts from our data.

Setting up the database

We’re going to be using a small subset of the World Bank’s Open Data again, in a PostgreSQL database. The data that we’re using is this here:

```sql
DROP SCHEMA IF EXISTS world;

CREATE SCHEMA world;

CREATE TABLE world.countries (
  code           CHAR(2)        NOT NULL,
  year           INT            NOT NULL,
  gdp_per_capita DECIMAL(10, 2) NOT NULL,
  govt_debt      DECIMAL(10, 2) NOT NULL
);

INSERT INTO world.countries
VALUES ('CA', 2009, 40764, 51.3),
       ('CA', 2010, 47465, 51.4),
       ('CA', 2011, 51791, 52.5),
       ('CA', 2012, 52409, 53.5),
       ('DE', 2009, 40270, 47.6),
       ('DE', 2010, 40408, 55.5),
       ('DE', 2011, 44355, 55.1),
       ('DE', 2012, 42598, 56.9),
       ('FR', 2009, 40488, 85.0),
       ('FR', 2010, 39448, 89.2),
       ('FR', 2011, 42578, 93.2),
       ('FR', 2012, 39759, 103.8),
       ('GB', 2009, 35455, 121.3),
       ('GB', 2010, 36573, 85.2),
       ('GB', 2011, 38927, 99.6),
       ('GB', 2012, 38649, 103.2),
       ('IT', 2009, 35724, 121.3),
       ('IT', 2010, 34673, 119.9),
       ('IT', 2011, 36988, 113.0),
       ('IT', 2012, 33814, 131.1),
       ('JP', 2009, 39473, 166.8),
       ('JP', 2010, 43118, 174.8),
       ('JP', 2011, 46204, 189.5),
       ('JP', 2012, 46548, 196.5),
       ('RU', 2009,  8616, 8.7),
       ('RU', 2010, 10710, 9.1),
       ('RU', 2011, 13324, 9.3),
       ('RU', 2012, 14091, 9.4),
       ('US', 2009, 46999, 76.3),
       ('US', 2010, 48358, 85.6),
       ('US', 2011, 49855, 90.1),
       ('US', 2012, 51755, 93.8);
```

(see also this article here about another awesome set of SQL queries against the above data)

What we want to do now is plot the two sets of values in two different bar charts:

- Each country’s GDP per capita in each year between 2009-2012
- Each country’s debt as a percentage of its GDP in each year between 2009-2012

This will then create 8 series with four data points for each series in both charts. In addition to the above, we’d like to order the series among themselves by the average projected value between 2009-2012, such that the series, and thus the countries, can be compared easily. This is obviously easier to explain visually via the resulting chart than in text, so stay tuned until the end of the article.

Collecting the data with jOOQ and JavaFX

The query that we would write to calculate the above data series would look as follows in plain SQL:

```sql
select
  COUNTRIES.YEAR,
  COUNTRIES.CODE,
  COUNTRIES.GOVT_DEBT
from
  COUNTRIES
join (
  select
    COUNTRIES.CODE,
    avg(COUNTRIES.GOVT_DEBT) avg
  from
    COUNTRIES
  group by
    COUNTRIES.CODE
) c1
on COUNTRIES.CODE = c1.CODE
order by
  avg asc,
  COUNTRIES.CODE asc,
  COUNTRIES.YEAR asc
```

In other words, we’ll simply select the relevant columns from the COUNTRIES table, and we’ll self-join the average projected value per country, such that we can order the result by that average. The same query could be written using window functions. We’ll get to that later on.
The code that we’ll write to create such a bar chart with jOOQ and JavaFX is the following:

```java
CategoryAxis xAxis = new CategoryAxis();
NumberAxis yAxis = new NumberAxis();
xAxis.setLabel("Country");
yAxis.setLabel("% of GDP");

BarChart<String, Number> bc = new BarChart<>(xAxis, yAxis);
bc.setTitle("Government Debt");
bc.getData().addAll(

    // SQL data transformation, executed in the DB
    // -------------------------------------------
    DSL.using(connection)
       .select(
           COUNTRIES.YEAR,
           COUNTRIES.CODE,
           COUNTRIES.GOVT_DEBT)
       .from(COUNTRIES)
       .join(
           table(
               select(
                   COUNTRIES.CODE,
                   avg(COUNTRIES.GOVT_DEBT).as("avg"))
               .from(COUNTRIES)
               .groupBy(COUNTRIES.CODE)
           ).as("c1")
       )
       .on(COUNTRIES.CODE.eq(
           field(
               name("c1", COUNTRIES.CODE.getName()),
               String.class
           )
       ))

       // order countries by their average projected value
       .orderBy(
           field(name("avg")),
           COUNTRIES.CODE,
           COUNTRIES.YEAR)

       // The result produced by the above statement looks like this:
       // +----+----+---------+
       // |year|code|govt_debt|
       // +----+----+---------+
       // |2009|RU  |     8.70|
       // |2010|RU  |     9.10|
       // |2011|RU  |     9.30|
       // |2012|RU  |     9.40|
       // |2009|CA  |    51.30|
       // +----+----+---------+

       // Java data transformation, executed in app memory
       // ------------------------------------------------

       // Group results by year, keeping sort order in place
       .fetchGroups(COUNTRIES.YEAR)

       // The generic type of this is inferred:
       // Stream<Entry<Integer, Result<Record3<BigDecimal, String, Integer>>>>
       .entrySet()
       .stream()

       // Map entries into { Year -> Projected value }
       .map(entry -> new XYChart.Series<>(
           entry.getKey().toString(),
           observableArrayList(

               // Map records into a chart Data
               entry.getValue().map(country -> new XYChart.Data<String, Number>(
                   country.getValue(COUNTRIES.CODE),
                   country.getValue(COUNTRIES.GOVT_DEBT)
               ))
           )
       ))
       .collect(toList())
);
```

The interesting thing here is really that we can fetch data from the database and, later on, transform it into JavaFX data structures all in one go. The whole thing is almost a single Java statement.

SQL and Java is cleanly separated

As we’ve blogged on this blog before, there is a very important difference when comparing the above approach to LINQ or to JPQL’s DTO fetching capabilities. The SQL query is cleanly separated from the Java in-memory data transformation, even if we express the whole transformation in one single statement. We want to be as precise as possible when expressing our SQL query, so that the database can calculate the optimal execution plan. Only once we have materialised our data set does the Java 8 Stream transformation kick in. The importance of this is made clear when we change the above SQL-92 compatible query into a SQL-1999 compatible one that makes use of awesome window functions. The jOOQ part of the above statement could be replaced by the following query:

```java
DSL.using(connection)
   .select(
       COUNTRIES.YEAR,
       COUNTRIES.CODE,
       COUNTRIES.GOVT_DEBT)
   .from(COUNTRIES)
   .orderBy(
       avg(COUNTRIES.GOVT_DEBT)
           .over(partitionBy(COUNTRIES.CODE)),
       COUNTRIES.CODE,
       COUNTRIES.YEAR);
```

… or in SQL:

```sql
select
  COUNTRIES.YEAR,
  COUNTRIES.CODE,
  COUNTRIES.GOVT_DEBT
from
  COUNTRIES
order by
  avg(COUNTRIES.GOVT_DEBT) over (partition by COUNTRIES.CODE),
  COUNTRIES.CODE,
  COUNTRIES.YEAR
```

As you can see, staying in control of your actual SQL statement is of the essence when you run such reports. There is no way you could have refactored ordering via nested selects into a much more efficient ordering via window functions as easily as this. Let alone refactoring dozens of lines of Java sorting logic. Yep.
It’s hard to beat the beauty of window functions. If we add some additional JavaFX boilerplate to put the chart into a Pane, a Scene, and a Stage (a minimal sketch of that boilerplate follows below), we’ll get these nice-looking charts:

Play with it yourself

You can download and run the above example yourself. Simply download the following example and run mvn clean install: https://github.com/jOOQ/jOOQ/tree/master/jOOQ-examples/jOOQ-javafx-example

Reference: Transform Your SQL Data into Charts Using jOOQ and JavaFX from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....
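For reference, the JavaFX boilerplate alluded to above might look like this minimal sketch (my own, not from the original article; a single hard-coded series stands in for the jOOQ-built ones):

```java
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.chart.BarChart;
import javafx.scene.chart.CategoryAxis;
import javafx.scene.chart.NumberAxis;
import javafx.scene.chart.XYChart;
import javafx.stage.Stage;

public class ChartApp extends Application {

    @Override
    public void start(Stage stage) {
        CategoryAxis xAxis = new CategoryAxis();
        NumberAxis yAxis = new NumberAxis();
        xAxis.setLabel("Country");
        yAxis.setLabel("% of GDP");

        BarChart<String, Number> bc = new BarChart<>(xAxis, yAxis);
        bc.setTitle("Government Debt");

        // In the real example the series come from the jOOQ query shown above;
        // hard-coded data points keep this sketch self-contained
        XYChart.Series<String, Number> series = new XYChart.Series<>();
        series.setName("2012");
        series.getData().add(new XYChart.Data<>("RU", 9.4));
        series.getData().add(new XYChart.Data<>("CA", 53.5));
        bc.getData().add(series);

        stage.setScene(new Scene(bc, 800, 600));
        stage.setTitle("Government Debt");
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}
```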

Good Microservices Architecture = Death of the Enterprise Service Bus (ESB)?

These days, it seems like everybody is talking about microservices. You can read a lot about it in hundreds of articles and blog posts, but my recommended starting point would be this article by Martin Fowler, which initiated the huge discussion about this new architectural concept. This article is about the challenges, requirements and best practices for creating a good microservices architecture, and what role an Enterprise Service Bus (ESB) plays in this game.

Branding and Marketing: EAI vs. SOA vs. ESB vs. Microservices

Let’s begin with a little bit of history about Service-oriented Architecture (SOA) and the Enterprise Service Bus, to find out why microservices have become so trendy. Many years ago, software vendors offered middleware for Enterprise Application Integration (EAI), often called an EAI broker or EAI backbone. The middleware was a central hub. Back then, SOA was just emerging, and the tool of choice was an ESB. Many vendors just rebranded their EAI tool as an ESB; nothing else changed. Some time later, new ESBs came up, built not around a central hub but around distributed agents. So "ESB" has served as a label for different kinds of middleware. Many people do not like the term "ESB", as they only know the central kind, not the distributed kind. Therefore, vendors often avoid talking about an ESB. They cannot sell a central integration middleware anymore, because everything has to be distributed and flexible. Today, you can buy a service delivery platform; in the future, it might be a microservices platform or something similar. In some cases, the code base might still be the same as the EAI broker’s of 20 years ago. What all these products have in common is that you can solve integration problems by implementing "Enterprise Integration Patterns".

To summarise the history of branding and marketing of integration products: pay no attention to sexy, impressive-sounding names! Instead, make looking at the architecture and features the top priority. Ask yourself what business problems you need to solve, and evaluate which architecture and product might help you best. It is amazing how many people still think of a "central ESB hub" when I say "ESB".

Requirements for a Good Microservices Architecture

Six key requirements to overcome those challenges and leverage the full value of microservices:

1. Services contract
2. Exposing microservices from existing applications
3. Discovery of services
4. Coordination across services
5. Managing complex deployments and their scalability
6. Visibility across services

The full article discusses these six requirements in detail, and also answers the question of how a modern ESB relates to a microservices architecture. Read the full article here: Do Good Microservices Architectures Spell the Death of the Enterprise Service Bus?

Reference: Good Microservices Architecture = Death of the Enterprise Service Bus (ESB)? from our JCG partner Kai Waehner at the Blog about Java EE / SOA / Cloud Computing blog....

Career Stagnation – Early Detection and Treatment

Have you ever been on LinkedIn and stumbled on one of their work anniversary announcements? In case you haven’t, they look something like the example shown in the original post. The announcements are generated by LinkedIn and typically followed by a predictable handful of likes, congratulatory words, and positive sentiments. I’ve yet to see a comment that genuinely reflects my knee-jerk reaction to at least some of these posts. Longevity and career stability obviously don’t have to be a bad thing, but I’ve seen far too many programmers become so comfortable in a job that they don’t bother to take a look outside to see what changes are going on outside of their bubble. When these professionals eventually decide to seek new employment, usually by way of some triggering event such as a layoff, they are usually stunned by how much the hiring landscape has changed since their last job search. This is a conversation I’ve had hundreds of times, where my role begins as educator (“JavaScript is actually pretty popular these days.”), morphs into advisor (“You might want to brush up your web skills.”), and eventually lands on crisis counselor (“It’s going to be OK.”).

Stagnation: How it happens

The formula for career stagnation is pretty simple. When one’s basic needs are met or even exceeded, they stay put. When provided with fair compensation, a tolerable work environment, and a comfortable chair, many technologists go about their work without finding the need to pay much attention to trends in the industry that don’t affect them. In an industry that changes rapidly, the result is marketability problems. For some in our field, new challenges, learning, and change are basic needs. Even if these types are paid well and given other perks, they will be likely to investigate trends and possibly seek new employers. Their natural curiosity protects them from stagnation. Managers hire technologists who possess the skills needed to perform jobs at their company, even if those skills are not in high demand on the overall job market. Those working for companies staying current with new tools and offering multiple challenging projects have no reason to fear the negative impact stagnation has on marketability. For those working in static environments with little change and a dependence on less popular or proprietary technologies, the burden of maintaining marketability is their own.

Avoidance, Detection, and Treatment

The biggest problem with stagnation is that technologists don’t realize it’s even an issue until it’s too late to remedy. Thankfully, there are ways to identify and treat this common problem.

- Diagnose: Set a reminder on your calendar to update your résumé and/or LinkedIn profile every six months. Are you able to add any projects or skills to your résumé? Are there any skills on your résumé that you no longer feel comfortable including for fear of being exposed as a fraud? How many six-month periods have passed since you were able to mark a significant accomplishment?
- Test: Send your résumé to a past co-worker (ideally someone who has been on the job market a bit) or a recruiter you trust, and ask if your current skills would get you interviewed. Make it clear that you aren’t actively seeking employment, but are just interested in an assessment.
- Read and get out: Much of stagnation is related to technologists being insulated professionally and not paying attention to trends outside their offices. Reading technology blogs or article aggregators for as little as an hour per week will give you an idea of whether you are building marketable skills. Having any involvement with others in the industry who are not co-workers is perhaps the most valuable method of prevention. Some office environments may fall prey to groupthink and hive-mind tendencies, so communication of any kind with the outside world is useful.
- If you aren’t getting it at work, get it at home: If your employer doesn’t provide you with the ability to work on challenging projects and relevant technologies, side projects are one solution if you have the time. Employers today tend to appreciate self-taught skills and impressive independent development efforts as much as on-the-job experience.
- Leave: Leaving your job doesn’t need to be the first solution to career stagnation, but for many it’s the most effective. When evaluating new employers, consider whether stagnation and marketability issues may arise again in the near future.

Marketability is a complex concept that depends upon several independent factors that are difficult to predict. Stagnation is easier to diagnose than it is to treat. Early detection is the key.

Reference: Career Stagnation – Early Detection and Treatment from our JCG partner Dave Fecak at the Job Tips For Geeks blog....