What's New Here?


10 JDK 7 Features to Revisit, Before You Welcome Java 8

It's been almost a month since Java 8 was released, and I am sure all of you are exploring the new features of JDK 8. But before you completely delve into Java 8, it's time to revisit some of the cool features introduced in Java 7. If you remember, Java 6 was light on language features; it was all about JVM changes and performance. JDK 7, however, did introduce some cool features which improved developers' day-to-day work. Why am I writing this post now? Why am I talking about Java 1.7 when everybody is talking about Java 8? Well, I think not all Java developers are familiar with the changes introduced in JDK 7, and what better time to revisit an earlier version than just before welcoming a new one. I don't see automatic resource management used by developers in daily life, even though IDEs have content assist for it. Though I do see programmers using String in switch and the diamond operator for type inference, very little is still known about the fork/join framework, catching multiple exceptions in one catch block, or using underscores in numeric literals. So I took this opportunity to write a summary-style post to revise these convenient changes and adopt them into our daily programming life. There are a couple of good changes in NIO and the new File API, and lots of others at the API level, which are also worth looking at. I am sure that, combined with Java 8 lambda expressions, these features will result in much better and cleaner code.

Type Inference

JDK 1.7 introduced a new operator <>, known as the diamond operator, which makes type inference available for constructors as well. Prior to Java 7, type inference was only available for methods, and Joshua Bloch rightly predicted in Effective Java, 2nd Edition, that it would become available for constructors too. Before JDK 7, you had to type more to specify types on both the left and right hand side of an object creation expression, but now the type arguments are only needed on the left hand side, as shown in the example below.

Prior to JDK 7:

Map<String, List<String>> employeeRecords = new HashMap<String, List<String>>();
List<Integer> primes = new ArrayList<Integer>();

In JDK 7:

Map<String, List<String>> employeeRecords = new HashMap<>();
List<Integer> primes = new ArrayList<>();

So you have to type less in Java 7 while working with collections, where we heavily use generics. See here for more detailed information on the diamond operator in Java.

String in Switch

Before JDK 7, only integral types could be used as the selector for a switch-case statement. In JDK 7, you can use a String object as the selector. For example:

String state = "NEW";
switch (state) {
    case "NEW":
        System.out.println("Order is in NEW state");
        break;
    case "CANCELED":
        System.out.println("Order is Cancelled");
        break;
    case "REPLACE":
        System.out.println("Order is replaced successfully");
        break;
    case "FILLED":
        System.out.println("Order is filled");
        break;
    default:
        System.out.println("Invalid");
}

The equals() and hashCode() methods of java.lang.String are used in the comparison, which is case-sensitive. The benefit of using String in switch is that the Java compiler can generate more efficient code than for a nested if-then-else statement.
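To make that efficiency claim concrete: the compiler essentially dispatches on hashCode() first and uses equals() only to guard against collisions. The following is a rough, hand-written approximation of the idea (an illustration, not the literal code the compiler generates, which uses precomputed hash constants in a lookupswitch):

public class StringSwitchDesugar {
    public static void main(String[] args) {
        String state = "NEW";
        // Step 1: resolve the string to a small index using hashCode() first,
        // with equals() as a guard against hash collisions.
        int index = -1;
        int h = state.hashCode();
        if (h == "NEW".hashCode() && state.equals("NEW")) index = 0;
        else if (h == "CANCELED".hashCode() && state.equals("CANCELED")) index = 1;
        else if (h == "REPLACE".hashCode() && state.equals("REPLACE")) index = 2;
        else if (h == "FILLED".hashCode() && state.equals("FILLED")) index = 3;
        // Step 2: an ordinary integer switch on the resolved index.
        switch (index) {
            case 0: System.out.println("Order is in NEW state"); break;
            case 1: System.out.println("Order is Cancelled"); break;
            case 2: System.out.println("Order is replaced successfully"); break;
            case 3: System.out.println("Order is filled"); break;
            default: System.out.println("Invalid");
        }
    }
}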
See here for more detailed information on how to use String in a switch-case statement.

Automatic Resource Management

Before JDK 7, we needed to use a finally block to ensure that a resource was closed regardless of whether the try statement completed normally or abruptly. For example, while reading files and streams, we need to close them in a finally block, which results in lots of boilerplate and messy code, as shown below:

public static void main(String args[]) {
    FileInputStream fin = null;
    BufferedReader br = null;
    try {
        fin = new FileInputStream("info.xml");
        br = new BufferedReader(new InputStreamReader(fin));
        if (br.ready()) {
            String line1 = br.readLine();
            System.out.println(line1);
        }
    } catch (FileNotFoundException ex) {
        System.out.println("Info.xml is not found");
    } catch (IOException ex) {
        System.out.println("Can't read the file");
    } finally {
        try {
            if (fin != null) fin.close();
            if (br != null) br.close();
        } catch (IOException ie) {
            System.out.println("Failed to close files");
        }
    }
}

Look at this code: how many lines of boilerplate? Now, in Java 7, you can use the try-with-resources feature to automatically close resources which implement the AutoCloseable or Closeable interfaces, e.g. streams, files, socket handles, database connections, etc. JDK 7 introduces a try-with-resources statement, which ensures that each of the resources declared in try(...) is closed at the end of the statement by calling the close() method of AutoCloseable. The same example in Java 7 looks like the much more concise and cleaner code below:

public static void main(String args[]) {
    try (FileInputStream fin = new FileInputStream("info.xml");
         BufferedReader br = new BufferedReader(new InputStreamReader(fin))) {
        if (br.ready()) {
            String line1 = br.readLine();
            System.out.println(line1);
        }
    } catch (FileNotFoundException ex) {
        System.out.println("Info.xml is not found");
    } catch (IOException ex) {
        System.out.println("Can't read the file");
    }
}

Since Java is taking care of closing opened resources, including files and streams, there may be no more leaking of file descriptors and probably an end to file-descriptor errors. Even JDBC 4.1 has been retrofitted to be AutoCloseable too.

Fork/Join Framework

The fork/join framework is an implementation of the ExecutorService interface that allows you to take advantage of the multiple processors available in modern servers. It is designed for work that can be broken into smaller pieces recursively. The goal is to use all the available processing power to enhance the performance of your application. As with any ExecutorService implementation, the fork/join framework distributes tasks to worker threads in a thread pool. The fork/join framework is distinct because it uses a work-stealing algorithm, which is very different from a producer-consumer algorithm: worker threads that run out of things to do can steal tasks from other threads that are still busy. The centre of the fork/join framework is the ForkJoinPool class, an extension of the AbstractExecutorService class. ForkJoinPool implements the core work-stealing algorithm and can execute ForkJoinTask processes. You can wrap code in a ForkJoinTask subclass like RecursiveTask (which can return a result) or RecursiveAction. See here for some more information on the fork/join framework in Java; a small illustrative task follows below.
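As a taste of the API, here is a minimal sketch of my own (not from the original post) that sums a long array by recursively splitting it; ForkJoinPool, RecursiveTask, fork(), join() and compute() are the real JDK 7 API:

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000; // below this size, sum sequentially
    private final long[] numbers;
    private final int start, end;

    public SumTask(long[] numbers, int start, int end) {
        this.numbers = numbers;
        this.start = start;
        this.end = end;
    }

    @Override
    protected Long compute() {
        if (end - start <= THRESHOLD) {
            long sum = 0;
            for (int i = start; i < end; i++) sum += numbers[i];
            return sum;
        }
        int mid = (start + end) / 2;
        SumTask left = new SumTask(numbers, start, mid);
        SumTask right = new SumTask(numbers, mid, end);
        left.fork();                        // run the left half asynchronously
        long rightResult = right.compute(); // compute the right half in this thread
        return left.join() + rightResult;   // wait for the left half and combine
    }

    public static void main(String[] args) {
        long[] numbers = new long[1_000_000];
        for (int i = 0; i < numbers.length; i++) numbers[i] = i;
        long sum = new ForkJoinPool().invoke(new SumTask(numbers, 0, numbers.length));
        System.out.println("Sum: " + sum);
    }
}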
Underscore in Numeric Literals

In JDK 7, you can insert underscores ('_') between the digits of a numeric literal (integral and floating-point literals) to improve readability. This is especially valuable for people who use large numbers in source files, and may be useful in the finance and computing domains. For example:

int billion = 1_000_000_000;  // 10^9
long creditCardNumber = 1234_4567_8901_2345L; // 16-digit number
long ssn = 777_99_8888L;
double pi = 3.1415_9265;
float pif = 3.14_15_92_65f;

You can put underscores at convenient points to make the literal more readable; for example, for large amounts, putting an underscore between every three digits makes sense, and for credit card numbers, which are 16 digits long, putting an underscore after every 4th digit makes sense, as that is how they are printed on cards. By the way, remember that you cannot put an underscore just after a decimal point, or at the beginning or the end of a number. For example, the following numeric literals are invalid because of wrong placement of the underscore:

double pi = 3._1415_9265;  // underscore just after the decimal point
long creditcardNum = 1234_4567_8901_2345_L; // underscore at the end of the number
long ssn = _777_99_8888L;  // underscore at the beginning

See my post about how to use underscores in numeric literals for more information and use cases.

Catching Multiple Exception Types in a Single Catch Block

In JDK 7, a single catch block can handle more than one exception type. For example, before JDK 7, you needed two catch blocks to catch two exception types, even though both performed an identical task:

try {
    ......
} catch (ClassNotFoundException ex) {
    ex.printStackTrace();
} catch (SQLException ex) {
    ex.printStackTrace();
}

In JDK 7, you can use one single catch block, with the exception types separated by '|':

try {
    ......
} catch (ClassNotFoundException | SQLException ex) {
    ex.printStackTrace();
}

By the way, just remember that the alternatives in a multi-catch statement cannot be related by subclassing. For example, the multi-catch statement below produces a compile-time error, because FileNotFoundException is a subclass of IOException:

try {
    ......
} catch (FileNotFoundException | IOException ex) {  // compile-time error
    ex.printStackTrace();
}

The compiler reports something like: java.io.FileNotFoundException is a subclass of alternative java.io.IOException. See here to learn more about improved exception handling in Java SE 7.

Binary Literals with Prefix "0b"

In JDK 7, you can express literal values in binary with the prefix '0b' (or '0B') for the integral types (byte, short, int and long), similar to the C/C++ language. Before JDK 7, you could only use octal values (with prefix '0') or hexadecimal values (with prefix '0x' or '0X').

int mask = 0b01010000101;

or, even better,

int binary = 0B0101_0000_1010_0010_1101_0000_1010_0010;

Java NIO 2.0

Java SE 7 introduced the java.nio.file package and its related package, java.nio.file.attribute, which provide comprehensive support for file I/O and for accessing the default file system. It also introduced the Path class, which allows you to represent any path in the operating system. The new file system API complements the older one and provides several useful methods for checking, deleting, copying, and moving files. For example, you can now check whether a file is hidden in Java. You can also create symbolic and hard links from Java code. The new JDK 7 file API is also capable of searching for files using wildcards, and you also get support for watching a directory for changes. I would recommend checking the Javadoc of the new file package to learn more about this interesting and useful feature; a small sketch of the API follows below.
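To give a flavour of the new API, here is a minimal sketch of my own (the file names are placeholders, but Paths, Files and WatchService are the real java.nio.file API):

import java.nio.file.*;

public class Nio2Demo {
    public static void main(String[] args) throws Exception {
        Path source = Paths.get("info.xml");        // represent a path
        Path backup = Paths.get("info-backup.xml");

        System.out.println("Hidden? " + Files.isHidden(source));          // check attributes
        Files.copy(source, backup, StandardCopyOption.REPLACE_EXISTING);  // copy a file

        // Watch the current directory for newly created files.
        WatchService watcher = FileSystems.getDefault().newWatchService();
        Paths.get(".").register(watcher, StandardWatchEventKinds.ENTRY_CREATE);
        WatchKey key = watcher.take(); // blocks until an event occurs
        for (WatchEvent<?> event : key.pollEvents()) {
            System.out.println("Created: " + event.context());
        }
    }
}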
G1 Garbage Collector

JDK 7 introduced a new garbage collector known as G1, which is short for "garbage first". The G1 garbage collector performs clean-up where there is the most garbage. To achieve this, it splits the Java heap memory into multiple regions, as opposed to the three regions used prior to Java 7 (new, old and permgen space). It is said that G1 is quite predictable and provides greater throughput for memory-intensive applications.

More Precise Rethrowing of Exceptions

The Java SE 7 compiler performs a more precise analysis of rethrown exceptions than earlier releases of Java SE. This enables you to specify more specific exception types in the throws clause of a method declaration. Before JDK 7, rethrowing an exception was treated as throwing the type of the catch parameter. For example, suppose your try block can throw both ParseException and IOException. In order to catch all exceptions and rethrow them, you would have to catch Exception and declare your method as throwing Exception. This is a sort of obscure, non-precise throw, because you are throwing a general Exception type (instead of the specific ones), and statements calling your method need to catch this general Exception. This will be clearer from the following example of exception handling in code prior to Java 1.7:

public void obscure() throws Exception {
    try {
        new FileInputStream("abc.txt").read();
        new SimpleDateFormat("ddMMyyyy").parse("12-03-2014");
    } catch (Exception ex) {
        System.out.println("Caught exception: " + ex.getMessage());
        throw ex;
    }
}

From JDK 7 onwards, you can be more precise when declaring the types of exception in the throws clause of a method. This precision comes from the fact that, if you rethrow an exception from a catch block, you are actually throwing an exception type which: your try block can throw, has not been handled by any previous catch block, and is a subtype of one of the exceptions declared as the catch parameter. This leads to improved checking for rethrown exceptions. You can be more precise about the exceptions being thrown from the method, and you can handle them a lot better on the client side, as shown in the following example:

public void precise() throws ParseException, IOException {
    try {
        new FileInputStream("abc.txt").read();
        new SimpleDateFormat("ddMMyyyy").parse("12-03-2014");
    } catch (Exception ex) {
        System.out.println("Caught exception: " + ex.getMessage());
        throw ex;
    }
}

The Java SE 7 compiler allows you to specify the exception types ParseException and IOException in the throws clause of the precise() method declaration because, even though we catch and rethrow java.lang.Exception (the superclass of all checked exceptions), the compiler knows that only the declared types can escape the try block. In some places you will also see the final keyword on the catch parameter, but that is not mandatory any more.

That's all about what you can revise in JDK 7. All these new features of Java 7 are very helpful in your goal towards clean code and developer productivity. With lambda expressions introduced in Java 8, this march towards cleaner code in Java has reached another milestone. Let me know if you think I have left out any useful feature of Java 1.7 which you think should be here. P.S. If you love books, then you may like the Java 7 New Features Cookbook from Packt Publishing as well.

Reference: 10 JDK 7 Features to Revisit, Before You Welcome Java 8 from our JCG partner Javin Paul at the Javarevisited blog.

Android Shake to Refresh tutorial

In this post we want to explore another way to refresh our app UI, called Shake to Refresh. We all know the pull-to-refresh pattern that is implemented in several apps: we pull our finger down along the screen and the UI is refreshed. Even though this pattern is very useful, we can use another pattern to refresh our UI, based on smartphone sensors; we can call it Shake to Refresh. Instead of pulling a finger down, we shake our smartphone to refresh the UI.

Implementation

In order to enable our app to support the Shake to Refresh feature, we have to use smartphone sensors, specifically the motion sensors: the accelerometer. If you want more information on how to use sensors, you can have a look here. As said, we want the user to shake the smartphone to refresh, and at the same time we don't want the refresh process to start accidentally, or whenever the user merely moves his smartphone. So we have to implement some controls to be sure that the user is shaking the smartphone purposely. On the other hand, we don't want to implement this logic in the class that handles the UI, because it is not advisable to mix UI logic with other concerns, and by using a separate class we can re-use this "pattern" in other contexts. So we will create another class called ShakeEventManager. This class has to listen to sensor events:

public class ShakeEventManager implements SensorEventListener {
    ..
}

so it implements SensorEventListener. Then we have to look up the accelerometer sensor and register our class as an event listener:

public void init(Context ctx) {
    sManager = (SensorManager) ctx.getSystemService(Context.SENSOR_SERVICE);
    s = sManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
    register();
}

and then:

public void register() {
    sManager.registerListener(this, s, SensorManager.SENSOR_DELAY_NORMAL);
}

To trigger the refresh event on the UI, some conditions must be verified; these conditions guarantee that the user is purposely shaking his smartphone. The conditions are:

- The acceleration must be greater than a threshold level
- A fixed number of acceleration events must occur
- The time between these events must fall within a fixed time window

We will implement this logic in the onSensorChanged() method, which is called every time a new sensor value is available. The constants and fields backing these conditions are declared as shown in the sketch below.
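The original post never shows these declarations; the following is a plausible set inside ShakeEventManager (the names match the code that follows, but the values are my own assumptions and would need tuning on a real device):

// Assumed declarations; values are illustrative guesses.
private static final float ALPHA = 0.8f;                     // low-pass filter smoothing factor
private static final int MOV_THRESHOLD = 4;                  // min acceleration (m/s^2) to count a movement
private static final int MOV_COUNTS = 4;                     // movements required to trigger a shake
private static final long SHAKE_WINDOW_TIME_INTERVAL = 1000; // ms window for all movements

private final float[] gravity = new float[3]; // isolated gravity component per axis
private int counter;                          // movement events seen so far
private long firstMovTime;                    // timestamp of the first movement
private ShakeListener listener;               // callback notified on a shake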
The first step is calculating the acceleration: we are interested in the maximum acceleration value across the three axes, and we want to remove the gravity force from the sensor values. So, as stated in the official Android documentation, we first apply a low-pass filter to isolate the gravity force and then a high-pass filter:

private float calcMaxAcceleration(SensorEvent event) {
    gravity[0] = calcGravityForce(event.values[0], 0);
    gravity[1] = calcGravityForce(event.values[1], 1);
    gravity[2] = calcGravityForce(event.values[2], 2);

    float accX = event.values[0] - gravity[0];
    float accY = event.values[1] - gravity[1];
    float accZ = event.values[2] - gravity[2];

    float max1 = Math.max(accX, accY);
    return Math.max(max1, accZ);
}

where

// Low pass filter
private float calcGravityForce(float currentVal, int index) {
    return ALPHA * gravity[index] + (1 - ALPHA) * currentVal;
}

Once we know the max acceleration, we implement our logic:

@Override
public void onSensorChanged(SensorEvent sensorEvent) {
    float maxAcc = calcMaxAcceleration(sensorEvent);
    Log.d("SwA", "Max Acc [" + maxAcc + "]");
    if (maxAcc >= MOV_THRESHOLD) {
        if (counter == 0) {
            counter++;
            firstMovTime = System.currentTimeMillis();
            Log.d("SwA", "First mov..");
        } else {
            long now = System.currentTimeMillis();
            if ((now - firstMovTime) < SHAKE_WINDOW_TIME_INTERVAL)
                counter++;
            else {
                resetAllData();
                counter++;
                return;
            }
            Log.d("SwA", "Mov counter [" + counter + "]");

            if (counter >= MOV_COUNTS)
                if (listener != null)
                    listener.onShake();
        }
    }
}

Analyzing the code: we first calculate the acceleration and then check whether it is greater than the threshold value (condition 1). If it is the first movement, we save the timestamp so we can check whether the other events happen within the specified time window. Once all the conditions are satisfied, we invoke the callback method defined in the callback interface:

public static interface ShakeListener {
    public void onShake();
}

Test app

Now that we have implemented the shake event manager, we are ready to create a simple app that uses it. We can create a simple activity with a ListView that is refreshed when the shake event occurs:

public class MainActivity extends ActionBarActivity implements ShakeEventManager.ShakeListener {
    ....

    @Override
    public void onShake() {
        // We update the ListView
    }
}

The onShake() method is where we update the UI, because it is called only when the user is shaking his smartphone. Some final considerations: when the app is paused, we have to unregister the sensor listener so that it no longer listens to events; in this way we save battery. When the app is resumed, we register the listener again:

@Override
protected void onResume() {
    super.onResume();
    sd.register();
}

@Override
protected void onPause() {
    super.onPause();
    sd.deregister();
}

Reference: Android Shake to Refresh tutorial from our JCG partner Francesco Azzola at the Surviving w/ Android blog.

Programmatic Access to Sizes of Java Primitive Types

One of the first things many developers new to Java learn about is Java's basic primitive data types, their fixed (platform-independent) sizes (measured in bits or bytes in terms of two's complement), and their ranges (all numeric types in Java are signed). There are many good online resources that list these characteristics, among them the Java Tutorial lesson on Primitive Data Types, The Eight Data Types of Java, Java's Primitive Data Types, and Java Basic Data Types. Java allows one to programmatically access these characteristics of the basic Java primitive data types. Most of the primitive data types' maximum values and minimum values have been available for some time in Java via the corresponding reference types' MAX_VALUE and MIN_VALUE fields. J2SE 5 introduced a SIZE field for most of the types that provides each type's size in bits (two's complement). JDK 8 has now provided most of these classes with a new field called BYTES that presents the type's size in bytes (two's complement).

DataTypeSizes.java

package dustin.examples.jdk8;

import static java.lang.System.out;
import java.lang.reflect.Field;

/**
 * Demonstrate JDK 8's easy programmatic access to size of basic Java datatypes.
 *
 * @author Dustin
 */
public class DataTypeSizes {
    /**
     * Print values of certain fields (assumed to be constant) for provided class.
     * The fields that are printed are SIZE, BYTES, MIN_VALUE, and MAX_VALUE.
     *
     * @param clazz Class which may have static fields SIZE, BYTES, MIN_VALUE,
     *    and/or MAX_VALUE whose values will be written to standard output.
     */
    private static void printDataTypeDetails(final Class clazz) {
        out.println("\nDatatype (Class): " + clazz.getCanonicalName() + ":");
        final Field[] fields = clazz.getDeclaredFields();
        for (final Field field : fields) {
            final String fieldName = field.getName();
            try {
                switch (fieldName) {
                    case "SIZE" : // generally introduced with 1.5 (twos complement)
                        out.println("\tSize (in bits): " + field.get(null));
                        break;
                    case "BYTES" : // generally introduced with 1.8 (twos complement)
                        out.println("\tSize (in bytes): " + field.get(null));
                        break;
                    case "MIN_VALUE" :
                        out.println("\tMinimum Value: " + field.get(null));
                        break;
                    case "MAX_VALUE" :
                        out.println("\tMaximum Value: " + field.get(null));
                        break;
                    default :
                        break;
                }
            } catch (IllegalAccessException illegalAccess) {
                out.println("ERROR: Unable to reflect on field " + fieldName);
            }
        }
    }

    /**
     * Demonstrate JDK 8's ability to easily programmatically access the size of
     * basic Java data types.
     *
     * @param arguments Command-line arguments: none expected.
     */
    public static void main(final String[] arguments) {
        printDataTypeDetails(Byte.class);
        printDataTypeDetails(Short.class);
        printDataTypeDetails(Integer.class);
        printDataTypeDetails(Long.class);
        printDataTypeDetails(Float.class);
        printDataTypeDetails(Double.class);
        printDataTypeDetails(Character.class);
        printDataTypeDetails(Boolean.class);
    }
}
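If you just need the numbers and not reflective discovery, the same constants can of course be read directly; a minimal sketch:

public class DirectSizes {
    public static void main(String[] args) {
        // SIZE (bits) dates from J2SE 5; BYTES is new in JDK 8.
        System.out.println("int:  " + Integer.SIZE + " bits, " + Integer.BYTES + " bytes");
        System.out.println("long: " + Long.SIZE + " bits, " + Long.BYTES + " bytes");
        System.out.println("int range: " + Integer.MIN_VALUE + " .. " + Integer.MAX_VALUE);
    }
}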
When executed, the DataTypeSizes program above writes the following results to standard output.

The Output

Datatype (Class): java.lang.Byte:
	Minimum Value: -128
	Maximum Value: 127
	Size (in bits): 8
	Size (in bytes): 1

Datatype (Class): java.lang.Short:
	Minimum Value: -32768
	Maximum Value: 32767
	Size (in bits): 16
	Size (in bytes): 2

Datatype (Class): java.lang.Integer:
	Minimum Value: -2147483648
	Maximum Value: 2147483647
	Size (in bits): 32
	Size (in bytes): 4

Datatype (Class): java.lang.Long:
	Minimum Value: -9223372036854775808
	Maximum Value: 9223372036854775807
	Size (in bits): 64
	Size (in bytes): 8

Datatype (Class): java.lang.Float:
	Maximum Value: 3.4028235E38
	Minimum Value: 1.4E-45
	Size (in bits): 32
	Size (in bytes): 4

Datatype (Class): java.lang.Double:
	Maximum Value: 1.7976931348623157E308
	Minimum Value: 4.9E-324
	Size (in bits): 64
	Size (in bytes): 8

Datatype (Class): java.lang.Character:
	Minimum Value:

(Character's MIN_VALUE and MAX_VALUE print as unprintable characters, so they do not show up above.)

UPDATE: Note that, as Attila-Mihaly Balazs has pointed out in the comment below, the MIN_VALUE values shown for java.lang.Float and java.lang.Double above are not negative numbers, even though these constants are negative for Byte, Short, Integer, and Long. For the floating-point types Float and Double, the MIN_VALUE constant represents the smallest positive nonzero value that can be stored in those types. Although the characteristics of the Java primitive data types are readily available online, it's nice to be able to programmatically access those details easily when so desired. I like to think about the types' sizes in terms of bytes, and JDK 8 now provides the ability to see those sizes directly measured in bytes.

Reference: Programmatic Access to Sizes of Java Primitive Types from our JCG partner Dustin Marx at the Inspired by Actual Events blog.

Thoughts on The Reactive Manifesto

Reactive programming is an emerging trend in software development that has gathered a lot of enthusiasm among technology connoisseurs during the last couple of years. After studying the subject last year, I got curious enough to attend the "Principles of Reactive Programming" course on Coursera (by Odersky, Meijer and Kuhn). Reactive advocates from Typesafe and others have created The Reactive Manifesto, which tries to formulate the vocabulary for reactive programming and what it actually aims at. This post collects some reflections on the manifesto.

According to The Reactive Manifesto, systems that are reactive:

- react to events – their event-driven nature enables the following qualities
- react to load – they focus on scalability by avoiding contention on shared resources
- react to failure – they are resilient systems that are able to recover at all levels
- react to users – they honor response-time guarantees regardless of load

Event-driven

Event-driven applications are composed of components that communicate by sending and receiving events. Events are passed asynchronously, often using a push-based communication model, without the event originator blocking. A key goal is to be able to make efficient use of system resources, not tie up resources unnecessarily, and maximize resource sharing. Reactive applications are built on a distributed architecture in which message-passing provides the inter-node communication layer and location transparency for components. It also enables interfaces between components and subsystems to be based on loosely coupled design, thus allowing easier system evolution over time. Systems designed to rely on shared mutable state require data access and mutation operations to be coordinated by some concurrency control mechanism, in order to avoid data integrity issues. Concurrency control mechanisms limit the degree of parallelism in the system. Amdahl's law formulates clearly how reducing the parallelizable portion of the program code puts an upper limit on system scalability: the achievable speedup on n processors is 1 / ((1 - p) + p/n), where p is the parallelizable fraction of the work, so even with p = 0.9 the speedup can never exceed 10, no matter how many processors are added. Designs that avoid shared mutable state allow for higher degrees of parallelism, and thus for reaching higher degrees of scalability and resource sharing.

Scalable

System architecture needs to be carefully designed to scale out, as well as up, in order to be able to exploit the hardware trends of both increased node-level parallelism (increased numbers of CPUs and of physical and logical cores within a CPU) and system-level parallelism (number of nodes). Vertical and horizontal scaling should work both ways, so an elastic system will also be able to scale in and down, thereby allowing operational cost structures to be optimized for lower-demand conditions. A key building block for elasticity is a distributed architecture together with the node-to-node communication mechanism, provided by message-passing, that allows subsystems to be configured to run on the same node or on different nodes without code changes (location transparency).

Resilient

A resilient system will continue to function in the presence of failures in one or more parts of the system, and under unanticipated conditions (e.g. unexpected load). The system needs to be designed carefully to contain failures in well-defined, safe compartments to prevent failures from escalating and cascading unexpectedly and uncontrollably.

Responsive

The Reactive Manifesto characterizes the responsive quality as follows: Responsive is defined by Merriam-Webster as "quick to respond or react appropriately".
… Reactive applications use observable models, event streams and stateful clients. … Observable models enable other systems to receive events when state changes. … Event streams form the basic abstraction on which this connection is built. … Reactive applications embrace the order of algorithms by employing design patterns and tests to ensure a response event is returned in O(1) or at least O(log n) time regardless of load.

Commentary

If you've been actively following software development trends during the last couple of years, the ideas stated in the Reactive Manifesto may seem quite familiar to you. This is because the manifesto captures insights learned by the software development community in building internet-scale systems. One such set of lessons stems from problems related to having centrally stored state in distributed systems. The tradeoffs of having a strong consistency model in a distributed system have been formalized in the CAP theorem. CAP-induced insights led developers to consider alternative consistency models, such as BASE, in order to trade strong consistency guarantees for availability and partition tolerance, but also for scalability. Looser consistency models have been popularized during recent years, in particular by different breeds of NoSQL databases. An application's consistency model has a major impact on the application's scalability and availability, so it would be good to address this concern more explicitly in the manifesto. The chosen consistency model is a cross-cutting trait, over which all the application layers should uniformly agree. This concern is mentioned in the manifesto, but since it's such an important issue with subtle implications, it would be good to elaborate on it a bit more, or to refer to a more thorough discussion of the topic. Event-driven is a widely used term in programming that can take on many different meanings and has multiple variations. Since it's such an overloaded term, it would be good to define it more clearly and to characterize what exactly does and does not constitute event-driven in this context. The authors clearly have event-driven architecture (EDA) in mind, but EDA is also something that can be achieved with different approaches. The same is true for "asynchronous communication". In the Reactive Manifesto, "asynchronous communication" seems to imply message-passing, as in messaging systems or the Actor model, and not asynchronous function or method invocation. The Reactive Manifesto adopts and combines ideas from many movements, from the CAP theorem and NoSQL to event-driven architecture. It captures and amalgamates valuable lessons learned by the software development community in building internet-scale applications. The manifesto makes a lot of sense, and I can subscribe to the ideas presented in it. However, in a few places the terminology could be elaborated a bit and made more approachable to developers who don't have extensive experience with scalability issues. Sometimes the worst thing that can happen to great ideas is that they get diluted by unfortunate misunderstandings!

Reference: Thoughts on The Reactive Manifesto from our JCG partner Marko Asplund at the practicing techie blog.

Using jOOQ with Spring: CRUD

jOOQ is a library which helps us get back in control of our SQL. It can generate code from our database and lets us build typesafe database queries by using its fluent API. The earlier parts of this tutorial have taught us how we can configure the application context of our example application and generate code from our database. We are now ready to take one step forward and learn how we can create typesafe queries with jOOQ. This blog post describes how we can add CRUD operations to a simple application which manages todo entries. Let's get started.

Additional Reading:

- Using jOOQ with Spring: Configuration is the first part of this tutorial, and it describes how you can configure the application context of a Spring application which uses jOOQ. You can understand this blog post without reading the first part of this tutorial, but if you want to really use jOOQ in a Spring-powered application, I recommend that you read the first part of this tutorial as well.
- Using jOOQ with Spring: Code Generation is the second part of this tutorial, and it describes how we can reverse-engineer our database and create the jOOQ query classes which represent different database tables, records, and so on. Because these classes are the building blocks of typesafe SQL queries, I recommend that you read the second part of this tutorial before reading this blog post.

Creating the Todo Class

Let's start by creating a class which contains the information of a single todo entry. This class has the following fields:

- The id field contains the id of the todo entry.
- The creationTime field contains a timestamp which describes when the todo entry was persisted for the first time.
- The description field contains the description of the todo entry.
- The modificationTime field contains a timestamp which describes when the todo entry was last updated.
- The title field contains the title of the todo entry.

The name of this relatively simple class is Todo, and it follows three principles which are described in the following:

- We can create new Todo objects by using the builder pattern described in Effective Java by Joshua Bloch. If you are not familiar with this pattern, you should read an article titled Item 2: Consider a builder when faced with many constructor parameters.
- The title field is mandatory, and we cannot create a new Todo object which has either a null or an empty title. If we try to create a Todo object with an invalid title, an IllegalStateException is thrown.
- This class is immutable; in other words, all its fields are declared final. A construction example follows below.
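As a quick illustration of the first two principles, creating a todo entry might look like this (a usage sketch based on the builder described above):

// Valid: the mandatory title is provided, optional fields are set fluently.
Todo todo = Todo.getBuilder("Write blog post")
        .description("Using jOOQ with Spring: CRUD")
        .build();

// Invalid: an empty title makes build() throw an IllegalStateException.
Todo broken = Todo.getBuilder("").build();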
The source code of the Todo class looks as follows:

import org.apache.commons.lang3.builder.ToStringBuilder;
import org.joda.time.LocalDateTime;

import java.sql.Timestamp;

public class Todo {

    private final Long id;
    private final LocalDateTime creationTime;
    private final String description;
    private final LocalDateTime modificationTime;
    private final String title;

    private Todo(Builder builder) {
        this.id = builder.id;

        LocalDateTime creationTime = null;
        if (builder.creationTime != null) {
            creationTime = new LocalDateTime(builder.creationTime);
        }
        this.creationTime = creationTime;

        this.description = builder.description;

        LocalDateTime modificationTime = null;
        if (builder.modificationTime != null) {
            modificationTime = new LocalDateTime(builder.modificationTime);
        }
        this.modificationTime = modificationTime;

        this.title = builder.title;
    }

    public static Builder getBuilder(String title) {
        return new Builder(title);
    }

    //Getters are omitted for the sake of clarity.

    public static class Builder {

        private Long id;
        private Timestamp creationTime;
        private String description;
        private Timestamp modificationTime;
        private String title;

        public Builder(String title) {
            this.title = title;
        }

        public Builder description(String description) {
            this.description = description;
            return this;
        }

        public Builder creationTime(Timestamp creationTime) {
            this.creationTime = creationTime;
            return this;
        }

        public Builder id(Long id) {
            this.id = id;
            return this;
        }

        public Builder modificationTime(Timestamp modificationTime) {
            this.modificationTime = modificationTime;
            return this;
        }

        public Todo build() {
            Todo created = new Todo(this);

            String title = created.getTitle();

            if (title == null || title.length() == 0) {
                throw new IllegalStateException("title cannot be null or empty");
            }

            return created;
        }
    }
}

Let's find out why we need to get the current date and time and, more importantly, what the best way to do it is.

Getting the Current Date and Time

Because the creation time and modification time of each todo entry are stored in the database, we need a way to obtain the current date and time. Of course, we could simply create this information in our repository. The problem is that if we did this, we wouldn't be able to write automated tests which ensure that the creation time and the modification time are set correctly (we cannot write assertions for these fields because their values depend on the current time). That is why we need to create a separate component which is responsible for returning the current date and time. The DateTimeService interface declares two methods which are described in the following:

- The getCurrentDateTime() method returns the current date and time as a LocalDateTime object.
- The getCurrentTimestamp() method returns the current date and time as a Timestamp object.

The source code of the DateTimeService interface looks as follows:

import org.joda.time.LocalDateTime;
import java.sql.Timestamp;

public interface DateTimeService {

    public LocalDateTime getCurrentDateTime();

    public Timestamp getCurrentTimestamp();
}

Because our application is interested in the "real" time, we have to implement this interface and create a component which returns the real date and time. We can do this by following these steps:

1. Create a CurrentTimeDateTimeService class which implements the DateTimeService interface.
2. Annotate the class with the @Profile annotation and set the name of the profile to 'application'.
This means that the component can be registered in the Spring container when the active Spring profile is 'application'.
3. Annotate the class with the @Component annotation. This ensures that the class is found during classpath scanning.
4. Implement the methods declared in the DateTimeService interface. Each method must return the current date and time.

The source code of the CurrentTimeDateTimeService looks as follows:

import org.joda.time.LocalDateTime;
import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Component;

import java.sql.Timestamp;

@Profile("application")
@Component
public class CurrentTimeDateTimeService implements DateTimeService {

    @Override
    public LocalDateTime getCurrentDateTime() {
        return LocalDateTime.now();
    }

    @Override
    public Timestamp getCurrentTimestamp() {
        return new Timestamp(System.currentTimeMillis());
    }
}

Let's move on and start implementing the repository layer of our example application.

Implementing the Repository Layer

First we have to create a repository interface which provides CRUD operations for todo entries. This interface declares five methods which are described in the following:

- The Todo add(Todo todoEntry) method saves a new todo entry to the database and returns the information of the saved todo entry.
- The Todo delete(Long id) method deletes a todo entry and returns the deleted todo entry.
- The List<Todo> findAll() method returns all todo entries which are found in the database.
- The Todo findById(Long id) method returns the information of a single todo entry.
- The Todo update(Todo todoEntry) method updates the information of a todo entry and returns the updated todo entry.

The source code of the TodoRepository interface looks as follows:

import java.util.List;

public interface TodoRepository {

    public Todo add(Todo todoEntry);

    public Todo delete(Long id);

    public List<Todo> findAll();

    public Todo findById(Long id);

    public Todo update(Todo todoEntry);
}

Next we have to implement the TodoRepository interface. When we do that, we must follow this rule: all database queries created by jOOQ must be executed inside a transaction. The reason for this is that our application uses the TransactionAwareDataSourceProxy class, and if we execute database queries without a transaction, jOOQ will use a different connection for each operation. This can lead to race condition bugs. Typically the service layer acts as the transaction boundary, and each call to a jOOQ repository should be made inside a transaction. However, because programmers make mistakes too, we cannot trust that this is the case. That is why we must annotate the repository class or its methods with the @Transactional annotation. Now that we have got that covered, we are ready to create our repository class.

Creating the Repository Class

We can create the "skeleton" of our repository class by following these steps:

1. Create a JOOQTodoRepository class and implement the TodoRepository interface.
2. Annotate the class with the @Repository annotation. This ensures that the class is found during the classpath scan.
3. Add a DateTimeService field to the created class. As we remember, the DateTimeService interface declares the methods which are used to get the current date and time.
4. Add a DSLContext field to the created class. This interface acts as an entry point to the jOOQ API, and we can build our SQL queries by using it.
5. Add a public constructor to the created class and annotate the constructor with the @Autowired annotation.
This ensures that the dependencies of our repository are injected by using constructor injection.
6. Add a private Todo convertQueryResultToModelObject(TodosRecord queryResult) method to the repository class. This utility method is used by the public methods of our repository class. Implement it by creating a new Todo object from the information of the TodosRecord object given as a method parameter, and returning the created object.

The relevant part of the JOOQTodoRepository class looks as follows:

import net.petrikainulainen.spring.jooq.todo.db.tables.records.TodosRecord;
import org.jooq.DSLContext;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;

@Repository
public class JOOQTodoRepository implements TodoRepository {

    private final DateTimeService dateTimeService;

    private final DSLContext jooq;

    @Autowired
    public JOOQTodoRepository(DateTimeService dateTimeService, DSLContext jooq) {
        this.dateTimeService = dateTimeService;
        this.jooq = jooq;
    }

    private Todo convertQueryResultToModelObject(TodosRecord queryResult) {
        return Todo.getBuilder(queryResult.getTitle())
                .creationTime(queryResult.getCreationTime())
                .description(queryResult.getDescription())
                .id(queryResult.getId())
                .modificationTime(queryResult.getModificationTime())
                .build();
    }
}

Let's move on and implement the methods which provide CRUD operations for todo entries.

Adding a New Todo Entry

The public Todo add(Todo todoEntry) method of the TodoRepository interface is used to add new todo entries to the database. We can implement this method by following these steps:

1. Add a private TodosRecord createRecord(Todo todoEntry) method to the repository class and implement it by following these steps:
   - Get the current date and time by calling the getCurrentTimestamp() method of the DateTimeService interface.
   - Create a new TodosRecord object and set its field values by using the information of the Todo object given as a method parameter.
   - Return the created TodosRecord object.
2. Add the add() method to the JOOQTodoRepository class and annotate the method with the @Transactional annotation. This ensures that the INSERT statement is executed inside a read-write transaction.
3. Implement the add() method by following these steps:
   - Create a new INSERT statement by calling the insertInto(Table table) method of the DSLContext interface and specify that you want to insert information into the todos table.
   - Create a new TodosRecord object by calling the createRecord() method, passing the Todo object as a method parameter.
   - Set the inserted information by calling the set(Record record) method of the InsertSetStep interface, passing the created TodosRecord object as a method parameter.
   - Ensure that the INSERT query returns all inserted fields by calling the returning() method of the InsertReturningStep interface.
   - Get the TodosRecord object which contains the values of all inserted fields by calling the fetchOne() method of the InsertResultStep interface.
4. Convert the TodosRecord object returned by the INSERT statement into a Todo object by calling the convertQueryResultToModelObject() method.
5. Return the created Todo object.

The relevant part of the JOOQTodoRepository class looks as follows:

import net.petrikainulainen.spring.jooq.todo.db.tables.records.TodosRecord;
import org.jooq.DSLContext;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

import java.sql.Timestamp;

import static net.petrikainulainen.spring.jooq.todo.db.tables.Todos.TODOS;

@Repository
public class JOOQTodoRepository implements TodoRepository {

    private final DateTimeService dateTimeService;

    private final DSLContext jooq;

    //The constructor is omitted for the sake of clarity

    @Transactional
    @Override
    public Todo add(Todo todoEntry) {
        TodosRecord persisted = jooq.insertInto(TODOS)
                .set(createRecord(todoEntry))
                .returning()
                .fetchOne();

        return convertQueryResultToModelObject(persisted);
    }

    private TodosRecord createRecord(Todo todoEntry) {
        Timestamp currentTime = dateTimeService.getCurrentTimestamp();

        TodosRecord record = new TodosRecord();
        record.setCreationTime(currentTime);
        record.setDescription(todoEntry.getDescription());
        record.setModificationTime(currentTime);
        record.setTitle(todoEntry.getTitle());

        return record;
    }

    private Todo convertQueryResultToModelObject(TodosRecord queryResult) {
        return Todo.getBuilder(queryResult.getTitle())
                .creationTime(queryResult.getCreationTime())
                .description(queryResult.getDescription())
                .id(queryResult.getId())
                .modificationTime(queryResult.getModificationTime())
                .build();
    }
}

Section 4.3.3. The INSERT statement of the jOOQ reference manual provides additional information about inserting data into the database. A quick sketch of calling add() from a service follows below; after that, let's move on and find out how we can find all entries which are stored in the database.
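Invoking the repository from a transactional service might look like this (a hypothetical TodoService, not part of the original tutorial; the service layer as transaction boundary is discussed above):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class TodoService {

    private final TodoRepository repository;

    @Autowired
    public TodoService(TodoRepository repository) {
        this.repository = repository;
    }

    @Transactional
    public Todo create(String title, String description) {
        // The id, creationTime and modificationTime are filled in by the repository.
        Todo todoEntry = Todo.getBuilder(title)
                .description(description)
                .build();
        return repository.add(todoEntry);
    }
}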
Finding All Todo Entries

The public List<Todo> findAll() method of the TodoRepository interface returns all todo entries which are stored in the database. We can implement this method by following these steps:

1. Add the findAll() method to the repository class and annotate the method with the @Transactional annotation, setting the value of its readOnly attribute to true. This ensures that the SELECT statement is executed inside a read-only transaction.
2. Get all todo entries from the database by creating a new SELECT statement with the selectFrom(Table table) method of the DSLContext interface, specifying that you want to select information from the todos table, and getting a list of TodosRecord objects by calling the fetchInto(Class type) method of the ResultQuery interface.
3. Iterate over the returned list of TodosRecord objects, convert each TodosRecord object into a Todo object by calling the convertQueryResultToModelObject() method, and add each Todo object to the list of Todo objects.
4. Return the List which contains the found Todo objects.

The relevant part of the JOOQTodoRepository class looks as follows:

import net.petrikainulainen.spring.jooq.todo.db.tables.records.TodosRecord;
import org.jooq.DSLContext;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

import java.util.ArrayList;
import java.util.List;

import static net.petrikainulainen.spring.jooq.todo.db.tables.Todos.TODOS;

@Repository
public class JOOQTodoRepository implements TodoRepository {

    private final DSLContext jooq;

    //The constructor is omitted for the sake of clarity

    @Transactional(readOnly = true)
    @Override
    public List<Todo> findAll() {
        List<Todo> todoEntries = new ArrayList<>();

        List<TodosRecord> queryResults = jooq.selectFrom(TODOS).fetchInto(TodosRecord.class);

        for (TodosRecord queryResult : queryResults) {
            Todo todoEntry = convertQueryResultToModelObject(queryResult);
            todoEntries.add(todoEntry);
        }

        return todoEntries;
    }

    private Todo convertQueryResultToModelObject(TodosRecord queryResult) {
        return Todo.getBuilder(queryResult.getTitle())
                .creationTime(queryResult.getCreationTime())
                .description(queryResult.getDescription())
                .id(queryResult.getId())
                .modificationTime(queryResult.getModificationTime())
                .build();
    }
}

Section 4.3.2. The SELECT Statement of the jOOQ reference manual provides more information about selecting information from the database. Next we will find out how we can get a single todo entry from the database.

Finding a Single Todo Entry

The public Todo findById(Long id) method of the TodoRepository interface returns the information of a single todo entry. We can implement this method by following these steps:

1. Add the findById() method to the repository class and annotate the method with the @Transactional annotation, setting the value of its readOnly attribute to true. This ensures that the SELECT statement is executed inside a read-only transaction.
2. Get the information of a single todo entry from the database:
   - Create a new SELECT statement by calling the selectFrom(Table table) method of the DSLContext interface and specify that you want to select information from the todos table.
   - Specify the WHERE clause of the SELECT statement by calling the where(Collection conditions) method of the SelectWhereStep interface, ensuring that the SELECT statement returns only the todo entry whose id was given as a method parameter.
   - Get the TodosRecord object by calling the fetchOne() method of the ResultQuery interface.
3. If the returned TodosRecord object is null, no todo entry was found with the given id; in that case, throw a new TodoNotFoundException.
4. Convert the TodosRecord object returned by the SELECT statement into a Todo object by calling the convertQueryResultToModelObject() method.
5. Return the created Todo object.

The relevant part of the JOOQTodoRepository looks as follows:

import net.petrikainulainen.spring.jooq.todo.db.tables.records.TodosRecord;
import org.jooq.DSLContext;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

import static net.petrikainulainen.spring.jooq.todo.db.tables.Todos.TODOS;

@Repository
public class JOOQTodoRepository implements TodoRepository {

    private final DSLContext jooq;

    //The constructor is omitted for the sake of clarity.

    @Transactional(readOnly = true)
    @Override
    public Todo findById(Long id) {
        TodosRecord queryResult = jooq.selectFrom(TODOS)
                .where(TODOS.ID.equal(id))
                .fetchOne();

        if (queryResult == null) {
            throw new TodoNotFoundException("No todo entry found with id: " + id);
        }

        return convertQueryResultToModelObject(queryResult);
    }

    private Todo convertQueryResultToModelObject(TodosRecord queryResult) {
        return Todo.getBuilder(queryResult.getTitle())
                .creationTime(queryResult.getCreationTime())
                .description(queryResult.getDescription())
                .id(queryResult.getId())
                .modificationTime(queryResult.getModificationTime())
                .build();
    }
}

Section 4.3.2. The SELECT Statement of the jOOQ reference manual provides more information about selecting information from the database. Let's find out how we can delete a todo entry from the database.

Deleting a Todo Entry

The public Todo delete(Long id) method of the TodoRepository interface is used to delete a todo entry from the database. We can implement this method by following these steps:

1. Add the delete() method to the repository class and annotate the method with the @Transactional annotation. This ensures that the DELETE statement is executed inside a read-write transaction.
2. Find the deleted Todo object by calling the findById(Long id) method, passing the id of the deleted todo entry as a method parameter.
3. Delete the todo entry from the database:
   - Create a new DELETE statement by calling the delete(Table table) method of the DSLContext interface and specify that you want to delete information from the todos table.
   - Specify the WHERE clause of the DELETE statement by calling the where(Collection conditions) method of the DeleteWhereStep interface, ensuring that the DELETE statement deletes the todo entry whose id was given as a method parameter.
   - Execute the DELETE statement by calling the execute() method of the Query interface.
4. Return the information of the deleted todo entry.

The relevant part of the JOOQTodoRepository class looks as follows:

import net.petrikainulainen.spring.jooq.todo.db.tables.records.TodosRecord;
import org.jooq.DSLContext;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

import static net.petrikainulainen.spring.jooq.todo.db.tables.Todos.TODOS;

@Repository
public class JOOQTodoRepository implements TodoRepository {

    private final DSLContext jooq;

    //The constructor is omitted for the sake of clarity

    @Transactional
    @Override
    public Todo delete(Long id) {
        Todo deleted = findById(id);

        int deletedRecordCount = jooq.delete(TODOS)
                .where(TODOS.ID.equal(id))
                .execute();

        return deleted;
    }
}

Section 4.3.5. The DELETE Statement of the jOOQ reference manual provides additional information about deleting data from the database. Let's move on and find out how we can update the information of an existing todo entry.
Updating an Existing Todo Entry

The public Todo update(Todo todoEntry) method of the TodoRepository interface updates the information of an existing todo entry. We can implement this method by following these steps:

1. Add the update() method to the repository class and annotate the method with the @Transactional annotation. This ensures that the UPDATE statement is executed inside a read-write transaction.
2. Get the current date and time by calling the getCurrentTimestamp() method of the DateTimeService interface.
3. Update the information of the todo entry:
   - Create a new UPDATE statement by calling the update(Table table) method of the DSLContext interface and specify that you want to update information found in the todos table.
   - Set the new description, modification time, and title by calling the set(Field field, T value) method of the UpdateSetStep interface.
   - Specify the WHERE clause of the UPDATE statement by calling the where(Collection conditions) method of the UpdateWhereStep interface, ensuring that the UPDATE statement updates the todo entry whose id is found in the Todo object given as a method parameter.
   - Execute the UPDATE statement by calling the execute() method of the Query interface.
4. Get the information of the updated todo entry by calling the findById() method, passing the id of the updated todo entry as a method parameter.
5. Return the information of the updated todo entry.

The relevant part of the JOOQTodoRepository class looks as follows:

import org.jooq.DSLContext;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

import java.sql.Timestamp;

import static net.petrikainulainen.spring.jooq.todo.db.tables.Todos.TODOS;

@Repository
public class JOOQTodoRepository implements TodoRepository {

    private final DateTimeService dateTimeService;

    private final DSLContext jooq;

    //The constructor is omitted for the sake of clarity.

    @Transactional
    @Override
    public Todo update(Todo todoEntry) {
        Timestamp currentTime = dateTimeService.getCurrentTimestamp();

        int updatedRecordCount = jooq.update(TODOS)
                .set(TODOS.DESCRIPTION, todoEntry.getDescription())
                .set(TODOS.MODIFICATION_TIME, currentTime)
                .set(TODOS.TITLE, todoEntry.getTitle())
                .where(TODOS.ID.equal(todoEntry.getId()))
                .execute();

        return findById(todoEntry.getId());
    }
}

Section 4.3.4. The UPDATE Statement of the jOOQ reference manual provides additional information about updating information stored in the database. If you are using Firebird or PostgreSQL databases, you can use the RETURNING clause in the update statement (and avoid the extra select).

That's all, folks. Let's summarize what we learned from this blog post.

Summary

We have now implemented CRUD operations for todo entries. This tutorial has taught us three things:

- We learned how we can get the current date and time in a way which doesn't prevent us from writing automated tests for our example application.
- We learned how we can ensure that all database queries executed by jOOQ are run inside a transaction.
- We learned how we can create INSERT, SELECT, DELETE, and UPDATE statements by using the jOOQ API.

The next part of this tutorial describes how we can add a search function, which supports sorting and pagination, to our example application. The example application of this blog post is available at Github (the frontend is not implemented yet).

Reference: Using jOOQ with Spring: CRUD from our JCG partner Petri Kainulainen at the Petri Kainulainen blog.

ActiveMQ – Network of Brokers Explained

Objective

This 7-part blog series shares how to create a network of ActiveMQ brokers in order to achieve high availability and scalability.

Why a network of brokers?

The ActiveMQ message broker is a core component of the messaging infrastructure in an enterprise. It needs to be highly available and dynamically scalable to facilitate communication between dynamic, heterogeneous, distributed applications which have varying capacity needs. Scaling enterprise applications on commodity hardware is all the rage nowadays. ActiveMQ caters to that very well by being able to create a network of brokers to share the load. Many times, applications running across geographically distributed data centers need to coordinate messages. Running message producers and consumers across geographic regions/data centers can be architected better using a network of brokers. ActiveMQ uses transport connectors over which it communicates with message producers and consumers. However, in order to facilitate broker-to-broker communication, ActiveMQ uses network connectors. A network connector is a bridge between two brokers which allows on-demand message forwarding. In other words, if Broker B1 initiates a network connector to Broker B2, then the messages on a channel (queue/topic) on B1 get forwarded to B2 if there is at least one consumer on B2 for the same channel. If the network connector is configured to be duplex, messages also get forwarded from B2 to B1 on demand. This is very interesting, because it is now possible for brokers to communicate with each other dynamically. In this 7-part blog series, we will look into the following topics to gain an understanding of this very powerful ActiveMQ feature:

1. Network Connector Basics – Part 1
2. Duplex network connectors – Part 2
3. Load balancing consumers on local/remote brokers – Part 3
4. Load-balancing consumers/subscribers on remote brokers:
   - Queue: Load balance remote concurrent consumers – Part 4
   - Topic: Load balance durable subscriptions on remote brokers – Part 5
5. Store/forward messages and consumer failover, and how to prevent stuck messages – Part 6
6. Virtual Destinations – Part 7

To give credit where it is due, the following resources have helped me in creating this blog post series: Advanced Messaging with ActiveMQ by Dejan Bosanac [slides 32-36] and Understanding ActiveMQ Broker Networks by Jakub Korab.

Prerequisites:

- ActiveMQ 5.8.0 – to create broker instances
- Apache Ant – to run the ActiveMQ sample producer and consumers for the demo

We will use multiple ActiveMQ broker instances on the same machine for ease of demonstration.

Network Connector Basics – Part 1

The following diagram shows how a network connector functions. It bridges two brokers and is used to forward messages from Broker-1 to Broker-2 on demand, if it was established by Broker-1 to Broker-2. A network connector can also be duplex, so that messages can be forwarded in the opposite direction (from Broker-2 to Broker-1) once there is a consumer on Broker-1 for a channel which exists on Broker-2. More on this in Part 2. The "demand" in on-demand forwarding is easiest to see in code; a small consumer sketch follows below.
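The following is a minimal sketch of my own (not from the original post) of the kind of consumer that creates demand: when it connects to broker-2 on port 61626 and subscribes to foo.bar, broker-1 starts forwarding that queue's messages over the network connector. It uses the standard ActiveMQ JMS client (activemq-all on the classpath):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DemandConsumer {
    public static void main(String[] args) throws JMSException {
        // Connect to broker-2; the mere presence of this consumer is the
        // "demand" that triggers forwarding from broker-1.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61626");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("foo.bar"));

        Message message = consumer.receive(10_000); // wait up to 10 seconds
        if (message instanceof TextMessage) {
            System.out.println("Received: " + ((TextMessage) message).getText());
        }
        connection.close();
    }
}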
More on duplex connectors in Part 2.

Setup: network connector between broker-1 and broker-2

Create two broker instances, say broker-1 and broker-2:

Ashwinis-MacBook-Pro:bin akuntamukkala$ pwd
/Users/akuntamukkala/apache-activemq-5.8.0/bin
Ashwinis-MacBook-Pro:bin akuntamukkala$ ./activemq-admin create ../bridge-demo/broker-1
Ashwinis-MacBook-Pro:bin akuntamukkala$ ./activemq-admin create ../bridge-demo/broker-2

Since we will be running both brokers on the same machine, let's configure broker-2 so that there are no port conflicts.

Edit /Users/akuntamukkala/apache-activemq-5.8.0/bridge-demo/broker-2/conf/activemq.xml:
- Change the transport connector port from 61616 to 61626.
- Change the AMQP port from 5672 to 6672 (we won't be using it for this blog).

Edit /Users/akuntamukkala/apache-activemq-5.8.0/bridge-demo/broker-2/conf/jetty.xml:
- Change the web console port from 8161 to 9161.

Configure the network connector from broker-1 to broker-2 by adding the following XML snippet to /Users/akuntamukkala/apache-activemq-5.8.0/bridge-demo/broker-1/conf/activemq.xml:

<networkConnectors>
    <networkConnector name="T:broker1->broker2"
                      uri="static:(tcp://localhost:61626)"
                      duplex="false"
                      decreaseNetworkConsumerPriority="true"
                      networkTTL="2"
                      dynamicOnly="true">
        <excludedDestinations>
            <queue physicalName=">" />
        </excludedDestinations>
    </networkConnector>
    <networkConnector name="Q:broker1->broker2"
                      uri="static:(tcp://localhost:61626)"
                      duplex="false"
                      decreaseNetworkConsumerPriority="true"
                      networkTTL="2"
                      dynamicOnly="true">
        <excludedDestinations>
            <topic physicalName=">" />
        </excludedDestinations>
    </networkConnector>
</networkConnectors>

The above XML snippet configures two network connectors: "T:broker1->broker2" (topics only, as queues are excluded) and "Q:broker1->broker2" (queues only, as topics are excluded). This allows for a nice separation between network connectors used for topics and queues. The name can be arbitrary, although I prefer the pattern [type]:[source broker]->[destination broker]. The uri attribute specifies how to connect to broker-2.

Start broker-2:

Ashwinis-MacBook-Pro:bin akuntamukkala$ pwd
/Users/akuntamukkala/apache-activemq-5.8.0/bridge-demo/broker-2/bin
Ashwinis-MacBook-Pro:bin akuntamukkala$ ./broker-2 console

Start broker-1:

Ashwinis-MacBook-Pro:bin akuntamukkala$ pwd
/Users/akuntamukkala/apache-activemq-5.8.0/bridge-demo/broker-1/bin
Ashwinis-MacBook-Pro:bin akuntamukkala$ ./broker-1 console

The logs on broker-1 show the two network connectors being established with broker-2:

INFO | Establishing network connection from vm://broker-1?async=false&network=true to tcp://localhost:61626
INFO | Connector vm://broker-1 Started
INFO | Establishing network connection from vm://broker-1?async=false&network=true to tcp://localhost:61626
INFO | Network connection between vm://broker-1#24 and tcp://localhost/127.0.0.1:61626@52132(broker-2) has been established.
INFO | Network connection between vm://broker-1#26 and tcp://localhost/127.0.0.1:61626@52133(broker-2) has been established.

The web console on broker-1 at http://localhost:8161/admin/connections.jsp shows the two network connectors established to broker-2. The same page on broker-2 does not show any network connectors, since no network connectors were initiated by broker-2.

Let's see this in action. Let's produce 100 persistent messages on a queue called "foo.bar" on broker-1:

Ashwinis-MacBook-Pro:example akuntamukkala$ pwd
/Users/akuntamukkala/apache-activemq-5.8.0/example
Ashwinis-MacBook-Pro:example akuntamukkala$ ant producer -Durl=tcp://localhost:61616 -Dtopic=false -Ddurable=true -Dsubject=foo.bar -Dmax=100

The broker-1 web console at http://localhost:8161/admin/queues.jsp shows that 100 messages have been enqueued in the queue "foo.bar".

Let's start a consumer on the queue "foo.bar" on broker-2. The important thing to note here is that the destination name "foo.bar" must match exactly:

Ashwinis-MacBook-Pro:example akuntamukkala$ ant consumer -Durl=tcp://localhost:61626 -Dtopic=false -Dsubject=foo.bar

We find that all 100 messages from broker-1's foo.bar queue get forwarded to broker-2's foo.bar queue consumer:

- The broker-1 admin console at http://localhost:8161/admin/queues.jsp shows that all 100 messages have been dequeued (forwarded to broker-2 via the network connector).
- The broker-2 admin console at http://localhost:9161/admin/queues.jsp shows that the consumer we started has consumed all 100 messages, which were forwarded on demand from broker-1.
- The broker-2 consumer details on the foo.bar queue show the consumer we started.
- The broker-1 consumer details on the "foo.bar" queue show that a consumer is created on demand, named: [name of connector]_[destination broker]_inbound_

Thus we have seen the basics of the network connector in ActiveMQ. Stay tuned for Part 2…

Reference: ActiveMQ – Network of Brokers Explained from our JCG partner Ashwini Kuntamukkala at the Ashwini Kuntamukkala – Technology Enthusiast blog.
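For readers who would rather see the demo consumer as code than as the bundled Ant script, the following is a rough Java/JMS equivalent of the consumer command above. This is an illustrative sketch of my own (not the example shipped with ActiveMQ); it assumes activemq-all on the classpath.

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FooBarConsumer {

    public static void main(String[] args) throws Exception {
        // Connect to broker-2; the queue name must match "foo.bar" exactly
        // for the network connector to forward messages from broker-1.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61626");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("foo.bar");
        MessageConsumer consumer = session.createConsumer(queue);

        // Drain the forwarded messages; stop after 1 second of silence.
        Message message;
        while ((message = consumer.receive(1000)) != null) {
            System.out.println("Received: " + message);
        }

        connection.close();
    }
}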

How to do Continuous Integration with Java 8, NetBeans Platform 8, Jenkins, Jacoco and Sonar

Intro

Java 8 is here, the promised revolution is finally released, and I am sure that a lot of you have the same question in mind: "Should I use it in my project?". Well, I had the same question for a few months, and today that I have an answer I would like to share it with you. A lot of aspects have influenced this decision, but in this post I want to focus on one in particular: can I continue to do Continuous Integration with Java 8 and the NetBeans Platform? The main question was around the maturity of the tools necessary to do CI, and how easy it was to integrate them with the Ant build scripts of the NetBeans Platform. Fortunately, we found that it is possible and easy to do! I would also like to thank Alberto Requena Sanchez for his contribution to this article.

The Technical Environment

Working in a project where safety and quality are the main drivers, CI is vital. For this reason I started with my team a proof of concept to show that the following technologies were ready to work together:

- Java 8, NetBeans 8.0 & Ant
- JUnit 4 & Jacoco 0.7.1
- Jenkins & Sonar 4.2

The scope of this post is to explain all the steps needed to install and set up the necessary tools to have a completely working CI server for Java 8. Note that the proof has been done on a developer machine on Windows 7, but it is easy to do the same on a Linux server. The next diagram shows at a high level the architecture that will be described in the post.

Java 8, NetBeans 8.0 & Ant

Java 8 is released; get it here, install it, study it (preferably) and start to use it! We are using the NetBeans Platform 8.0 to create a modular application. This application has a multi-layered architecture where each layer is a suite of modules, and where the final executable is just an integrated set of suites. We are using Ant to build our projects, but if you are using Maven the procedure can be even simpler, since the Sonar integration in Jenkins can be done via a plugin that uses Maven.

JUnit 4 & Jacoco 0.7.1

Naturally, we are doing unit tests, and for this reason we use JUnit 4. It is well integrated everywhere, especially in NetBeans. Jacoco is a great tool for the generation of code coverage, and since version 0.7.1 it fully supports Java 8.

Jenkins & Sonar 4.2

Jenkins is the engine of our CI server; it integrates with all the above described technologies without any issue. The tested version is 1.554. Sonar does all the quality analysis of the code. Release 4.2 is fully compatible with Java 8. Using Sonar with Ant needs a small library that contains the target to be integrated in Jenkins. If you are using Maven instead, you can just install the plugin for Maven.

Starting the puzzle

Step 1 – NetBeans

- Install Java 8 & NetBeans 8.0.
- Create a module suite with several modules, several classes and several JUnit tests.
- Commit the code into your source code version management server.
- Inside the harness of NetBeans:
  - Create a folder in the harness named "jacoco-0.7.1" containing the downloaded Jacoco jars.
  - Create a folder in the harness named "sonar-ant-task" and put the downloaded Sonar Ant jars inside.
  - Create a file in the harness named sonar-jacoco-module.xml and paste the following code inside:

<?xml version="1.0" encoding="UTF-8"?>
<project name="sonar-jacoco-module" basedir="."
         xmlns:jacoco="antlib:org.jacoco.ant"
         xmlns:sonar="antlib:org.sonar.ant">
    <description>Builds the module suite otherSuite.</description>

    <property name="jacoco.dir" location="${nbplatform.default.harness.dir}/jacoco-0.7.1"/>
    <property name="result.exec.file" location="${jacoco.dir}/jacoco.exec"/>
    <property name="build.test.results.dir" location="build/test/unit/results"/>
    <property file="nbproject/project.properties"/>

    <!-- Step 1: Import JaCoCo Ant tasks -->
    <taskdef uri="antlib:org.jacoco.ant" resource="org/jacoco/ant/antlib.xml">
        <classpath path="${jacoco.dir}/jacocoant.jar"/>
    </taskdef>

    <!-- Target at the level of modules -->
    <target name="-do-junit" depends="test-init">
        <echo message="Doing testing for jacoco" />
        <macrodef name="junit-impl">
            <attribute name="test.type"/>
            <attribute name="disable.apple.ui" default="false"/>
            <sequential>
                <jacoco:coverage destfile="${build.test.results.dir}/${code.name.base}_jacoco.exec">
                    <junit showoutput="true" fork="true" failureproperty="tests.failed"
                           errorproperty="tests.failed" filtertrace="${test.filter.trace}"
                           tempdir="${build.test.@{test.type}.results.dir}" timeout="${test.timeout}">
                        <batchtest todir="${build.test.@{test.type}.results.dir}">
                            <fileset dir="${build.test.@{test.type}.classes.dir}"
                                     includes="${test.includes}"
                                     excludes="${test.excludes}"/>
                        </batchtest>
                        <classpath refid="test.@{test.type}.run.cp"/>
                        <syspropertyset refid="test.@{test.type}.properties"/>
                        <jvmarg value="${test.bootclasspath.prepend.args}"/>
                        <jvmarg line="${test.run.args}"/>
                        <!-- needed so that tests do NOT steal focus when running; works in the latest Apple JDK update only -->
                        <sysproperty key="apple.awt.UIElement" value="@{disable.apple.ui}"/>
                        <formatter type="brief" usefile="false"/>
                        <formatter type="xml"/>
                    </junit>
                </jacoco:coverage>
                <copy file="${build.test.results.dir}/${code.name.base}_jacoco.exec"
                      todir="${suite.dir}/build/coverage"/>
                <!-- Copy the results of all the unit tests of all the modules into one common
                     folder at the level of the suite, so that Sonar can find those files to
                     generate the associated reports -->
                <copy todir="${suite.dir}/build/test-results">
                    <fileset dir="${build.test.results.dir}">
                        <include name="**/TEST*.xml"/>
                    </fileset>
                </copy>
                <fail if="tests.failed" unless="continue.after.failing.tests">Some tests failed; see details above.</fail>
            </sequential>
        </macrodef>
        <junit-impl test.type="${run.test.type}" disable.apple.ui="${disable.apple.ui}"/>
    </target>
</project>

The scope of this file is to override the -do-junit target, adding the JaCoCo coverage, and to copy the results of the unit tests of each module into the build folder of the suite, so that Sonar will find all of them together to perform its analysis.

Create a file in the harness named sonar-jacoco-suite.xml and paste the following code inside:

<?xml version="1.0" encoding="UTF-8"?>
<project name="sonar-jacoco-suite" basedir="."
         xmlns:jacoco="antlib:org.jacoco.ant"
         xmlns:sonar="antlib:org.sonar.ant">
    <description>Builds the module suite otherSuite.</description>

    <property name="jacoco.dir" location="${nbplatform.default.harness.dir}/jacoco-0.7.1"/>
    <property name="result.exec.file" location="build/coverage"/>

    <!-- Define the SonarQube global properties (the most usual way is to pass these properties via the command line) -->
    <property name="sonar.jdbc.url"
              value="jdbc:mysql://localhost:3306/sonar?useUnicode=true&amp;characterEncoding=utf8" />
    <property name="sonar.jdbc.username" value="sonar" />
    <property name="sonar.jdbc.password" value="sonar" />

    <!-- Define the SonarQube project properties -->
    <property name="sonar.projectKey" value="org.codehaus.sonar:example-java-ant" />
    <property name="sonar.projectName" value="Simple Java Project analyzed with the SonarQube Ant Task" />
    <property name="sonar.projectVersion" value="1.0" />
    <property name="sonar.language" value="java" />

    <!-- Load the project properties file for retrieving the modules of the suite -->
    <property file="nbproject/project.properties"/>

    <!-- Use JavaScript functions to build the paths of the data sources for the Sonar configuration -->
    <script language="javascript">
        <![CDATA[
        // The "modules" property lists the suite's modules, separated by colons
        modulesName = project.getProperty("modules");
        // Split on every colon (the original replace(":", ",") only replaced
        // the first occurrence, which breaks suites with more than two modules)
        res = modulesName.split(":");
        srcModules = "";
        binariesModules = "";
        testModules = "";
        // Build the paths
        for (var i = 0; i < res.length; i++) {
            srcModules += res[i] + "/src,";
            binariesModules += res[i] + "/build/classes,";
            testModules += res[i] + "/test,";
        }
        // Remove the last comma
        srcModules = srcModules.substring(0, srcModules.length - 1);
        binariesModules = binariesModules.substring(0, binariesModules.length - 1);
        testModules = testModules.substring(0, testModules.length - 1);
        // Store the results in new properties
        project.setProperty("srcModulesPath", srcModules);
        project.setProperty("binariesModulesPath", binariesModules);
        project.setProperty("testModulesPath", testModules);
        ]]>
    </script>

    <!-- Use the computed values -->
    <property name="sonar.sources" value="${srcModulesPath}"/>
    <property name="sonar.binaries" value="${binariesModulesPath}" />
    <property name="sonar.tests" value="${testModulesPath}" />

    <!-- Define where the coverage reports are located -->
    <!-- Tells SonarQube to reuse existing reports for unit tests execution and coverage reports -->
    <property name="sonar.dynamicAnalysis" value="reuseReports" />
    <!-- Tells SonarQube where the unit tests execution reports are -->
    <property name="sonar.junit.reportsPath" value="build/test-results" />
    <!-- Tells SonarQube that the code coverage tool for unit tests is JaCoCo -->
    <property name="sonar.java.coveragePlugin" value="jacoco" />
    <!-- Tells SonarQube where the unit tests code coverage report is -->
    <property name="sonar.jacoco.reportPath" value="${result.exec.file}/merged.exec" />

    <!-- Step 1: Import JaCoCo Ant tasks -->
    <taskdef uri="antlib:org.jacoco.ant" resource="org/jacoco/ant/antlib.xml">
        <classpath path="${jacoco.dir}/jacocoant.jar"/>
    </taskdef>

    <target name="merge-coverage">
        <jacoco:merge destfile="${result.exec.file}/merged.exec">
            <fileset dir="${result.exec.file}" includes="*.exec"/>
        </jacoco:merge>
    </target>

    <target name="sonar">
        <taskdef uri="antlib:org.sonar.ant" resource="org/sonar/ant/antlib.xml">
            <!-- Update the following line, or put the "sonar-ant-task-*.jar" file in your "$HOME/.ant/lib" folder -->
            <classpath path="${harness.dir}/sonar-ant-task-2.1/sonar-ant-task-2.1.jar" />
        </taskdef>
        <!--
        Execute the SonarQube analysis -->
        <sonar:sonar />
    </target>
</project>

The scope of this file is to define, at the level of the suite, the Sonar configuration and the Sonar Ant task. If you are using a special database or special users for Sonar, this is where you must change the configuration. Another task defined here is the JaCoCo merge, which takes all the generated .exec files of the modules and merges them into one single .exec file in the build folder of the suite, to allow Sonar to perform its analysis.

Replace the content of the build.xml of each module with this one:

<description>Builds, tests, and runs the project com.infrabel.jacoco.</description>
<property file="nbproject/suite.properties"/>
<property file="${suite.dir}/nbproject/private/platform-private.properties"/>
<property file="${user.properties.file}"/>
<import file="${nbplatform.default.harness.dir}/sonar-jacoco-module.xml"/>
<import file="nbproject/build-impl.xml"/>

Replace the content of the build.xml of each suite with this one:

<description>Builds the module suite otherSuite.</description>
<property file="nbproject/private/platform-private.properties"/>
<property file="${user.properties.file}"/>
<import file="${nbplatform.default.harness.dir}/sonar-jacoco-suite.xml"/>
<import file="nbproject/build-impl.xml"/>

Step 2 – Jenkins

In "Manage Jenkins -> Manage Plugins", go to the available list and install (if not already present) the following plugins:

- JaCoCo
- Mercurial or Subversion
- Sonar

If you are behind a firewall or proxy and have issues configuring the network settings, you can always download and install the plugins manually from here. In that case, remember to download the dependencies of each plugin first. In "Manage Jenkins -> Configure System", check that all plugins are correctly set up; see the following screenshots for an example (replace the folders with the right ones for you). Create a new free-style project, configure the version control of your preference, and in the "Build" panel add the following three "Invoke Ant" tasks. Finally, in the "Post-build Actions" panel, add a new "Record Jacoco Coverage Report" configured like this one.

Step 3 – Sonar

Create a database following this script, and optionally run this query to make the connection work:

GRANT ALL PRIVILEGES ON sonar.* TO 'sonar'@'localhost';

Go to the configuration file of Sonar (sonar.properties), located in the conf folder of the installation, and enable the use of MySQL:

# Permissions to create tables, indices and triggers
# must be granted to JDBC user.
# The schema must be created first.
sonar.jdbc.username=sonar
sonar.jdbc.password=sonar

#----- MySQL 5.x
# Comment the embedded database and uncomment the following
# line to use MySQL
sonar.jdbc.url=jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true

In the Sonar configuration, update the Java plugin if necessary to be compatible with Java 8. If necessary, configure your proxy as well, again in the sonar.properties file.

Done! Now everything is set up: you can go to NetBeans, do a build, commit your code, then launch the build in Jenkins, and after the build is OK, check the project in Sonar. That's all! I hope I did not forget anything, but in case you find some errors during the process, do not hesitate to leave a comment and I will try to find the solution.

Reference: How to do Continuous Integration with Java 8, NetBeans Platform 8, Jenkins, Jacoco and Sonar from our JCG partner Marco Di Stefano at the Refactoring Ideas blog.
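For completeness, here is the kind of JUnit 4 test the pipeline above measures. This is a hypothetical example of my own (the class and names are invented, not taken from the article): any test executed by the overridden -do-junit target ends up in the per-module .exec file and, after merge-coverage, in the Sonar report.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class GreeterTest {

    // Hypothetical class under test, assumed to live in one of the suite's modules.
    static class Greeter {
        String greet(String name) {
            return "Hello, " + name + "!";
        }
    }

    @Test
    public void greetBuildsMessage() {
        // Run by the overridden -do-junit target; JaCoCo records the covered
        // lines into <module>/build/test/unit/results/<module>_jacoco.exec.
        assertEquals("Hello, JCG!", new Greeter().greet("JCG"));
    }
}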

Seven Databases in Seven Days – Riak

In this post I am summarizing the three days of Riak, which is the second database in the Seven Databases in Seven Days book. This post exists mostly so that I remember some tweaks I had to make while reading this chapter, as the book wasn't always entirely correct. A good blog, which I used a little, can be found at: http://blog.wakatta.jp/blog/2011/12/09/seven-databases-in-seven-weeks-riak-day-3/ (this link points to the third Riak day). I have everything pushed to GitHub as raw material: https://github.com/eyalgo/seven-dbs-in-seven-weeks

Installing

The book recommends installing from the source code itself. I needed to install Erlang as well. Besides the information in the book, the following link was the most helpful: http://docs.basho.com/riak/latest/ops/building/installing/from-source/ I installed everything under /usr/local/riak/.

Start / Stop / Restart

A nice command line to start/stop/restart all the servers:

# under /usr/local/riak/riak-1.4.8/dev
for node in `ls`; do $node/bin/riak start; done
# change start to restart or stop

Port

The ports installed on my machine were 10018 for dev1, 10028 for dev2, and so on. The port is located in the app.config file, under the etc folder.

Day 3 Issues

Pre-commit

I kept getting a "PUT aborted by pre-commit hook" message instead of the one described in the book. I had to add the language (javascript) to the operation:

curl -i -X PUT http://localhost:10018/riak/animals -H "content-type: application/json" -d '{"props":{"precommit":[{"name":"good_score","language":"javascript"}]}}'

(see: http://blog.sacaluta.com/2012/07/riak-precommit-hook-example.html)

Running a Solr query

Running the suggested query from the book (curl http://localhost:10018/solr/animals/select?wt=json&q=nickname:rin%20breed:shepherd&q.op=and) kept returning 400 – Bad Request. All I needed to do was surround the URL with single quotes; otherwise the shell treats the & characters as command separators and truncates the query string.

Inverted index

Running the link as mentioned in the book gives a bad response:

Invalid link walk query submitted. Valid link walk query format is: ...

The correct way, as described in http://docs.basho.com/riak/latest/dev/using/2i/, is:

curl http://localhost:10018/buckets/animals/index/mascot_bin/butler

Conclusion

The Riak chapter gives a taste of this database. It explains more about the "tooling" of it than the application of it. I feel that it didn't explain much about why someone would use it instead of something else (let's wait for Redis). The book had errors in how to run commands, and I had to find out by myself how to fix these problems. Perhaps it's because I'm reading the eBook (PDF on my computer and mobi on my Kindle), and the hard copy has fewer issues. The good part of this problem is that I had to drill down, read more online, and learn more from those mistakes.

Reference: Seven Databases in Seven Days – Riak from our JCG partner Eyal Golan at the Learning and Improving as a Craftsman Developer blog.
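As a side note of mine (not from the post): the shell-quoting pitfalls above disappear entirely if you issue the same HTTP requests from code. The following minimal Java sketch runs the corrected secondary-index query using only the JDK, assuming a local dev node on port 10018 and the animals bucket from the book.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class RiakIndexQuery {

    public static void main(String[] args) throws Exception {
        // Same 2i lookup as the curl command above; no shell is involved,
        // so no quoting of & or ? is needed.
        URL url = new URL("http://localhost:10018/buckets/animals/index/mascot_bin/butler");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        System.out.println("HTTP " + conn.getResponseCode());
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // a JSON document listing the matching keys
            }
        }
    }
}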

Hi there . . ! How would you rate your Java/Java EE skills?

"To know, is to know that you know nothing. That is the meaning of true knowledge." – Socrates

This post provides the reader with a quick overview of the Java ecosystem and its technology stack. To be honest, there have been many revolutionary changes and additions to the Java Platform – from Java EE 7 and Java SE 8 to Java Embedded 8… wow! Exciting times! In the midst of all this, why did I decide to write a blog post about a topic as rudimentary as the Java platform and its related technologies?

How many times have you conducted an interview and asked a candidate to provide a rough estimate/rating of their Java skill set (on a specific scale)? What kind of answers have you received? 8/10, 4/5, 6.5/10? I am pretty surprised at how the candidate manages to muster these figures in a matter of a few seconds (I really don't think that experience matters here!). So the premise of this post is to:

- Drive home the point that "How would you rate your Java/J2EE skills?" is an unreasonable question – even though I have made the mistake of asking it on a number of occasions!
- Help you answer it!

Read on…

Java technology can be broadly categorized into:

- Java SE
- Java EE
- Java Embedded
- Java FX

Let's begin…

Java Standard Edition (Java SE)

The Platform itself! The mother of all other Java related technologies, ranging from Java EE on enterprise servers to Java Embedded on resource constrained devices. Latest version – Java SE 8 (click here for more on the new stuff in Java SE 8). Java is not just a programming language, as many people mistakenly assume; it's a complete platform. (Sorry about the fact that I had to plug in the tabular content in the form of images. For some reason I can't seem to find support for inserting tables into my WordPress blogs, hence I decided to write the content in Word and use snapshots of it.) Primary Components

Java Enterprise Edition (Java EE)

For developing enterprise grade applications which are distributed, multi-tiered, scalable, robust and fault tolerant. Latest version – Java EE 7 (click here for more on the latest Java EE 7 features). It is a standards driven model:

- Java EE 7 defines a unified model for developing rich and powerful server side solutions.
- It is composed of individual specifications which are standards in themselves. Each of these specifications is a set of interfaces/APIs which are implemented by vendors of application servers (more details here).
- There are 32 specifications which Java EE defines.

Alright then! I am guessing you have had enough of Java EE! Let's move on.

Java Embedded

The Java Embedded technologies are focused on mobile and embedded devices (RFIDs, sensors, microcontrollers, Blu-ray discs etc.) and are powered mainly by different flavours of Java ME and SE for specific device capabilities.

Java Micro Edition (Java ME) flavours:

Java ME Embedded Client
- Based on the Connected Device Configuration (CDC) – a subset of the Java SE platform for small devices like mobile phones.
- Sufficient for devices having 8 MB RAM or more.

Java ME Embedded
- A new launch, based on the Connected Limited Device Configuration (CLDC) – a JVM which is optimized for really small embedded systems which have 130 KB or more memory.
- Suitable for memory/resource constrained embedded devices such as sensors, wireless modules etc.
- Hailed as the platform of choice for developing applications in the Internet of Things (IoT) era.
- The latest version is Java ME Embedded 8 (Early Access) – it lends support for language features from Java SE 8.

Java SE flavours:

Java SE Embedded
- Its JVM implementation is suitable for mid to high range embedded devices; 32 MB or more memory is required.
- Allows developers to configure their own custom JRE as per application requirements.
- Latest version – Java SE Embedded 8.

Java Embedded Suite
- A new platform – an enriched version of Java SE Embedded.
- Adds enterprise functionality like support for the GlassFish server (yes – an application server on an embedded device!), Java DB, and REST support through a JAX-RS implementation.
- Oracle Event Processing – an optional module in the Java Embedded Suite. It aims at extending real time, event driven processing support to embedded devices.

Java FX

JavaFX is leveraged to build rich client applications. It completes the puzzle, so to say: it complements the Java server side development stack and provides a comprehensive UI platform including graphics and media API support. It's tailor made to deliver high performance with hardware accelerated graphics.

Ok, so… what was the whole point of this post? To help you answer the inevitable "How would you rate your Java/J2EE skills?". Basically, this is what you can do:

- Summarize this post – it's not going to be tough… trust me!
- Ask the interviewer to be more specific as far as Java is concerned, given the fact that you just explained the length and breadth of the Java platform!

Although this post only touched upon the various Java technology flavours, it's quite evident how vast the ecosystem is. That's precisely why we mortals cannot expect to attach numbers and random figures to our Java knowledge. Instead of fooling around with Java ratings, let's just have fun with the platform and language and leverage it to build stuff which the world has not yet imagined!

Reference: Hi there . . ! How would you rate your Java/Java EE skills? from our JCG partner Abhishek Gupta at the Object Oriented.. blog.

We’re Hacking JDBC, so You Don’t Have To

"We love working with JDBC"

Said no one. Ever.

On a more serious note, JDBC is actually a very awesome API, if you think about it. It is probably also one of the very reasons Java has become the popular platform it is today. Before JDK 1.1, and before ODBC (and that's a very long time ago), it was hard to imagine any platform that would standardise database access at all. Heck, SQL itself was hardly even standardised at the time, and along came Java with JDBC, a simple API with only a few items that you have to know of in everyday work:

- Connection: the object that models all your DB interactions
- PreparedStatement: the object that lets you execute a statement
- ResultSet: the object that lets you fetch data from the database

That's it!

Back to reality

That was the theory. In practice, enterprise software operating on top of JDBC quickly evolved towards this: JDBC is one of the last resorts for Java developers, where they can feel like real hackers, hacking this very stateful, very verbose, very arcane API in many ways. Pretty much everyone operating on JDBC will implement wrappers around the API to prevent at least:

- Common syntax errors
- Bind variable index mismatches
- Dynamic SQL construction
- Edge cases around the usage of LOBs
- Resource handling and closing
- Array and UDT management
- Stored procedure abstraction

… and so much more. So while everyone is doing the above infrastructure work, they're not working on their business logic. And pretty much everyone does these things when working with JDBC. Hibernate and JPA do not have most of these problems, but they're not SQL APIs any longer, either. Here are a couple of examples that we have been solving inside of jOOQ, so you don't have to.

How to fetch generated keys in some databases:

case DERBY:
case H2:
case MARIADB:
case MYSQL: {
    try {
        listener.executeStart(ctx);
        result = ctx.statement().executeUpdate();
        ctx.rows(result);
        listener.executeEnd(ctx);
    }

    // Yes. Not all warnings may have been consumed yet
    finally {
        consumeWarnings(ctx, listener);
    }

    // Yep. Should be as simple as this. But it isn't.
    rs = ctx.statement().getGeneratedKeys();

    try {
        List<Object> list = new ArrayList<Object>();

        // Some JDBC drivers seem to illegally return null
        // from getGeneratedKeys() sometimes
        if (rs != null) {
            while (rs.next()) {
                list.add(rs.getObject(1));
            }
        }

        // Because most JDBC drivers cannot fetch all
        // columns, only identity columns
        selectReturning(ctx.configuration(), list.toArray());
        return result;
    }
    finally {
        JDBCUtils.safeClose(rs);
    }
}

How to handle BigInteger and BigDecimal:

else if (type == BigInteger.class) {
    // The SQLite JDBC driver doesn't support BigDecimals
    if (ctx.configuration().dialect() == SQLDialect.SQLITE) {
        return Convert.convert(rs.getString(index), (Class) BigInteger.class);
    }
    else {
        BigDecimal result = rs.getBigDecimal(index);
        return (T) (result == null ? null : result.toBigInteger());
    }
}
else if (type == BigDecimal.class) {
    // The SQLite JDBC driver doesn't support BigDecimals
    if (ctx.configuration().dialect() == SQLDialect.SQLITE) {
        return Convert.convert(rs.getString(index), (Class) BigDecimal.class);
    }
    else {
        return (T) rs.getBigDecimal(index);
    }
}

How to fetch all exceptions from SQL Server:

switch (configuration.dialect().family()) {
    case SQLSERVER:
        consumeLoop: for (;;)
            try {
                if (!stmt.getMoreResults() && stmt.getUpdateCount() == -1)
                    break consumeLoop;
            }
            catch (SQLException e) {
                previous.setNextException(e);
                previous = e;
            }
}

Convinced? This is nasty code. And we have more examples of nasty code here, or in our source code.
All of these examples show that when working with JDBC, you'll write code that you don't want to, and shouldn't have to, write in your application. This is why… we have been hacking JDBC, so you don't have to.

Reference: We're Hacking JDBC, so You Don't Have To from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.
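By way of contrast, and as my own illustrative sketch rather than an excerpt from the jOOQ codebase, this is roughly what the "fetch generated keys" chore shrinks to when a wrapper API such as jOOQ's DSLContext handles the driver quirks. The table and column names are invented, and the example assumes jOOQ and the H2 driver on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import org.jooq.DSLContext;
import org.jooq.Record;
import org.jooq.impl.DSL;

public class NoMoreJdbcHacks {

    public static void main(String[] args) throws Exception {
        try (Connection connection =
                DriverManager.getConnection("jdbc:h2:mem:demo")) {

            // jOOQ wraps the Connection; the warning-consuming, null-checking,
            // resource-closing ceremony from the excerpts above happens inside.
            DSLContext ctx = DSL.using(connection);

            ctx.execute("create table book (id int auto_increment primary key, title varchar(100))");

            // One fluent call instead of the getGeneratedKeys() dance.
            Record book = ctx.insertInto(DSL.table("book"), DSL.field("title"))
                             .values("Hacking JDBC")
                             .returning(DSL.field("id"))
                             .fetchOne();

            System.out.println("Generated id: " + book.getValue("id"));
        }
    }
}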