


Open Session In View Design Tradeoffs

The Open Session in View (OSIV) pattern gives rise to different opinions in the Java development community. Let's go over OSIV and some of the pros and cons of this pattern.

The problem

The problem that OSIV solves is a mismatch between the Hibernate concept of a session and its lifecycle and the way that many server-side view technologies work. In a typical Java frontend application, the service layer starts by querying some of the data needed to build the view. The remaining data can be lazy-loaded later, on the condition that the Hibernate session remains open – and there lies the problem. Between the moment the service layer method finishes its execution and the moment the view is rendered, Hibernate has already committed the transaction and closed the session. When the view tries to lazy load the extra data that it needs, it finds the Hibernate session closed, causing a LazyInitializationException.

The OSIV solution

OSIV tackles this problem by ensuring that the Hibernate session is kept open all the way up to the rendering of the view – hence the name of the pattern. Because the session is kept open, no more LazyInitializationExceptions occur. The session or entity manager is kept open by means of a filter that is added to the request processing chain. In the case of JPA, the OpenEntityManagerInViewFilter will create an entity manager at the beginning of the request and bind it to the request thread. The service layer will then be executed and the business transaction committed or rolled back, but the transaction manager will not remove the entity manager from the thread after the commit. When the view rendering starts, the transaction manager checks whether there is already an entity manager bound to the thread, and if so uses it instead of creating a new one. After the request is processed, the filter unbinds the entity manager from the thread.
The end result is that the same entity manager used to commit the business transaction is kept around in the request thread, allowing the view rendering code to lazy load the needed data.

Going back to the original problem

Let's step back for a moment and return to the initial problem: the LazyInitializationException. Is this exception really a problem? It can also be seen as a warning sign of a wrongly written query in the service layer. When building a view and its backing services, the developer knows upfront what data is needed, and can make sure that the needed data is loaded before the rendering starts. Several relation types such as one-to-many use lazy loading by default, but that default can be overridden if needed at query time using the following syntax:

select p from Person p left join fetch p.invoices

This means that lazy loading can be turned off on a case-by-case basis depending on the data needed by the view.

OSIV in projects I've worked on

In projects I have worked on that used OSIV, we could see via query logging that the database was getting hit with a high number of SQL queries, sometimes to the point that developers had to turn off the Hibernate SQL logging. The performance of these applications was impacted, but it was kept manageable using second-level caches, and due to the fact that these were intranet-based applications with a limited number of users.

Pros of OSIV

The main advantage of OSIV is that it makes working with ORM and the database more transparent:

- Fewer queries need to be manually written
- Less awareness is required about the Hibernate session and how to solve LazyInitializationExceptions

Cons of OSIV

OSIV seems to be easy to misuse and can accidentally introduce N+1 performance problems in the application. On projects I've worked on, OSIV did not work out well in the long term.
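The failure mode itself has nothing Hibernate-specific about it, and a toy sketch can make it concrete. The `Session` and `LazyInvoices` classes below are hypothetical stand-ins for the Hibernate session and a lazily loaded association – not the real Hibernate API:

```java
import java.util.Arrays;
import java.util.List;

// Toy stand-in for a Hibernate session; not the real API.
class Session {
    private boolean open = true;
    void close() { open = false; }
    boolean isOpen() { return open; }
}

// A "lazily loaded" collection that needs an open session, like a Hibernate proxy.
class LazyInvoices {
    private final Session session;
    LazyInvoices(Session session) { this.session = session; }

    List<String> get() {
        if (!session.isOpen()) {
            // Hibernate would throw LazyInitializationException here
            throw new IllegalStateException("session closed - cannot lazy load");
        }
        return Arrays.asList("invoice-1", "invoice-2");
    }
}

public class OsivDemo {
    public static void main(String[] args) {
        Session session = new Session();
        LazyInvoices invoices = new LazyInvoices(session);

        session.close(); // the service layer committed and closed the session
        try {
            invoices.get(); // the view now tries to render the invoices
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // prints: session closed - cannot lazy load
        }
    }
}
```

In real Hibernate code the `IllegalStateException` is a `LazyInitializationException`, and OSIV's answer is simply to delay the `close()` call until after the view has rendered.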
The alternative of writing custom queries that eagerly fetch data depending on the use case is manageable, and it turned out well in other projects I've worked on.

Alternatives to OSIV

Besides the application-level solution of writing custom queries to pre-fetch the needed data, there are other framework-level approaches to the problem OSIV addresses. The Seam Framework was built by some of the same developers as Hibernate, and solves the problem by introducing the notion of a conversation.

Reference: Open Session In View Design Tradeoffs from our JCG partner Aleksey Novik at the JHades blog.

Building a simple RESTful API with Spark

Disclaimer: This post is about the Java micro web framework named Spark and not about the data processing engine Apache Spark. In this blog post we will see how Spark can be used to build a simple web service. As mentioned in the disclaimer, Spark is a micro web framework for Java inspired by the Ruby framework Sinatra. Spark aims for simplicity and provides only a minimal set of features. However, it provides everything needed to build a web application in a few lines of Java code.

Getting started

Let's assume we have a simple domain class with a few properties and a service that provides some basic CRUD functionality:

public class User {
  private String id;
  private String name;
  private String email;

  // getter/setter
}

public class UserService {
  // returns a list of all users
  public List<User> getAllUsers() { .. }

  // returns a single user by id
  public User getUser(String id) { .. }

  // creates a new user
  public User createUser(String name, String email) { .. }

  // updates an existing user
  public User updateUser(String id, String name, String email) { .. }
}

We now want to expose the functionality of UserService as a RESTful API (for simplicity we will skip the hypermedia part of REST). For accessing, creating and updating user objects we want to use the following URL patterns:

GET /users         Get a list of all users
GET /users/<id>    Get a specific user
POST /users        Create a new user
PUT /users/<id>    Update a user

The returned data should be in JSON format. To get started with Spark we need the following Maven dependencies:

<dependency>
  <groupId>com.sparkjava</groupId>
  <artifactId>spark-core</artifactId>
  <version>2.0.0</version>
</dependency>
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-simple</artifactId>
  <version>1.7.7</version>
</dependency>

Spark uses SLF4J for logging, so we need an SLF4J binder to see log and error messages. In this example we use the slf4j-simple dependency for this purpose.
However, you can also use Log4j or any other binder you like. Having slf4j-simple in the classpath is enough to see log output in the console. We will also use GSON for generating JSON output and JUnit to write a simple integration test. You can find these dependencies in the complete pom.xml.

Returning all users

Now it is time to create a class that is responsible for handling incoming requests. We start by implementing the GET /users request that should return a list of all users.

import static spark.Spark.*;

public class UserController {

  public UserController(final UserService userService) {
    get("/users", new Route() {
      @Override
      public Object handle(Request request, Response response) {
        // process request
        return userService.getAllUsers();
      }
    });

    // more routes
  }
}

Note the static import of spark.Spark.* in the first line. This gives us access to various static methods including get(), post(), put() and more. Within the constructor the get() method is used to register a Route that listens for GET requests on /users. A Route is responsible for processing requests. Whenever a GET /users request is made, the handle() method will be called. Inside handle() we return an object that should be sent to the client (in this case a list of all users). Spark benefits greatly from Java 8 lambda expressions. Route is a functional interface (it contains only one method), so we can implement it using a Java 8 lambda expression. Using a lambda expression, the Route definition from above looks like this:

get("/users", (req, res) -> userService.getAllUsers());

To start the application we have to create a simple main() method. Inside main() we create an instance of our service and pass it to our newly created UserController:

public class Main {
  public static void main(String[] args) {
    new UserController(new UserService());
  }
}

If we now run main(), Spark will start an embedded Jetty server that listens on port 4567.
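Before moving on, it is worth seeing why the lambda form is a drop-in replacement for the anonymous class. The snippet below is a self-contained sketch with a simplified `Route` stand-in (one abstract method, no real Spark types), showing that both forms satisfy the same functional interface:

```java
public class RouteDemo {

    // Simplified stand-in for spark.Route: a single abstract method
    // makes this a functional interface, so a lambda can implement it.
    interface Route {
        Object handle(String request, String response);
    }

    static Object dispatch(Route route) {
        return route.handle("req", "res");
    }

    public static void main(String[] args) {
        // Anonymous-class form, as in the first UserController example
        Route verbose = new Route() {
            @Override
            public Object handle(String request, String response) {
                return "users";
            }
        };

        // Equivalent lambda form
        Route terse = (req, res) -> "users";

        System.out.println(dispatch(verbose)); // prints: users
        System.out.println(dispatch(terse));   // prints: users
    }
}
```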
We can test our first route by initiating a GET http://localhost:4567/users request. In case the service returns a list with two user objects, the response body might look like this:

[com.mscharhag.sparkdemo.User@449c23fd, com.mscharhag.sparkdemo.User@437b26fe]

Obviously this is not the response we want. Spark uses an interface called ResponseTransformer to convert objects returned by routes to an actual HTTP response. ResponseTransformer looks like this:

public interface ResponseTransformer {
  String render(Object model) throws Exception;
}

ResponseTransformer has a single method that takes an object and returns a String representation of this object. The default implementation of ResponseTransformer simply calls toString() on the passed object (which creates output like that shown above). Since we want to return JSON, we have to create a ResponseTransformer that converts the passed objects to JSON. We use a small JsonUtil class with two static methods for this:

public class JsonUtil {

  public static String toJson(Object object) {
    return new Gson().toJson(object);
  }

  public static ResponseTransformer json() {
    return JsonUtil::toJson;
  }
}

toJson() is a universal method that converts an object to JSON using GSON. The second method makes use of Java 8 method references to return a ResponseTransformer instance. ResponseTransformer is again a functional interface, so it can be satisfied by providing an appropriate method implementation (toJson()). So whenever we call json() we get a new ResponseTransformer that makes use of our toJson() method. In our UserController we can pass a ResponseTransformer as a third argument to Spark's get() method:

import static com.mscharhag.sparkdemo.JsonUtil.*;

public class UserController {

  public UserController(final UserService userService) {
    get("/users", (req, res) -> userService.getAllUsers(), json());

    ...
  }
}

Note again the static import of JsonUtil.* in the first line.
This gives us the option to create a new ResponseTransformer by simply calling json(). Our response now looks like this:

[{
  "id": "1866d959-4a52-4409-afc8-4f09896f38b2",
  "name": "john",
  "email": "john@foobar.com"
},{
  "id": "90d965ad-5bdf-455d-9808-c38b72a5181a",
  "name": "anna",
  "email": "anna@foobar.com"
}]

We still have a small problem. The response is returned with the wrong Content-Type. To fix this, we can register a Filter that sets the JSON Content-Type:

after((req, res) -> {
  res.type("application/json");
});

Filter is again a functional interface and can therefore be implemented by a short lambda expression. After a request is handled by our Route, the filter changes the Content-Type of every response to application/json. We can also use before() instead of after() to register a filter. Then the Filter would be called before the request is processed by the Route. The GET /users request should be working now!

Returning a specific user

To return a specific user we simply create a new route in our UserController:

get("/users/:id", (req, res) -> {
  String id = req.params(":id");
  User user = userService.getUser(id);
  if (user != null) {
    return user;
  }
  res.status(400);
  return new ResponseError("No user with id '%s' found", id);
}, json());

With req.params(":id") we can obtain the :id path parameter from the URL. We pass this parameter to our service to get the corresponding user object. We assume the service returns null if no user with the passed id is found. In this case, we change the HTTP status code to 400 (Bad Request) and return an error object. ResponseError is a small helper class we use to convert error messages and exceptions to JSON. It looks like this:

public class ResponseError {
  private String message;

  public ResponseError(String message, String... args) {
    this.message = String.format(message, args);
  }

  public ResponseError(Exception e) {
    this.message = e.getMessage();
  }

  public String getMessage() {
    return this.message;
  }
}

We are now able to query for a single user with a request like this:

GET /users/5f45a4ff-35a7-47e8-b731-4339c84962be

If a user with this id exists, we will get a response that looks something like this:

{
  "id": "5f45a4ff-35a7-47e8-b731-4339c84962be",
  "name": "john",
  "email": "john@foobar.com"
}

If we use an invalid user id, a ResponseError object will be created and converted to JSON. In this case the response looks like this:

{
  "message": "No user with id 'foo' found"
}

Creating and updating users

Creating and updating users is again very easy. Like returning the list of all users, it is done using a single service call:

post("/users", (req, res) -> userService.createUser(
    req.queryParams("name"),
    req.queryParams("email")
), json());

put("/users/:id", (req, res) -> userService.updateUser(
    req.params(":id"),
    req.queryParams("name"),
    req.queryParams("email")
), json());

To register a route for HTTP POST or PUT requests we simply use the static post() and put() methods of Spark. Inside a Route we can access HTTP POST parameters using req.queryParams(). For simplicity reasons (and to show another Spark feature) we do not do any validation inside the routes. Instead we assume that the service will throw an IllegalArgumentException if we pass in invalid values. Spark gives us the option to register ExceptionHandlers. An ExceptionHandler will be called if an Exception is thrown while processing a route. ExceptionHandler is another single-method interface we can implement using a Java 8 lambda expression:

exception(IllegalArgumentException.class, (e, req, res) -> {
  res.status(400);
  res.body(toJson(new ResponseError(e)));
});

Here we create an ExceptionHandler that is called if an IllegalArgumentException is thrown.
The caught Exception object is passed as the first parameter. We set the response code to 400 and add an error message to the response body. If the service throws an IllegalArgumentException when the email parameter is empty, we might get a response like this:

{
  "message": "Parameter 'email' cannot be empty"
}

The complete source of the controller can be found here.

Testing

Because of Spark's simple nature it is very easy to write integration tests for our sample application. Let's start with this basic JUnit test setup:

public class UserControllerIntegrationTest {

  @BeforeClass
  public static void beforeClass() {
    Main.main(null);
  }

  @AfterClass
  public static void afterClass() {
    Spark.stop();
  }

  ...
}

In beforeClass() we start our application by simply running the main() method. After all tests have finished we call Spark.stop(). This stops the embedded server that runs our application. After that, we can send HTTP requests within test methods and validate that our application returns the correct response. A simple test that sends a request to create a new user can look like this:

@Test
public void aNewUserShouldBeCreated() {
  TestResponse res = request("POST", "/users?name=john&email=john@foobar.com");
  Map<String, String> json = res.json();
  assertEquals(200, res.status);
  assertEquals("john", json.get("name"));
  assertEquals("john@foobar.com", json.get("email"));
  assertNotNull(json.get("id"));
}

request() and TestResponse are two small self-made test utilities. request() sends an HTTP request to the passed URL and returns a TestResponse instance. TestResponse is just a small wrapper around some HTTP response data. The source of request() and TestResponse is included in the complete test class found on GitHub.

Conclusion

Compared to other web frameworks Spark provides only a small set of features. However, it is so simple that you can build small web applications within a few minutes (even if you have not used Spark before).
If you want to look into Spark you should definitely use Java 8, which greatly reduces the amount of code you have to write. You can find the complete source of the sample project on GitHub.

Reference: Building a simple RESTful API with Spark from our JCG partner Michael Scharhag at the mscharhag, Programming and Stuff blog.

Out of memory: Kill process or sacrifice child

It is 6 AM. I am awake, summarizing the sequence of events leading to my way-too-early wake-up call. As those stories start, my phone alarm went off. Sleepy and grumpy, I checked the phone to see whether I was really crazy enough to set the wake-up alarm at 5 AM. No, it was our monitoring system indicating that one of the Plumbr services went down. As a seasoned veteran in the domain, I made the first correct step towards a solution by turning on the espresso machine. With a cup of coffee I was equipped to tackle the problem. The first suspect, the application itself, seemed to have behaved completely normally before the crash. No errors, no warning signs, no trace of any suspects in the application logs. The monitoring we have in place had noticed the death of the process and had already restarted the crashed service. But as I already had caffeine in my bloodstream, I started to gather more evidence. 30 minutes later I found myself staring at the following in /var/log/kern.log:

Jun 4 07:41:59 plumbr kernel: [70667120.897649] Out of memory: Kill process 29957 (java) score 366 or sacrifice child
Jun 4 07:41:59 plumbr kernel: [70667120.897701] Killed process 29957 (java) total-vm:2532680kB, anon-rss:1416508kB, file-rss:0kB

Apparently we became victims of the Linux kernel internals. As you all know, Linux is built with a bunch of unholy creatures (called 'daemons'). Those daemons are shepherded by several kernel jobs, one of which seems to be an especially sinister entity. All modern Linux kernels have a built-in mechanism called the "Out Of Memory killer", which can annihilate your processes under extremely low memory conditions. When such a condition is detected, the killer is activated and picks a process to kill. The target is picked using a set of heuristics, scoring all processes and selecting the one with the worst score to kill.
Understanding the "Out Of Memory killer"

By default, Linux kernels allow processes to request more memory than is currently available in the system. This makes all the sense in the world, considering that most processes never actually use all of the memory they allocate. The easiest comparison is with cable operators. They sell all their consumers a 100Mbit download promise, far exceeding the actual bandwidth present in their network. The bet is again on the fact that the users will not all simultaneously use their allocated download limit. Thus one 10Gbit link can successfully serve way more than the 100 users our simple math would permit. A side effect of this approach becomes visible if one of your programs is on the path to depleting the system's memory. This can lead to extremely low memory conditions, where no pages can be allocated to a process. You might have faced such a situation, where not even the root account can kill the offending task. To prevent such situations, the killer activates and identifies the process to be killed. You can read more about fine-tuning the behaviour of the "Out of memory killer" in this article in the RedHat documentation.

What was triggering the Out of memory killer?

Now that we have the context, it is still unclear what was triggering the "killer" and woke me up at 5 AM. Some more investigation revealed that:

- The configuration in /proc/sys/vm/overcommit_memory allowed overcommitting memory – it was set to 1, indicating that every malloc() should succeed.
- The application was running on an EC2 m1.small instance. EC2 instances have swapping disabled by default.

Those two facts, combined with the sudden spike in traffic to our services, resulted in the application requesting more and more memory to support those extra users.
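If you want to check how your own box is configured, the overcommit setting can be read straight from procfs. The sketch below is Linux-only by nature; on other systems it simply reports that the file is missing:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class OvercommitCheck {

    // Returns the kernel overcommit mode, or "unknown" when not on Linux.
    // 0 = heuristic overcommit, 1 = always overcommit (every malloc() succeeds),
    // 2 = strict accounting
    static String readMode() {
        Path p = Paths.get("/proc/sys/vm/overcommit_memory");
        try {
            return Files.exists(p) ? new String(Files.readAllBytes(p)).trim() : "unknown";
        } catch (IOException e) {
            return "unknown";
        }
    }

    public static void main(String[] args) {
        System.out.println("vm.overcommit_memory = " + readMode());
    }
}
```

On the instance from this story this would have printed `vm.overcommit_memory = 1`, the "every malloc() should succeed" mode mentioned above.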
The overcommit configuration allowed more and more memory to be allocated for this greedy process, eventually triggering the "Out of memory killer", which was doing exactly what it is meant to do: killing our application and waking me up in the middle of the night.

Example

When I described the behaviour to engineers, one of them was interested enough to create a small test case reproducing the error. When you compile and launch the following Java code snippet on Linux (I used the latest stable Ubuntu version):

package eu.plumbr.demo;

public class OOM {

  public static void main(String[] args) {
    java.util.List l = new java.util.ArrayList();
    for (int i = 10000; i < 100000; i++) {
      try {
        l.add(new int[100_000_000]);
      } catch (Throwable t) {
        t.printStackTrace();
      }
    }
  }
}

then you will face the very same Out of memory: Kill process <PID> (java) score <SCORE> or sacrifice child message. Note that you might need to tweak the swapfile and heap sizes; in my test case I used the 2g heap specified via -Xmx2g and the following configuration for swap:

swapoff -a
dd if=/dev/zero of=swapfile bs=1024 count=655360
mkswap swapfile
swapon swapfile

Solution?

There are several ways to handle such a situation. In our example, we just migrated the system to an instance with more memory. I also considered allowing swapping, but after consulting with engineering I was reminded of the fact that garbage collection processes on the JVM are not good at operating under swapping, so this option was off the table. Other possibilities would involve fine-tuning the OOM killer, scaling the load horizontally across several small instances, or reducing the memory requirements of the application. If you found the study interesting, follow Plumbr on Twitter or RSS; we keep publishing our insights about Java internals.

Reference: Out of memory: Kill process or sacrifice child from our JCG partner Jaan Angerpikk at the Plumbr Blog blog.

Spring/Hibernate improved SQL logging with log4jdbc

Hibernate provides SQL logging out of the box, but such logging only shows the prepared statements, not the actual SQL queries sent to the database. It also does not log the execution time of each query, which is useful for performance troubleshooting. This blog post will go over how to set up Hibernate query logging, and then compare it to the logging that can be obtained with log4jdbc.

The Hibernate query logging functionality

Hibernate does not log the real SQL queries sent to the database. This is because Hibernate interacts with the database via the JDBC driver, to which it sends prepared statements but not the actual queries. So Hibernate can only log the prepared statements and the values of their binding parameters, not the actual SQL queries themselves. This is how a query looks when logged by Hibernate:

select /* load your.package.Employee */ this_.code, ... from employee this_ where this_.employee_id=?

TRACE 12-04-2014@16:06:02 BasicBinder - binding parameter [1] as [NUMBER] - 1000

See the post Why and where is Hibernate doing this SQL query? for how to set up this type of logging.

Using log4jdbc

For a developer it's useful to be able to copy-paste a query from the log and execute it directly in an SQL client, but the variable placeholders ? make that unfeasible. Log4jdbc is an open source tool that allows you to do just that, and more. Log4jdbc is a spy driver that wraps itself around the real JDBC driver, logging queries as they go through it. The version linked from this post provides Spring integration, unlike several other log4jdbc forks.

Setting up log4jdbc

First include the log4jdbc-remix library in your pom.xml. This library is a fork of the original log4jdbc:

<dependency>
  <groupId>org.lazyluke</groupId>
  <artifactId>log4jdbc-remix</artifactId>
  <version>0.2.7</version>
</dependency>

Next, find the definition of the data source in the Spring configuration.
As an example, when using the JNDI lookup element, this is how the data source looks:

<jee:jndi-lookup id="dataSource" jndi-name="java:comp/env/jdbc/some-db" />

After finding the data source definition, rename it as follows:

<jee:jndi-lookup id="dataSourceSpied" jndi-name="java:comp/env/jdbc/some-db" />

Then define a new log4jdbc data source that wraps the real data source, and give it the original name:

<bean id="dataSource" class="net.sf.log4jdbc.Log4jdbcProxyDataSource">
  <constructor-arg ref="dataSourceSpied" />
  <property name="logFormatter">
    <bean class="net.sf.log4jdbc.tools.Log4JdbcCustomFormatter">
      <property name="loggingType" value="SINGLE_LINE" />
      <property name="margin" value="19" />
      <property name="sqlPrefix" value="SQL:::" />
    </bean>
  </property>
</bean>

With this configuration, the query logging should already be working. It's possible to customize the logging level of the several log4jdbc loggers available. The original log4jdbc documentation provides more information on the available loggers:

- jdbc.sqlonly: logs only SQL
- jdbc.sqltiming: logs the SQL, post-execution, including timing execution statistics
- jdbc.audit: logs ALL JDBC calls except for ResultSets
- jdbc.resultset: all calls to ResultSet objects are logged
- jdbc.connection: logs connection open and close events

The jdbc.audit logger is especially useful to validate the scope of transactions, as it logs the begin/commit/rollback events of a database transaction.
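The same wiring can also be sketched in Java-based Spring configuration. This is an untested sketch: it assumes the log4jdbc-remix setter names and the LoggingType enum mirror the XML properties shown above, and `realDataSource()` is a hypothetical factory standing in for however the underlying data source is actually built:

```java
import javax.sql.DataSource;

import net.sf.log4jdbc.Log4jdbcProxyDataSource;
import net.sf.log4jdbc.tools.Log4JdbcCustomFormatter;
import net.sf.log4jdbc.tools.LoggingType;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LoggingDataSourceConfig {

    // The real data source, e.g. obtained via JNDI; placeholder here.
    @Bean
    public DataSource dataSourceSpied() {
        return realDataSource(); // hypothetical factory method
    }

    // Wrap the real data source so all JDBC traffic is logged,
    // and expose the wrapper under the original "dataSource" name.
    @Bean
    public DataSource dataSource() {
        Log4JdbcCustomFormatter formatter = new Log4JdbcCustomFormatter();
        formatter.setLoggingType(LoggingType.SINGLE_LINE);
        formatter.setMargin(19);
        formatter.setSqlPrefix("SQL:::");

        Log4jdbcProxyDataSource proxy = new Log4jdbcProxyDataSource(dataSourceSpied());
        proxy.setLogFormatter(formatter);
        return proxy;
    }

    private DataSource realDataSource() {
        // build the actual pool / JNDI lookup here
        throw new UnsupportedOperationException("wire the real data source");
    }
}
```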
This is the proposed log4j configuration that will print only the SQL queries together with their execution time:

<logger name="jdbc.sqltiming" additivity="false">
  <level value="info" />
</logger>
<logger name="jdbc.sqlonly" additivity="false">
  <level value="error" />
</logger>
<logger name="jdbc.audit" additivity="false">
  <level value="error" />
</logger>
<logger name="jdbc.resultset" additivity="false">
  <level value="error" />
</logger>
<logger name="jdbc.resultsettable" additivity="false">
  <level value="error" />
</logger>
<logger name="jdbc.connection" additivity="false">
  <level value="error" />
</logger>

Conclusion

Using log4jdbc does require some initial setup, but once it's in place it's really convenient to have. Having a true query log is also useful for performance troubleshooting, as will be described in an upcoming post.

Reference: Spring/Hibernate improved SQL logging with log4jdbc from our JCG partner Aleksey Novik at the JHades blog.

Beauty and strangeness of generics

Recently, I was preparing for my Oracle Certified Professional, Java SE 7 Programmer exam and I happened to encounter some rather strange-looking constructions in the realm of generics in Java. Nevertheless, I have also seen some clever and elegant pieces of code. I found these examples worth sharing, not only because they can make your design choices easier and the resulting code more robust and reusable, but also because some of them are quite tricky when you are not used to generics. I decided to break this post into four chapters that pretty much map my experience with generics during my studies and work.

Do you understand generics?

When we take a look around, we can observe that generics are quite heavily used in many different frameworks around the Java universe. They span from web application frameworks to collections in Java itself. Since this topic has been explained by many before me, I will just list resources that I found valuable and move on to stuff that sometimes does not get any mention at all or is not explained quite well (usually in notes or articles posted online). So, if you lack an understanding of core generics concepts, you can check out some of the following materials:

SCJP Sun Certified Programmer for Java 6 Exam by Katherine Sierra and Bert Bates

For me, the primary aim of this book was to prepare myself for the OCP exams provided by Oracle. But I came to realize that the notes in this book regarding generics can also be beneficial for anyone studying generics and how to use them. Definitely worth reading; however, the book was written for Java 6, so the explanation is not complete and you will have to look up missing stuff like the diamond operator by yourself.

Lesson: Generics (Updated) by Oracle

A resource provided by Oracle itself. You can go through many simple examples in this Java tutorial.
It will provide you with a general orientation in generics and set the stage for more complex topics such as those in the following book.

Java Generics and Collections by Maurice Naftalin and Philip Wadler

Another great Java book from O'Reilly Media's production. This book is well organized and the material is well presented with all details included. This book is unfortunately also rather dated, so the same restrictions as with the first resource apply.

What are you not allowed to do with generics?

Assuming you are aware of generics and want to find out more, let's move on to what cannot be done. Surprisingly, there is quite a lot of stuff that cannot be used with generics. I selected the following six examples of pitfalls to avoid when working with generics.

Static field of type <T>

One common mistake many inexperienced programmers make is to try to declare static members of a generic type. As you can see in the following example, any attempt to do so ends up with a compiler error like this one: Cannot make a static reference to the non-static type T.

public class StaticMember<T> {
  // causes compiler error
  static T member;
}

Instance of type <T>

Another mistake is to try to instantiate a type by calling new on the generic type parameter. Doing so causes a compiler error saying: Cannot instantiate the type T.

public class GenericInstance<T> {

  public GenericInstance() {
    // causes compiler error
    new T();
  }
}

Incompatibility with primitive types

One of the biggest limitations when working with generics is their seeming incompatibility with primitive types. It is true that you can't use primitives directly in your declarations; however, you can substitute them with the appropriate wrapper types and you are fine to go.
The whole situation is presented in the example below:

public class Primitives<T> {
  public final List<T> list = new ArrayList<>();

  public static void main(String[] args) {
    final int i = 1;

    // causes compiler error
    // final Primitives<int> prim = new Primitives<>();
    final Primitives<Integer> prim = new Primitives<>();

    prim.list.add(i);
  }
}

The first instantiation of the Primitives class would fail during compilation with an error similar to this one: Syntax error on token "int", Dimensions expected after this token. This limitation is bypassed using a wrapper type and a little bit of auto-boxing magic.

Array of type <T>

Another obvious limitation of using generics is the inability to instantiate generically typed arrays. The reason is pretty obvious given the basic characteristic of array objects – they preserve their type information during runtime. Should their runtime type integrity be violated, the runtime exception ArrayStoreException comes to save the day.

public class GenericArray<T> {
  // this one is fine
  public T[] notYetInstantiatedArray;

  // causes compiler error
  public T[] array = new T[5];
}

However, if you try to directly instantiate a generic array, you will end up with a compiler error like this one: Cannot create a generic array of T.

Generic exception class

Sometimes a programmer might need to pass an instance of a generic type along with an exception being thrown. This is not possible to do in Java. The following example depicts such an effort:

// causes compiler error
public class GenericException<T> extends Exception {}

When you try to create such an exception, you will end up with a message like this: The generic class GenericException<T> may not subclass java.lang.Throwable.

Alternate meaning of keywords super and extends

The last limitation worth mentioning, especially for newcomers, is the alternate meaning of the keywords super and extends when it comes to generics. This is really useful to know in order to produce well-designed code that makes use of generics.

<? extends T> – the wildcard refers to any type extending type T, and the type T itself.

<? super T> – the wildcard refers to any super type of T, and the type T itself.

Bits of beauty

One of my favorite things about Java is its strong typing. As we all know, generics were introduced in Java 5 and they were used to make it easier for us to work with collections (they were used in more areas than just collections, but this was one of the core arguments for generics in the design phase). Even though generics provide only compile-time protection and do not enter the bytecode, they provide a rather efficient way to ensure type safety. The following examples show some of the nice features and use cases for generics.

Generics work with classes as well as interfaces

This might not come as a surprise at all, but yes – interfaces and generics are compatible constructs. Even though the use of generics in conjunction with interfaces is quite a common occurrence, I find this fact to be a pretty cool feature. It allows programmers to create even more efficient code with type safety and code reuse in mind. For example, consider the interface Comparable from the package java.lang:

public interface Comparable<T> {
  public int compareTo(T o);
}

The simple introduction of generics made it possible to omit the instanceof check from the compareTo method, making the code more cohesive and increasing its readability. In general, generics helped make the code easier to read and understand, and they helped with the introduction of type order.

Generics allow for elegant use of bounds

When it comes to bounding the wildcard, there is a pretty good example of what can be achieved in the library class Collections. This class declares the method copy, which is defined in the following example and uses bounded wildcards to ensure type safety for copy operations on lists:

public static <T> void copy(List<? super T> dest, List<? extends T> src) { ... }

Let's take a closer look.
Method copy is declared as a static generic method returning void. It accepts two arguments – destination and source (and both are bounded). The destination is bounded to store only types that are supertypes of T, or the type T itself. The source, on the other hand, is bounded to contain only types extending T, or the type T itself. These two constraints guarantee that both collections, as well as the copy operation, stay type safe. This is something we don't have to care about with arrays, since they prevent any type safety violations by throwing the aforementioned ArrayStoreException. Generics support multiple bounds It is not hard to imagine why one would want to use more than just one simple bounding condition. Actually, it is pretty easy to do so. Consider the following example: I need to create a method that accepts an argument that is both Comparable and a List of numbers. In pre-generics times, a developer would have been forced to create an unnecessary interface ComparableList in order to fulfill the described contract. public class BoundsTest { interface ComparableList extends List, Comparable {}class MyList implements ComparableList { ... }public static void doStuff(final ComparableList comparableList) {}public static void main(final String[] args) { BoundsTest.doStuff(new BoundsTest().new MyList()); } } With the following take on this task, we get to disregard those limitations. Using generics allows us to create a concrete class that fulfills the required contract, yet leaves the doStuff method as open as possible. The only downside I found was the rather verbose syntax. But since it still remains nicely readable and easily understandable, I can overlook this flaw. public class BoundsTest {class MyList<T> implements List<T>, Comparable<T> { ...
}public static <T, U extends List<T> & Comparable<T>> void doStuff(final U comparableList) {}public static void main(final String[] args) { BoundsTest.doStuff(new BoundsTest().new MyList<String>()); } } Bits of strangeness I decided to dedicate the last chapter of this post to the two strangest constructs or behaviors I have encountered so far. It is highly possible that you will never encounter code like this, but I find it interesting enough to mention. So without any further ado, let's meet the weird stuff. Awkward code As with any other language construct, you might end up facing some really weird-looking code. I was wondering what the most bizarre code would look like and whether it would even pass compilation. The best I could come up with is the following piece of code. Can you guess whether this code compiles or not? public class AwkwardCode<T> { public static <T> T T(T T) { return T; } } Even though this is an example of really bad coding, it will compile successfully and the application will run without any problems. The first line declares the generic class AwkwardCode and the second line declares the generic method T. Method T is a generic method returning instances of T. It takes a parameter of type T, unfortunately also called T. This parameter is also returned in the method body. Generic method invocation This last example shows how type inference works when combined with generics. I stumbled upon this problem when I saw a piece of code that did not contain a generic signature for a method call yet claimed to pass compilation. When someone has only a little experience with generics, code like this might startle them at first sight. Can you explain the behavior of the following code? public class GenericMethodInvocation {public static void main(final String[] args) { // 1. returns true System.out.println(Compare.<String> genericCompare("1", "1")); // 2. compilation error System.out.println(Compare.<String> genericCompare("1", new Long(1))); // 3. returns false System.out.println(Compare.genericCompare("1", new Long(1))); } }class Compare {public static <T> boolean genericCompare(final T object1, final T object2) { System.out.println("Inside generic"); return object1.equals(object2); } } Ok, let's break this down. The first call to genericCompare is pretty straightforward. I denote what type the method's arguments will be and supply two objects of that type – no mysteries here. The second call to genericCompare fails to compile, since Long is not String. And finally, the third call to genericCompare returns false. This is rather strange, since the method is declared to accept two parameters of the same type, yet it is perfectly fine to pass it a String literal and a Long object. This is caused by the type erasure process executed during compilation. Since the method call is not using the <String> syntax of generics, the compiler has no way to tell you that you are passing two different types. Always remember that the closest shared inherited type is used to find the matching method declaration. Meaning, when genericCompare accepts object1 and object2, they are cast to Object, yet compared as String and Long instances due to runtime polymorphism – hence the method returns false. Now let's modify this code a little bit. public class GenericMethodInvocation {public static void main(final String[] args) { // 1. returns true System.out.println(Compare.<String> genericCompare("1", "1")); // 2. compilation error System.out.println(Compare.<String> genericCompare("1", new Long(1))); // 3. returns false System.out.println(Compare.genericCompare("1", new Long(1)));// compilation error Compare.<? extends Number> randomMethod(); // runs fine Compare.<Number> randomMethod(); } }class Compare {public static <T> boolean genericCompare(final T object1, final T object2) { System.out.println("Inside generic"); return object1.equals(object2); }public static boolean genericCompare(final String object1, final Long object2) { System.out.println("Inside non-generic"); return object1.equals(object2); }public static void randomMethod() {} } This new code sample modifies the Compare class by adding a non-generic version of the genericCompare method and defining a new randomMethod that does nothing and gets called twice from the main method in the GenericMethodInvocation class. This code makes the second call to genericCompare possible, since I provided a new method that matches the given call. But this raises a question about yet another strange behavior – is the second call generic or not? As it turns out – no, it is not. Yet it is still possible to use the <String> syntax of generics. To demonstrate this ability more clearly, I created a new call to randomMethod with this generic syntax. This is possible thanks to the type erasure process again – it erases this generic syntax. However, this changes when a bounded wildcard comes on stage. The compiler sends us a clear message in the form of a compiler error saying: Wildcard is not allowed at this location, which makes it impossible to compile the code. To make the code compile and run, you have to comment out line number 12. When the code is modified this way, it produces the following output: Inside generic true Inside non-generic false Inside non-generic falseReference: Beauty and strangeness of generics from our JCG partner Jakub Stas at the Jakub Stas blog....
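To tie the wildcard bounds discussed earlier back to something runnable, here is a minimal sketch (my own example, not from the article) exercising the Collections.copy signature shown above:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class CopyBoundsDemo {
    public static void main(String[] args) {
        List<Integer> src = Arrays.asList(1, 2, 3);
        // dest may hold any supertype of the source element type
        List<Number> dest = new ArrayList<>(Arrays.asList(0, 0, 0, 0));
        // matches copy(List<? super T> dest, List<? extends T> src) with T = Integer
        Collections.copy(dest, src);
        System.out.println(dest); // [1, 2, 3, 0]
    }
}
```

The compiler infers T = Integer: the source produces Integers (extends) and the destination consumes them (super), so copying into a List<Number> is type safe; copy fills the first src.size() positions and leaves the rest of dest untouched.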

Really Understanding Javascript Closures

This post will explain in a simple way how Javascript Closures work. We will go over these topics and frequently asked questions:What is a Javascript Closure What is the reason behind the name ‘Closure’ Actually viewing closures in a debugger How to reason about closures while coding The most common pitfalls of their use    A Simple Example (bug included) The simplest way to understand closures is by realizing what problem they are trying to solve. Let’s take a simple code example with a counter being incremented 3 times inside a loop. But inside the loop, something asynchronous is done with the counter. It could be that a server call was made; in this case let’s simply call setTimeout, which will defer its execution until a timeout occurs: // define a function that increments a counter in a loop function closureExample() {var i = 0;for (i = 0; i< 3 ;i++) { setTimeout(function() { console.log('counter value is ' + i); }, 1000); }} // call the example function closureExample(); Some things to bear in mind:the variable i exists in the scope of the closureExample function and is not accessible externally while looping through the variable, the console.log statement is not immediately executed console.log will be executed asynchronously 3 times, and only after each timeout of 1 second elapses This means that 3 timeouts are set, and then closureExample returns almost immediatelyWhich leads us to the main question about this code: When the anonymous logging function gets executed, how can it have access to the variable ‘i’? The question comes bearing in mind that:the variable i was not passed as an argument when the console.log statement gets executed, the closureExample function has long ended.So what is a Closure then? When the logging function is passed to the setTimeout method, the Javascript engine detects that for the function to be executed in the future, a reference will be needed to the variable i.
To solve this, the engine keeps a link to this variable for later use, and stores that link in a special function-scoped execution context. Such a function with ‘memory’ about the environment where it was created is simply known as: a Closure. Why the name Closure then? This is because the function inspects its environment and closes over the variables that it needs to remember for future use. The references to the variables are closed in a special data structure that can only be accessed by the Javascript runtime itself. Is there any way to see the Closure? The simplest way is to use the Chrome Developer Tools debugger, and set a breakpoint in line 7 of the code snippet above. When the first timeout gets hit, the closure will show up in the Scope Variables panel of the debugger:As we can see, the closure is just a simple data structure with links to the variables that the function needs to ‘remember’, in this case the i variable. But then, where is the pitfall? We could expect that the execution log would show: counter value is 0 counter value is 1 counter value is 2 But the real execution log is actually: counter value is 3 counter value is 3 counter value is 3 This is not a bug, it’s the way closures work. The logging function is a closure (or has a closure, as the term is used in both ways) containing a reference to the i variable. This is a reference, and not a copy, so what happens is:the loop finishes and the i variable value is 3 only later will the first timeout expire, and the logging function will log the value 3 the second timeout expires, and the logging function still logs 3, etc.How to have a different counter value per async operation? This can be done, for example, by creating a separate function to trigger the async operation.
The following snippet would give the expected result: function asyncOperation(counter) { setTimeout(function() { console.log('counter value is ' + counter); }, 1000); }function otherClosureExample() { var i = 0;for (i = 0; i < 3 ;i++) { asyncOperation(i); } }otherClosureExample(); This works because when calling asyncOperation a copy is made of the counter value, and the logging function will ‘close over’ that copied value. This means each invocation of the logging function will see a different variable with the values 0, 1, 2. Conclusion Javascript closures are a powerful feature that is mostly transparent in the day-to-day use of the language. They can be a convenient way to reduce the number of parameters passed to a function. But above all, the fact that the closed-over variables are inaccessible from outside the function makes closures a good way to achieve ‘private’ variables and encapsulation in Javascript. Mostly the feature ‘just works’, and Javascript functions transparently remember any variables needed for future execution in a convenient way. But beware of the pitfall: closures keep references and not copies (even of primitive values), so make sure that that is really the intended logic.Reference: Really Understanding Javascript Closures from our JCG partner Aleksey Novik at the The JHades Blog blog....
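For Java readers, the per-copy fix above has a direct analogue — the following is my own side-by-side sketch, not from the article: Java lambdas also close over local variables, but the compiler forces captured locals to be effectively final, which rules out the loop pitfall at compile time and pushes you toward the one-copy-per-iteration pattern.

```java
import java.util.ArrayList;
import java.util.List;

public class CaptureDemo {
    public static void main(String[] args) {
        List<Runnable> tasks = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            final int counter = i; // a fresh copy per iteration, like the asyncOperation parameter
            tasks.add(() -> System.out.println("counter value is " + counter));
            // capturing the mutable loop variable i directly would not compile:
            // "local variables referenced from a lambda expression must be final or effectively final"
        }
        tasks.forEach(Runnable::run); // prints 0, 1, 2 — one value per captured copy
    }
}
```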

Performance Tuning of Spring/Hibernate Applications

For most typical Spring/Hibernate enterprise applications, the application performance depends almost entirely on the performance of its persistence layer. This post will go over how to confirm that we are in the presence of a ‘database-bound’ application, and then walk through 7 frequently used ‘quick-win’ tips that can help improve application performance. How to confirm that an application is ‘database-bound’ To confirm that an application is ‘database-bound’, start by doing a typical run in some development environment, using VisualVM for monitoring. VisualVM is a Java profiler shipped with the JDK and launchable via the command line by calling jvisualvm. After launching VisualVM, try the following steps:double click on your running application Select Sampler click on Settings checkbox Choose Profile only packages, and type in the following packages:your.application.packages.* org.hibernate.* org.springframework.* your.database.driver.package, for example oracle.* Click Sample CPUThe CPU profiling of a typical ‘database-bound’ application should look something like this:We can see that the client Java process spends 56% of its time waiting for the database to return results over the network. This is a good sign that the queries on the database are what’s keeping the application slow. The 32.7% in Hibernate reflection calls is normal and nothing much can be done about it. First step for tuning – obtaining a baseline run The first step in tuning is to define a baseline run for the program. We need to identify a set of functionally valid input data that makes the program go through a typical execution, similar to the production run. The main difference is that the baseline run should complete in a much shorter period of time; as a guideline, an execution time of around 5 to 10 minutes is a good target. What makes a good baseline?
A good baseline should have the following characteristics:it’s functionally correct the input data is similar to production in its variety it completes in a short amount of time optimizations in the baseline run can be extrapolated to a full runGetting a good baseline solves half of the problem. What makes a bad baseline? For example, in a batch run for processing call data records in a telecommunications system, taking the first 10 000 records could be the wrong approach. The reason being, the first 10 000 might be mostly voice calls, while the unknown performance problem is in the processing of SMS traffic. Taking the first records of a large run would lead us to a bad baseline, from which wrong conclusions would be drawn. Collecting SQL logs and query timings The SQL queries executed, with their execution times, can be collected using for example log4jdbc. See this blog post for how to collect SQL queries using log4jdbc – Spring/Hibernate improved SQL logging with log4jdbc. The query execution time is measured from the Java client side, and it includes the network round-trip to the database. The SQL query logs look like this: 16 avr. 2014 11:13:48 | SQL_QUERY /* insert your.package.YourEntity */ insert into YOUR_TABLE (...) values (...) {executed in 13 msec} The prepared statements themselves are also a good source of information – they make it easy to identify frequent query types. They can be logged by following this blog post – Why and where is Hibernate doing this SQL query? What metrics can be extracted from SQL logs The SQL logs can answer these questions:What are the slowest queries being executed? What are the most frequent queries? What is the amount of time spent generating primary keys? Is there some data that could benefit from caching?How to parse the SQL logs Probably the only viable option for large log volumes is to use command line tools. This approach has the advantage of being very flexible.
At the expense of writing a small script or command, we can extract almost any metric needed. Any command line tool will work as long as you are comfortable with it. If you are used to the Unix command line, bash might be a good option. Bash can also be used on Windows workstations, using for example Cygwin, or Git, which includes a bash command line. Frequently applied Quick-Wins The quick-wins below identify common performance problems in Spring/Hibernate applications, and their corresponding solutions. Quick-win Tip 1 – Reduce primary key generation overhead In processes that are ‘insert-intensive’, the choice of a primary key generation strategy can matter a lot. One common way to generate ids is to use database sequences, usually one per table to avoid contention between inserts on different tables. The problem is that if 50 records are inserted, we want to avoid 50 network round-trips being made to the database in order to obtain 50 ids, leaving the Java process hanging most of the time. How does Hibernate usually handle this? Hibernate provides new optimized ID generators that avoid this problem. Namely for sequences, a HiLo id generator is used by default. This is how the HiLo sequence generator works:call a sequence once and get 1000 (the High value) calculate 50 ids like this:1000 * 50 + 0 = 50000 1000 * 50 + 1 = 50001 … 1000 * 50 + 49 = 50049, Low value (50) reached call sequence for new High value 1001 … etc …So from a single sequence call, 50 keys were generated, reducing the overhead caused by numerous network round-trips. These new optimized key generators are on by default in Hibernate 4, and can even be turned off if needed by setting hibernate.id.new_generator_mappings to false. Why can primary key generation still be a problem? The problem is, if you declared the key generation strategy as AUTO, the optimized generators are still off, and your application will end up with a huge amount of sequence calls.
In order to make sure the new optimized generators are on, make sure to use the SEQUENCE strategy instead of AUTO: @Id @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "your_key_generator") private Long id; With this simple change, an improvement in the range of 10%-20% can be measured in ‘insert-intensive’ applications, with basically no code changes. Quick-win Tip 2 – Use JDBC batch inserts/updates For batch programs, JDBC drivers usually provide an optimization for reducing network round-trips named ‘JDBC batch inserts/updates’. When these are used, inserts/updates are queued at the driver level before being sent to the database. When a threshold is reached, the whole batch of queued statements is sent to the database in one go. This prevents the driver from sending the statements one by one, which would waste multiple network round-trips. This is the entity manager factory configuration needed to activate batch inserts/updates: <prop key="hibernate.jdbc.batch_size">100</prop> <prop key="hibernate.order_inserts">true</prop> <prop key="hibernate.order_updates">true</prop> Setting only the JDBC batch size won’t work. This is because the JDBC driver will batch the inserts only when receiving inserts/updates for the exact same table. If an insert to a new table is received, the JDBC driver will first flush the batched statements on the previous table, before starting to batch statements on the new table. A similar functionality is implicitly used when using Spring Batch. This optimization can easily buy you a 30% to 40% improvement in ‘insert-intensive’ programs, without changing a single line of code. Quick-win Tip 3 – Periodically flush and clear the Hibernate session When adding/modifying data in the database, Hibernate keeps in the session a version of the entities already persisted, just in case they are modified again before the session is closed. But many times we can safely discard entities once the corresponding inserts are done in the database.
This releases memory in the Java client process, preventing performance problems caused by long-running Hibernate sessions. Such long-running sessions should be avoided as much as possible, but if for some reason they are needed, this is how to contain memory consumption: entityManager.flush(); entityManager.clear(); The flush will trigger the inserts from new entities to be sent to the database. The clear releases the new entities from the session. Quick-win Tip 4 – Reduce Hibernate dirty-checking overhead Hibernate internally uses a mechanism to keep track of modified entities called dirty-checking. This mechanism is not based on the equals and hashcode methods of the entity classes. Hibernate does its utmost to keep the performance cost of dirty-checking to a minimum, and to dirty-check only when it needs to, but the mechanism does have a cost, which is more noticeable in tables with a large number of columns. Before applying any optimization, the most important thing is to measure the cost of dirty-checking using VisualVM. How to avoid dirty-checking? In Spring business methods that we know are read-only, dirty-checking can be turned off like this: @Transactional(readOnly=true) public void someBusinessMethod() { .... } An alternative to avoid dirty-checking is to use the Hibernate Stateless Session, which is detailed in the documentation. Quick-win Tip 5 – Search for ‘bad’ query plans Check the queries in the slowest queries list to see if they have good query plans. The most usual ‘bad’ query plans are:Full table scans: these happen when the table is being fully scanned, usually due to a missing index or outdated table statistics. Full cartesian joins: this means that the full cartesian product of several tables is being computed.
Check for missing join conditions, or see if this can be avoided by splitting a step into several.Quick-win Tip 6 – Check for wrong commit intervals If you are doing batch processing, the commit interval can make a large difference in the performance results, as in 10 to 100 times faster. Confirm that the commit interval is the one expected (usually around 100-1000 for Spring Batch jobs). It often happens that this parameter is not correctly configured. Quick-win Tip 7 – Use the second-level and query caches If some data is identified as being eligible for caching, then have a look at this blog post for how to set up the Hibernate caching: Pitfalls of the Hibernate Second-Level / Query Caches Conclusions To solve application performance problems, the most important action to take is to collect some metrics that allow us to find what the current bottleneck is. Without some metrics, it is often not possible to guess in useful time what the correct problem cause is. Also, many but not all of the typical performance pitfalls of a ‘database-driven’ application can be avoided in the first place by using the Spring Batch framework.Reference: Performance Tuning of Spring/Hibernate Applications from our JCG partner Aleksey Novik at the The JHades Blog blog....
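The hi/lo arithmetic from Tip 1 can be illustrated with a small self-contained sketch. To be clear, this mimics the allocation scheme described above rather than Hibernate's actual implementation, and the increment of 50 and starting sequence value of 1000 are just the article's example numbers:

```java
public class HiLoSketch {
    private static final int INCREMENT = 50;
    private long sequence = 1000; // stands in for the database sequence (one round-trip per call)
    private long hi = -1;
    private int lo = INCREMENT;   // forces a sequence call on first use
    int sequenceCalls = 0;        // counts the simulated round-trips

    private long nextSequenceValue() {
        sequenceCalls++;
        return sequence++;
    }

    public long nextId() {
        if (lo >= INCREMENT) {    // current block of 50 ids exhausted
            hi = nextSequenceValue();
            lo = 0;
        }
        return hi * INCREMENT + lo++;
    }

    public static void main(String[] args) {
        HiLoSketch gen = new HiLoSketch();
        for (int i = 0; i < 51; i++) {
            gen.nextId();         // ids 50000 .. 50049, then 50050
        }
        // 51 ids were handed out with only 2 simulated database round-trips
        System.out.println(gen.sequenceCalls); // 2
    }
}
```

Each sequence call buys a whole block of 50 keys, which is exactly why the Java process stops spending most of its insert time waiting on the network.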

SQL Developer’s “Securely” Encrypted Passwords

Recently, while at one of our customers’ site, the customer and I needed to get access to a database. On my machine, I had stored the password, but the customer obviously didn’t want to rely on my machine, and the password itself is hashed, so we couldn’t guess it. But guess what? Yes we can! I googled a bit, and incredibly, I found instructions to write the following little utility programme, which I’m licensing to you under the terms of the ASL 2.0: DISCLAIMER: This program is BY NO MEANS intended for you to do any harm. You could have found this information anywhere else on the web. Please use this ONLY to recover your own “lost” passwords. Like I did.   Note also, this only works with SQL Developer versions less than 4. import java.io.File; import java.security.GeneralSecurityException;import javax.crypto.Cipher; import javax.crypto.spec.IvParameterSpec; import javax.crypto.spec.SecretKeySpec; import javax.xml.parsers.DocumentBuilder; import javax.xml.parsers.DocumentBuilderFactory; import javax.xml.xpath.XPath; import javax.xml.xpath.XPathConstants; import javax.xml.xpath.XPathExpression; import javax.xml.xpath.XPathFactory;import org.w3c.dom.Document; import org.w3c.dom.Element; import org.w3c.dom.NodeList;public class SQLDeveloperDecrypter { public static void main(String[] args) throws Exception { if (args.length == 0) { System.err.println(" Usage 1: " + SQLDeveloperDecrypter.class.getName() + " 0501F83890..... (a single encrypted password)"); System.err.println(" Usage 2: " + SQLDeveloperDecrypter.class.getName() + " C:\\Users\\...... (the path to the connections.xml file)"); System.err.println(); System.err.println(" Pass the password hash code from your connections.xml file. 
The file might be located at (example)"); System.err.println(" C:\\Users\\[User]\\AppData\\Roaming\\SQL Developer\\system2.\\o.jdeveloper.db.connection.");System.exit(-1); }if (args[0].startsWith("05")) { System.out.println(decryptPassword(args[0])); } else { File file = new File(args[0]); if (file.isDirectory()) file = new File(file, "connections.xml");DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance(); DocumentBuilder builder = factory.newDocumentBuilder(); Document doc = builder.parse(file.toURI().toString());// The relevant structure is: // // <Reference name="connection name"> // <RefAddresses> // <StringRefAddr addrType="password"> // <Contents>057D3DE2...XPathFactory xPathfactory = XPathFactory.newInstance(); XPath xpath = xPathfactory.newXPath(); XPathExpression expr = xpath.compile("//StringRefAddr[@addrType='password']/Contents");NodeList nodes = (NodeList) expr.evaluate(doc, XPathConstants.NODESET); for (int i = 0; i < nodes.getLength(); i++) { Element e = (Element) nodes.item(i);System.out.println("Connection name : " + ((Element) e.getParentNode().getParentNode().getParentNode()).getAttribute("name") );System.out.println("Password (encrypted): " + e.getTextContent() );System.out.println("Password (decrypted): " + decryptPassword(e.getTextContent()) );System.out.println(); } } }// From: http://stackoverflow.com/a/140861 public static byte[] hexStringToByteArray(String s) { int len = s.length(); byte[] data = new byte[len / 2]; for (int i = 0; i < len; i += 2) { data[i / 2] = (byte) ((Character.digit(s.charAt(i), 16) << 4) + Character.digit(s.charAt(i+1), 16)); } return data; }// From: http://stackoverflow.com/a/3109774 public static String decryptPassword(String result) throws GeneralSecurityException { return new String(decryptPassword(hexStringToByteArray(result))); }public static byte[] decryptPassword(byte[] result) throws GeneralSecurityException { byte constant = result[0]; if (constant != 5) { throw new 
IllegalArgumentException(); }byte[] secretKey = new byte[8]; System.arraycopy(result, 1, secretKey, 0, 8);byte[] encryptedPassword = new byte[result.length - 9]; System.arraycopy(result, 9, encryptedPassword, 0, encryptedPassword.length);byte[] iv = new byte[8]; for (int i = 0; i < iv.length; i++) { iv[i] = 0; }Cipher cipher = Cipher.getInstance("DES/CBC/PKCS5Padding"); cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(secretKey, "DES"), new IvParameterSpec(iv)); return cipher.doFinal(encryptedPassword); } } Parts of the source code were borrowed, from here and here. In other words, virtually any hacker could’ve come up with the above programme. And the output? This: Connection name : SAKILA Password (encrypted): 0517CB1A41E3C2CC3A3163234A6A8E92F8 Password (decrypted): SAKILAConnection name : TEST Password (encrypted): 05B03F45511F83F6CD4D322C9E173B5A94 Password (decrypted): TEST Wonderful! All the passwords on my machine are now recovered in constant time (no brute force). Does this make you think? I hope that your DBA doesn’t store their passwords in SQL Developer. On a laptop. Which they forget in the train. With access to your customers’ credit card information. In the meantime, though, I’m glad I could recover the “lost” password for my client!Reference: SQL Developer’s “Securely” Encrypted Passwords from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....
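To see why these passwords are recoverable in constant time, here is a small round-trip sketch of the blob format the decrypter above parses — a constant marker byte, the 8-byte DES key itself, then the ciphertext, using CBC with an all-zero IV as in the decrypter. This is my own illustration of the format, not SQL Developer's code; the point is that the key ships inside the blob, right next to the data it protects:

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class EmbeddedKeyDemo {

    // Builds a blob in the shape described above: [0x05][8-byte DES key][ciphertext]
    public static byte[] encrypt(String password) throws Exception {
        byte[] key = new byte[8];
        new SecureRandom().nextBytes(key);
        Cipher c = Cipher.getInstance("DES/CBC/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "DES"), new IvParameterSpec(new byte[8]));
        byte[] ct = c.doFinal(password.getBytes(StandardCharsets.UTF_8));
        byte[] blob = new byte[1 + 8 + ct.length];
        blob[0] = 5;                              // the constant checked by the decrypter
        System.arraycopy(key, 0, blob, 1, 8);     // the key, stored next to the ciphertext
        System.arraycopy(ct, 0, blob, 9, ct.length);
        return blob;
    }

    // Anyone holding the blob can decrypt it — no secret is needed beyond the blob itself
    public static String decrypt(byte[] blob) throws Exception {
        byte[] key = new byte[8];
        System.arraycopy(blob, 1, key, 0, 8);
        byte[] ct = new byte[blob.length - 9];
        System.arraycopy(blob, 9, ct, 0, ct.length);
        Cipher c = Cipher.getInstance("DES/CBC/PKCS5Padding");
        c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "DES"), new IvParameterSpec(new byte[8]));
        return new String(c.doFinal(ct), StandardCharsets.UTF_8);
    }
}
```

This is obfuscation rather than encryption: the scheme only defends against casually reading the XML file, which is why a master-password or OS-keystore approach is the usual fix.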

7 New Tools Java Developers Should Know

                       Get ready to lock and load through this quick overview of some of the newest, most innovative tools around. In case you’ve missed it, RebelLabs recently released the results of a global survey of the Java tools and technologies landscape. Alongside the big names and established tools, the market is bubbling with fresh tools and frameworks that not so many people have heard of (yet). In this post I decided to gather a short list of such tools, most of them launched just recently. Some are Java specific and some support other languages as well, but they’re all great for Java projects and share a vision of simplicity. Let’s roll. JClarity – Performance MonitoringLaunched last September, JClarity is now offering two products around Java performance: Illuminate and Censum. Illuminate is a performance monitoring tool, while Censum is an application focused on garbage collection log analysis. More than just collecting data or visualizing it, both tools provide actionable insights to solve the issues they detect. “What we want to do is to move the problem from data collection to data analysis and insight” – JClarity Co-Founder Ben Evans.Key features:Bottleneck detection (Disk I/O, Garbage Collection, Deadlocks, and more). Action plan – Recommendations to solve the problem, such as “The application needs to increase the number of active threads”. Explanation – Defining the problem in general and the common causes for it, for example “A high percentage of time spent paused in GC may mean that the heap has been under-sized”.What’s unique about it: offers the next step after monitoring and identifying your performance problems – actionable insights to solve issues on the spot. Behind the curtain: London-based JClarity was founded by Martijn Verburg, Kirk Pepperdine and Ben Evans, all well-known Java performance veterans. Read more about how JClarity came to be, right here.  
Bintray – Social Platform for Binaries

Java developers are kept somewhat in the dark when importing libraries from “anonymous” repositories. Bintray adds a face to the code and actually serves as a social platform for developers to share open-source packages (Did someone say GitHub for binaries? Log in with GitHub for the full inception effect to kick in). It has over 85,000 packages in 18,000 repositories, while showcasing popular repositories and new releases.

Key features:
  • Upload your binaries for the world to see, get feedback and interact with other developers.
  • Download libraries with Gradle / Maven / Yum / Apt, or just directly.
  • Manage release notes and documentation.
  • REST API – search / retrieve binaries and automate distribution.

What’s unique about it: Bintray’s basic functionality is similar to Maven Central’s. However, it adds a social layer and offers an easy process for uploading files to a CDN.

Behind the curtain: Bintray is developed by JFrog, based in Israel and California. It was made public in April last year and won a Duke’s Choice Award at the last JavaOne conference. JFrog is also the company behind Artifactory. Which is also hosted on Bintray. Of course.

Librato – Monitoring & Visualization Cloud Services

A hosted service for monitoring and managing cloud applications, Librato can create custom dashboards in seconds without a need to set up or deploy any software. Oh, and it just looks and feels so buttery smooth compared to other dashboards. “Data is only as valuable as the actionable insights you can surface from it”, says Joe Ruscio, Co-Founder & CTO.

Key features:
  • Data collection: integration with Heroku, AWS, tens of collection agents (even Nest) and pure language bindings with Java, Clojure and others.
  • Custom reports: metrics & alerts through email, HipChat, Campfire, and plain HTTP POST requests to integrate with anything you can think of.
  • Data visualization: beautiful graphs with annotations, correlations, sharing and embedding options.
  • Alerts: automatic notifications when metrics cross certain thresholds.

What’s unique about it: It would be hard to find anything that Librato doesn’t know how to talk to and help make sense of its data.

Behind the curtain: Based in San Francisco, Librato was founded by Fred van den Bosch, Joe Ruscio, Mike Heffner and Dan Stodin.

Takipi – Error Tracking and Analysis

Takipi was built with a simple objective in mind: telling developers exactly when and why production code breaks. Whenever a new exception is thrown or a log error occurs, Takipi captures it and shows you the variable state that caused it, across methods and machines. Takipi overlays this on the actual code that executed at the moment of error – so you can analyze the exception as if you were there when it happened.

Key features:
  • Detect – caught/uncaught exceptions, HTTP and logged errors.
  • Prioritize – how often errors happen across your cluster, whether they involve new or modified code, and whether that rate is increasing.
  • Analyze – see the actual code and variable state, even across different machines and applications.

What’s unique about it: God mode in production code. Shows you the exact code and variable state at the moment of error, as if you were there when it happened.

Behind the curtain: Psst, it’s us. Takipi was founded in 2012 and is based in San Francisco and Tel Aviv. Each exception type and error has a unique monster that represents it.

Elasticsearch – Search & Analytics Platform

Elasticsearch has been around for a while, but Elasticsearch 1.0.0 was released just recently, in February. It’s an open-source project built on top of Apache Lucene and hosted on GitHub with over 200 contributors. You can check out the code right here. The main promise Elasticsearch makes is an easy-to-use, scalable, distributed RESTful search.

Key features:
  • Near real-time document store where each field is indexed and searchable.
  • Distributed search with an architecture built to scale from small to large applications.
  • A RESTful API and a native Java API, among others. It also has a library for Hadoop.
  • Works out of the box and doesn’t necessarily require a deep understanding of search; it can also be schema-free, so you can start really fast.

What’s unique about it: Like it says on the tin, it’s elastic. Built with flexibility and ease of use in mind, it provides an easy place to start and to scale without compromising on hardcore features and customization options.

Behind the curtain: Elasticsearch was founded by Shay Banon back in 2010 and just recently raised $70M in funding. Before founding it, Banon ran the Compass open-source project and is now a renowned search expert. His motivation to get into search? An application he built for his wife to store and retrieve her favorite recipes.

Spark – Micro Web Framework

Back to pure Java: Spark is a Sinatra-inspired micro web framework for quickly creating web applications. It was rewritten last month to support Java 8 and lambdas; Spark is open-source and its code is available on GitHub right here. It has been developed by Per Wendel and a small number of contributors over the last few years, on a mission to support rapid creation of web applications with minimal effort.

Key features:
  • Quick and simple setup for your first deployment.
  • Intuitive route matcher.
  • A template engine for creating reusable components that supports Freemarker, Apache Velocity and Mustache.
  • Standalone Spark runs on Jetty, but it can also run on Tomcat.

What’s unique about it: A picture is worth a thousand words, but a screenshot would be more straightforward. Check it out.

Behind the curtain: Per Wendel is the Sweden-based founder of Spark, working on it with over 20 contributors. Check out the discussion group and learn more about Spark, how you can contribute and how to solve issues.
Plumbr – Memory Leak Detection

Going deeper into the JVM, the Garbage Collector scans for objects that are no longer being used. However, sometimes developers will still hold references to objects in memory they no longer use. This is where memory leaks happen, and where Plumbr comes in. It detects and reports whether the application has memory leak issues and provides actionable information to fix them.

Key features:
  • Live memory leak detection and alerts.
  • A report with the time, size, velocity (MB/h) and significance of the leak.
  • The location of the memory leak in your code.

What’s unique about it: Quick and to the point, gathering insights from your code and telling you what you need to fix.

Behind the curtain: Based in Estonia, Plumbr was founded by Priit Potter, Ivo Mägi, Nikita Salnikov-Tarnovski and Vladimir Šor. Joining forces in a seasoned Java team, mostly known as “the guys who help projects that are stuck”. Makes sense.

Did we miss any other cool tools? What’s the best new tool you use? Please let us know.

Reference: 7 New Tools Java Developers Should Know from our JCG partner Alex Zhitnitsky at the Takipi blog....
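The kind of leak Plumbr hunts for can be sketched in a few lines. This is a hypothetical illustration (the class and method names are made up, not Plumbr code): a static collection that is only ever added to keeps every buffer reachable, so the GC can never reclaim them.

```java
import java.util.ArrayList;
import java.util.List;

public class LeakSketch {

    // A classic leak: a static collection that only grows, so every
    // buffer it references stays reachable and is never garbage collected.
    static final List<byte[]> CACHE = new ArrayList<>();

    static int handleRequest() {
        CACHE.add(new byte[1024]); // reference retained forever
        return CACHE.size();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) {
            handleRequest();
        }
        // Heap usage grows with every call; a leak detector would flag CACHE
        System.out.println("Retained buffers: " + CACHE.size());
    }
}
```

Each "request" permanently grows the heap, which is exactly the pattern a leak report (time, size, velocity, location) would point at.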

Listing and filtering directory contents in NIO.2

There hasn’t been much happening in the area of listing directory contents until the release of Java 7. But since NIO.2 introduced a new way to do this, it might be worth covering this area. One of the big pluses of NIO.2 is the ability to use listing and filtering at once, in one method call. This provides an elegant solution to most of the listing/filtering needs related to working with a file system.

Listing root directories

Unless we are working with relative paths, we need to be aware of the environment where our application lives, so we can define absolute paths. Since file systems are usually hierarchical structures, there is at least one root directory. To properly address files and directories we need to be able to list all these root directories. To do this, we turn to the FileSystem instance itself and use its method getRootDirectories, which is an alternative to the Java 6 construct File.listRoots().

Iterable<Path> it = FileSystems.getDefault().getRootDirectories();

System.out.println("Root file system locations: " + Sets.newHashSet(it));

*Please note that the class Sets is not part of the JDK, but comes from Google’s Guava library. I used it here just for convenience, to get a nicely formatted string representation of the root directories. With the following output:

Root file system locations: C:\, D:\, E:\, F:\, G:\, H:\, I:\

Listing and filtering directory contents

A standard task when working with a file system is to list or filter the files within a given directory. We might need to modify, analyze or simply list them – whatever the reason, the class java.nio.file.Files has our backs. It offers three variants of the method newDirectoryStream that return an object of type DirectoryStream<Path>, allowing us to iterate over the entries of a directory. Here we see an apparent difference from prior versions of the IO library, which returned simple arrays (possibly null); the new API prevents a NullPointerException.
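As a minimal, portable illustration of newDirectoryStream, the following sketch lists a directory's entries. The class name ListDirectory and the temporary-directory fixture in demo() are made up so the example runs on any machine, not just against the JDK sources used below.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ListDirectory {

    // Lists the file names in a directory via NIO.2's newDirectoryStream
    static List<String> list(Path dir) {
        List<String> names = new ArrayList<>();
        if (Files.isDirectory(dir)) {                      // guards against NotDirectoryException
            try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
                for (Path path : stream) {
                    names.add(path.getFileName().toString());
                }
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }
        Collections.sort(names);                           // DirectoryStream order is unspecified
        return names;
    }

    // Builds a throwaway fixture so the example runs anywhere
    static List<String> demo() {
        try {
            Path dir = Files.createTempDirectory("nio2-demo");
            Files.createFile(dir.resolve("Files.java"));
            Files.createFile(dir.resolve("FileSystem.java"));
            return list(dir);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // [FileSystem.java, Files.java]
    }
}
```

Note that the names are sorted explicitly, because the iteration order of DirectoryStream is not guaranteed.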
The following example shows how simple it is to list the contents of a given directory:

Path directoryPath = Paths.get("C:", "Program Files/Java/jdk1.7.0_40/src/java/nio/file");

if (Files.isDirectory(directoryPath)) {
    try (DirectoryStream<Path> stream = Files.newDirectoryStream(directoryPath)) {
        for (Path path : stream) {
            System.out.println(path);
        }
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}

Please notice the use of the isDirectory checking method, which prevents a NotDirectoryException. Also note the use of the try-with-resources construct – DirectoryStream is both AutoCloseable and Closeable (meaning it needs to be closed at some point), so try-with-resources comes in handy. The code returns the following output:

...
C:\Program Files\Java\jdk1.7.0_40\src\java\nio\file\CopyOption.java
C:\Program Files\Java\jdk1.7.0_40\src\java\nio\file\DirectoryIteratorException.java
C:\Program Files\Java\jdk1.7.0_40\src\java\nio\file\DirectoryNotEmptyException.java
C:\Program Files\Java\jdk1.7.0_40\src\java\nio\file\DirectoryStream.java
C:\Program Files\Java\jdk1.7.0_40\src\java\nio\file\FileAlreadyExistsException.java
C:\Program Files\Java\jdk1.7.0_40\src\java\nio\file\Files.java
C:\Program Files\Java\jdk1.7.0_40\src\java\nio\file\FileStore.java
C:\Program Files\Java\jdk1.7.0_40\src\java\nio\file\FileSystem.java
C:\Program Files\Java\jdk1.7.0_40\src\java\nio\file\FileSystemAlreadyExistsException.java
...

To ensure universal usability of DirectoryStream<Path>, we can filter using two basic mechanisms:
  • newDirectoryStream(Path dir, String glob) – filtering using a GLOB pattern
  • newDirectoryStream(Path dir, DirectoryStream.Filter<? super Path> filter) – filtering using DirectoryStream.Filter

Filtering with a GLOB pattern

First of all we need to know what a GLOB is. GLOB patterns are string expressions that follow specific syntax rules and are used for matching purposes. Please refer to the following article for more information on GLOBs and GLOB syntax.
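Before moving to the author's JDK-sources example, a GLOB filter can be exercised in a portable way. The class name GlobListing and the temp-directory fixture below are made up for illustration; the pattern "File*Exception*" is the same one used in the next example.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class GlobListing {

    // Lists only the entries whose names match the given GLOB pattern
    static List<String> listMatching(Path dir, String glob) {
        List<String> names = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir, glob)) {
            for (Path path : stream) {
                names.add(path.getFileName().toString());
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        Collections.sort(names);
        return names;
    }

    // Throwaway fixture: two names match the glob, one does not
    static List<String> demo() {
        try {
            Path dir = Files.createTempDirectory("glob-demo");
            Files.createFile(dir.resolve("FileAlreadyExistsException.java"));
            Files.createFile(dir.resolve("FileSystemException.java"));
            Files.createFile(dir.resolve("Files.java"));        // filtered out by the glob
            return listMatching(dir, "File*Exception*");
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Here the glob is applied by the file system provider itself, so no manual name matching is needed.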
When it comes to filtering using GLOBs, the Files class provides us with an easy way to do so. Let’s take a look at the following example.

Path directoryPath = Paths.get("C:", "Program Files/Java/jdk1.7.0_40/src/java/nio/file");

if (Files.isDirectory(directoryPath)) {
    try (DirectoryStream<Path> stream = Files.newDirectoryStream(directoryPath, "File*Exception*")) {
        for (Path path : stream) {
            System.out.println(path);
        }
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}

With the following output:

C:\Program Files\Java\jdk1.7.0_40\src\java\nio\file\FileAlreadyExistsException.java
C:\Program Files\Java\jdk1.7.0_40\src\java\nio\file\FileSystemAlreadyExistsException.java
C:\Program Files\Java\jdk1.7.0_40\src\java\nio\file\FileSystemException.java
C:\Program Files\Java\jdk1.7.0_40\src\java\nio\file\FileSystemLoopException.java
C:\Program Files\Java\jdk1.7.0_40\src\java\nio\file\FileSystemNotFoundException.java

Filtering with DirectoryStream.Filter

When the task at hand requires more complex filtering options than simple file name matching, we need to implement the interface DirectoryStream.Filter<Path>. This is the most powerful filtering option at our disposal, since we have access to the rest of the application and might use third-party libraries.
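Since DirectoryStream.Filter gives us full programmatic control, here is a minimal, deterministic sketch (class name FilteredListing and the temp-directory fixture are invented for illustration). It keeps only the article's even-file-size condition and drops the time-based one, so that the result is reproducible:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class FilteredListing {

    // Accepts only entries whose size in bytes is an even number
    static final DirectoryStream.Filter<Path> EVEN_SIZE = new DirectoryStream.Filter<Path>() {
        @Override
        public boolean accept(Path entry) throws IOException {
            return Files.size(entry) % 2 == 0;
        }
    };

    static List<String> listFiltered(Path dir, DirectoryStream.Filter<Path> filter) {
        List<String> names = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir, filter)) {
            for (Path path : stream) {
                names.add(path.getFileName().toString());
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        Collections.sort(names);
        return names;
    }

    // Throwaway fixture: a 4-byte file passes the filter, a 3-byte file does not
    static List<String> demo() {
        try {
            Path dir = Files.createTempDirectory("filter-demo");
            Files.write(dir.resolve("even.txt"), new byte[4]);
            Files.write(dir.resolve("odd.txt"), new byte[3]);
            return listFiltered(dir, EVEN_SIZE);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // [even.txt]
    }
}
```

The accept method may throw IOException, which newDirectoryStream surfaces during iteration – another reason to keep the stream inside try-with-resources.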
The following example shows such a situation with two filtering conditions:
  • The file size must be an even number
  • The time of execution in milliseconds must be an even number

Path directoryPath = Paths.get("C:", "Program Files/Java/jdk1.7.0_40/src/java/nio/file");

DirectoryStream.Filter<Path> filter = new Filter<Path>() {

    @Override
    public boolean accept(Path entry) throws IOException {
        long size = Files.readAttributes(entry, BasicFileAttributes.class, LinkOption.NOFOLLOW_LINKS).size();
        long millis = new Date().getTime();

        boolean isSizeEvenNumber = size % 2 == 0;
        boolean isTheTimeRight = millis % 2 == 0;

        return isTheTimeRight && isSizeEvenNumber;
    }
};

if (Files.isDirectory(directoryPath)) {
    try (DirectoryStream<Path> stream = Files.newDirectoryStream(directoryPath, filter)) {
        for (Path path : stream) {
            System.out.println(path);
        }
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}

With the following output:

C:\Program Files\Java\jdk1.7.0_40\src\java\nio\file\DirectoryStream.java
C:\Program Files\Java\jdk1.7.0_40\src\java\nio\file\FileAlreadyExistsException.java
C:\Program Files\Java\jdk1.7.0_40\src\java\nio\file\Files.java
C:\Program Files\Java\jdk1.7.0_40\src\java\nio\file\NotDirectoryException.java
C:\Program Files\Java\jdk1.7.0_40\src\java\nio\file\NotLinkException.java
C:\Program Files\Java\jdk1.7.0_40\src\java\nio\file\package-info.java
C:\Program Files\Java\jdk1.7.0_40\src\java\nio\file\WatchEvent.java
C:\Program Files\Java\jdk1.7.0_40\src\java\nio\file\WatchService.java

*Please note that, because of the time-based condition, the filtered files may differ per execution.

Reference: Listing and filtering directory contents in NIO.2 from our JCG partner Jakub Stas at the Jakub Stas blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
