
Getting Started with Gradle: Dependency Management

It is challenging, if not impossible, to create real life applications which don’t have any external dependencies. That is why dependency management is a vital part of every software project. This blog post describes how we can manage the dependencies of our projects with Gradle. We will learn to configure the used repositories and the required dependencies. We will also apply this theory to practice by implementing a simple example application. Let’s get started.   Additional Reading:Getting Started with Gradle: Introduction helps you to install Gradle, describes the basic concepts of a Gradle build, and describes how you can add functionality to your build by using Gradle plugins. Getting Started with Gradle: Our First Java Project describes how you can create a Java project by using Gradle and package your application to an executable jar file.Introduction to Repository Management Repositories are essentially dependency containers, and each project can use zero or more repositories. Gradle supports the following repository formats:Ivy repositories Maven repositories Flat directory repositoriesLet’s find out how we can configure each repository type in our build. Adding Ivy Repositories to Our Build We can add an Ivy repository to our build by using its url address or its location in the local file system. If we want to add an Ivy repository by using its url address, we have to add the following code snippet to the build.gradle file: repositories { ivy { url "http://ivy.petrikainulainen.net/repo" } } If we want to add an Ivy repository by using its location in the file system, we have to add the following code snippet to the build.gradle file: repositories { ivy { url "../ivy-repo" } } If you want to get more information about configuring Ivy repositories, you should check out the following resources:Section 50.6.6 Ivy Repositories of the Gradle User Guide The API documentation of the IvyArtifactRepositoryLet’s move on and find out how we can add Maven repositories to our build. Adding Maven Repositories to Our Build We can add a Maven repository to our build by using its url address or its location in the local file system. If we want to add a Maven repository by using its url, we have to add the following code snippet to the build.gradle file: repositories { maven { url "http://maven.petrikainulainen.net/repo" } } If we want to add a Maven repository by using its location in the file system, we have to add the following code snippet to the build.gradle file: repositories { maven { url "../maven-repo" } } Gradle has three “aliases” which we can use when we are adding Maven repositories to our build. These aliases are:The mavenCentral() alias means that dependencies are fetched from the central Maven 2 repository. The jcenter() alias means that dependencies are fetched from the Bintray’s JCenter Maven repository. The mavenLocal() alias means that dependencies are fetched from the local Maven repository.If we want to add the central Maven 2 repository in our build, we must add the following snippet to our build.gradle file: repositories { mavenCentral() } If you want to get more information about configuring Maven repositories, you should check out section 50.6.4 Maven Repositories of the Gradle User Guide. Let’s move on and find out how we can add flat directory repositories to our build. 
Adding Flat Directory Repositories to Our Build If we want to use flat directory repositories, we have to add the following code snippet to our build.gradle file: repositories { flatDir { dirs 'lib' } } This means that dependencies are searched from the lib directory. Also, if we want to, we can use multiple directories by adding the following snippet to the build.gradle file: repositories { flatDir { dirs 'libA', 'libB' } } If you want get more information about flat directory repositories, you should check out the following resources:Section 50.6.5 Flat directory repository of the Gradle User Guide Flat Dir Repository post to the gradle-user mailing listLet’s move on and find out how we can manage the dependencies of our project with Gradle. Introduction to Dependency Management After we have configured the repositories of our project, we can declare its dependencies. If we want to declare a new dependency, we have to follow these steps:Specify the configuration of the dependency. Declare the required dependency.Let’s take a closer look at these steps. Grouping Dependencies into Configurations In Gradle dependencies are grouped into a named set of dependencies. These groups are called configurations, and we use them to declare the external dependencies of our project. The Java plugin specifies several dependency configurations which are described in the following:The dependencies added to the compile configuration are required when our the source code of our project is compiled. The runtime configuration contains the dependencies which are required at runtime. This configuration contains the dependencies added to the compile configuration. The testCompile configuration contains the dependencies which are required to compile the tests of our project. This configuration contains the compiled classes of our project and the dependencies added to the compile configuration. The testRuntime configuration contains the dependencies which are required when our tests are run. This configurations contains the dependencies added to compile, runtime, and testCompile configurations. The archives configuration contains the artifacts (e.g. Jar files) produced by our project. The default configuration group contains the dependencies which are required at runtime.Let’s move on and find out how we can declare the dependencies of our Gradle project. Declaring the Dependencies of a Project The most common dependencies are called external dependencies which are found from an external repository. An external dependency is identified by using the following attributes:The group attribute identifies the group of the dependency (Maven users know this attribute as groupId). The name attribute identifies the name of the dependency (Maven users know this attribute as artifactId). The version attribute specifies the version of the external dependency (Maven users know this attribute as version).These attributes are required when you use Maven repositories. If you use other repositories, some attributes might be optional. For example, if you use a flat directory repository, you might have to specify only name and version. Let’s assume that we have to declare the following dependency:The group of the dependency is ‘foo’. The name of the dependency is ‘foo’. The version of the dependency is 0.1. 
The dependency is required when our project is compiled.We can declare this dependency by adding the following code snipped to the build.gradle file: dependencies { compile group: 'foo', name: 'foo', version: '0.1' } We can also declare the dependencies of our project by using a shortcut form which follows this syntax: [group]:[name]:[version]. If we want to use the shortcut form, we have to add the following code snippet to the build.gradle file: dependencies { compile 'foo:foo:0.1' } We can also add multiple dependencies to the same configuration. If we want to use the “normal” syntax when we declare our dependencies, we have to add the following code snippet to the build.gradle file: dependencies { compile ( [group: 'foo', name: 'foo', version: '0.1'], [group: 'bar', name: 'bar', version: '0.1'] ) } On the other hand, if we want to use the shortcut form, the relevant part of the build.gradle file looks as follows: dependencies { compile 'foo:foo:0.1', 'bar:bar:0.1' } It is naturally possible to declare dependencies which belong to different configurations. For example, if we want to declare dependencies which belong to the compile and testCompile configurations, we have to add the following code snippet to the build.gradle file: dependencies { compile group: 'foo', name: 'foo', version: '0.1' testCompile group: 'test', name: 'test', version: '0.1' } Again, it is possible to use the shortcut form. If we want to declare the same dependencies by using the shortcut form, the relevant part of the build.gradle file looks as follows: dependencies { compile 'foo:foo:0.1' testCompile 'test:test:0.1' } You can get more information about declaring your dependencies by reading the section 50.4 How to declare your dependencies of Gradle User Guide. We have now learned the basics of dependency management. Let’s move on and implement our example application. Creating the Example Application The requirements of our example application are described in thefollowing:The build script of the example application must use the Maven central repository. The example application must write the received message to log by using Log4j. The example application must contain unit tests which ensure that the correct message is returned. These unit tests must be written by using JUnit. Our build script must create an executable jar file.Let’s find out how we can fulfil these requirements. Configuring the Repositories of Our Build One of the requirements of our example application was that its build script must use the Maven central repository. After we have configured our build script to use the Maven central repository, its source code looks as follows (The relevant part is highlighted): apply plugin: 'java'repositories { mavenCentral() }jar { manifest { attributes 'Main-Class': 'net.petrikainulainen.gradle.HelloWorld' } } Let’s move on and declare the dependencies of our example application. Declaring the Dependencies of Our Example Application We have to declare two dependencies in the build.gradle file:Log4j (version 1.2.17) is used to write the received message to the log. JUnit (version 4.11) is used to write unit tests for our example application.After we have declared these dependencies, the build.gradle file looks as follows (the relevant part is highlighted): apply plugin: 'java'repositories { mavenCentral() }dependencies { compile 'log4j:log4j:1.2.17' testCompile 'junit:junit:4.11' }jar { manifest { attributes 'Main-Class': 'net.petrikainulainen.gradle.HelloWorld' } } Let’s move on and write some code. 
Writing the Code
In order to fulfil the requirements of our example application, “we have to over-engineer it”. We can create the example application by following these steps:

- Create a MessageService class which returns the string ‘Hello World!’ when its getMessage() method is called.
- Create a MessageServiceTest class which ensures that the getMessage() method of the MessageService class returns the string ‘Hello World!’.
- Create the main class of our application which obtains the message from a MessageService object and writes the message to the log by using Log4j.
- Configure Log4j.

Let’s go through these steps one by one.

First, we have to create a MessageService class in the src/main/java/net/petrikainulainen/gradle directory and implement it. After we have done this, its source code looks as follows:

package net.petrikainulainen.gradle;

public class MessageService {

    public String getMessage() {
        return "Hello World!";
    }
}

Second, we have to create a MessageServiceTest class in the src/test/java/net/petrikainulainen/gradle directory and write a unit test for the getMessage() method of the MessageService class. The source code of the MessageServiceTest class looks as follows:

package net.petrikainulainen.gradle;

import org.junit.Before;
import org.junit.Test;

import static org.junit.Assert.assertEquals;

public class MessageServiceTest {

    private MessageService messageService;

    @Before
    public void setUp() {
        messageService = new MessageService();
    }

    @Test
    public void getMessage_ShouldReturnMessage() {
        assertEquals("Hello World!", messageService.getMessage());
    }
}

Third, we have to create a HelloWorld class in the src/main/java/net/petrikainulainen/gradle directory. This class is the main class of our application. It obtains the message from a MessageService object and writes it to a log by using Log4j. The source code of the HelloWorld class looks as follows:

package net.petrikainulainen.gradle;

import org.apache.log4j.Logger;

public class HelloWorld {

    private static final Logger LOGGER = Logger.getLogger(HelloWorld.class);

    public static void main(String[] args) {
        MessageService messageService = new MessageService();

        String message = messageService.getMessage();
        LOGGER.info("Received message: " + message);
    }
}

Fourth, we have to configure Log4j by using the log4j.properties file, which is found in the src/main/resources directory. The log4j.properties file looks as follows:

log4j.appender.Stdout=org.apache.log4j.ConsoleAppender
log4j.appender.Stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.Stdout.layout.conversionPattern=%-5p - %-26.26c{1} - %m\n

log4j.rootLogger=DEBUG,Stdout

That is it. Let’s find out how we can run the tests of our example application.

Running the Unit Tests
We can run our unit tests by using the following command:

gradle test

When our test passes, we see the following output:

> gradle test
:compileJava
:processResources
:classes
:compileTestJava
:processTestResources
:testClasses
:test

BUILD SUCCESSFUL

Total time: 4.678 secs

However, if our unit test were to fail, we would see the following output (the interesting section is highlighted):

> gradle test
:compileJava
:processResources
:classes
:compileTestJava
:processTestResources
:testClasses
:test

net.petrikainulainen.gradle.MessageServiceTest > getMessage_ShouldReturnMessage FAILED
    org.junit.ComparisonFailure at MessageServiceTest.java:22

1 test completed, 1 failed
:test FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':test'.
> There were failing tests.
See the report at: file:///Users/loke/Projects/Java/Blog/gradle-examples/dependency-management/build/reports/tests/index.html

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.

BUILD FAILED

Total time: 4.461 secs

As we can see, if our unit tests fail, the output describes:

- which tests failed.
- how many tests were run and how many tests failed.
- the location of the test report which provides additional information about the failed (and passed) tests.

When we run our unit tests, Gradle creates test reports in the following directories:

- The build/test-results directory contains the raw data of each test run.
- The build/reports/tests directory contains an HTML report which describes the results of our tests.

The HTML test report is a very useful tool because it describes the reason why our test failed. For example, if our unit test expected that the getMessage() method of the MessageService class returns the string ‘Hello Worl1d!’, the HTML test report of that test case would look as follows (shown as an image in the original post).

Let’s move on and find out how we can package and run our example application.

Packaging and Running Our Example Application
We can package our application by using one of these commands: gradle assemble or gradle build. Both of these commands create the dependency-management.jar file in the build/libs directory. When we run our example application by using the command java -jar dependency-management.jar, we see the following output:

> java -jar dependency-management.jar
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/log4j/Logger
        at net.petrikainulainen.gradle.HelloWorld.<clinit>(HelloWorld.java:10)
Caused by: java.lang.ClassNotFoundException: org.apache.log4j.Logger
        at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 1 more

The reason for this exception is that the Log4j dependency isn’t found on the classpath when we run our application. The easiest way to solve this problem is to create a so called “fat” jar file. This means that we package the required dependencies into the created jar file. After we have followed the instructions given in the Gradle cookbook, our build script looks as follows (the relevant part is highlighted):

apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    compile 'log4j:log4j:1.2.17'
    testCompile 'junit:junit:4.11'
}

jar {
    from {
        configurations.compile.collect { it.isDirectory() ? it : zipTree(it) }
    }
    manifest {
        attributes 'Main-Class': 'net.petrikainulainen.gradle.HelloWorld'
    }
}

We can now run the example application (after we have packaged it) and, as we can see, everything is working properly:

> java -jar dependency-management.jar
INFO - HelloWorld - Received message: Hello World!

That is all for today. Let’s summarize what we learned from this blog post.

Summary
This blog post has taught us four things:

- We learned how we can configure the repositories used by our build.
- We learned how we can declare the required dependencies and group these dependencies into configurations.
- We learned that Gradle creates an HTML test report when our tests are run.
- We learned how we can create a so called “fat” jar file.

If you want to play around with the example application of this blog post, you can get it from Github.

Reference: Getting Started with Gradle: Dependency Management from our JCG partner Petri Kainulainen at the Petri Kainulainen blog....

Making operations on volatile fields atomic

Overview
The expected behaviour for volatile fields is that they should behave in a multi-threaded application the same as they do in a single threaded application. They are not forbidden to behave the same way, but they are not guaranteed to behave the same way.

The solution in Java 5.0+ is to use the AtomicXxxx classes; however, these are relatively inefficient in terms of memory (they add a header and padding), performance (they add a reference and give little control over their relative positions), and syntactically they are not as clear to use. IMHO a simple solution is for volatile fields to act as they might be expected to, the way the JVM must already support them in the Atomic classes - behaviour which is not forbidden by the current JMM (Java Memory Model), but not guaranteed either.

Why make fields volatile?
The benefit of volatile fields is that they are visible across threads and some optimisations which avoid re-reading them are disabled, so you always check the current value again even if you didn’t change it.

e.g. without volatile:

Thread 2: int a = 5;
Thread 1: a = 6;
(later)
Thread 2: System.out.println(a); // prints 5 or 6

With volatile:

Thread 2: volatile int a = 5;
Thread 1: a = 6;
(later)
Thread 2: System.out.println(a); // prints 6 given enough time.

Why not use volatile all the time?
Volatile read and write access is substantially slower. When you write to a volatile field it stalls the entire CPU pipeline to ensure the data has been written to cache. Without this, there is a risk the next read of the value sees an old value, even in the same thread (see AtomicLong.lazySet() which avoids stalling the pipeline). The penalty can be in the order of 10x slower, which you don’t want to be paying on every access.

What are the limitations of volatile?
A significant limitation is that operations on the field are not atomic, even when you might think they are. Even worse than that is that usually there is no difference. I.e. it can appear to work for a long time, even years, and suddenly/randomly break due to an incidental change such as the version of Java used, or even where the object is loaded into memory, e.g. which programs you loaded before running the program.

e.g. updating a value:

Thread 2: volatile int a = 5;
Thread 1: a += 1;
Thread 2: a += 2;
(later)
Thread 2: System.out.println(a); // prints 6, 7 or 8 even given enough time.

This is an issue because the read of a and the write of a are done separately and you can get a race condition. 99%+ of the time it will behave as expected, but sometimes it won’t.

What can you do about it?
You need to use the AtomicXxxx classes. These wrap volatile fields with operations which behave as expected.

Thread 2: AtomicInteger a = new AtomicInteger(5);
Thread 1: a.incrementAndGet();
Thread 2: a.addAndGet(2);
(later)
Thread 2: System.out.println(a); // prints 8 given enough time.

What do I propose?
The JVM has a means to behave as expected; the only surprising thing is you need to use a special class to do what the JMM won’t guarantee for you. What I propose is that the JMM be changed to support the behaviour currently provided by the concurrency Atomic classes. In each case the single threaded behaviour is unchanged. A multi-threaded program which does not see a race condition will behave the same. The difference is that a multi-threaded program would no longer have to see a race condition, because the underlying behaviour changes.
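To make the contrast concrete, here is a minimal, self-contained sketch (not from the original post; the class name and the thread/iteration counts are arbitrary) showing the lost updates described above: the volatile counter usually ends up below the expected total, while the AtomicInteger always reaches it.

import java.util.concurrent.atomic.AtomicInteger;

public class VolatileVsAtomicDemo {

    // Incrementing this with ++ is a separate read, modify and write, so updates can be lost.
    static volatile int volatileCounter = 0;

    // AtomicInteger performs the same update as a single atomic operation.
    static final AtomicInteger atomicCounter = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        final int threads = 4;
        final int incrementsPerThread = 1_000_000;

        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < incrementsPerThread; j++) {
                    volatileCounter++;               // not atomic: may lose updates
                    atomicCounter.incrementAndGet(); // atomic: never loses updates
                }
            });
            workers[i].start();
        }
        for (Thread worker : workers) {
            worker.join();
        }

        System.out.println("Expected         : " + threads * incrementsPerThread);
        System.out.println("volatile counter : " + volatileCounter);     // typically lower
        System.out.println("AtomicInteger    : " + atomicCounter.get()); // always the expected value
    }
}

The table below summarises the syntax the proposal would map onto these atomic operations.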
current method            suggested syntax                 notes
x.getAndIncrement()       x++ or x += 1
x.incrementAndGet()       ++x
x.getAndDecrement()       x-- or x -= 1
x.decrementAndGet()       --x
x.addAndGet(y)            (x += y)
x.getAndAdd(y)            ((x += y) - y)
x.compareAndSet(e, y)     (x == e ? x = y, true : false)   Need to add the comma syntax used in other languages.

These operations could be supported for all the primitive types such as boolean, byte, short, int, long, float and double. Additional assignment operators could be supported, such as:

current method            suggested syntax   notes
Atomic multiplication     x *= 2;
Atomic subtraction        x -= y;
Atomic division           x /= y;
Atomic modulus            x %= y;
Atomic shift              x <<= y;
Atomic shift              x >>= z;
Atomic shift              x >>>= w;
Atomic and                x &= ~y;           clears bits
Atomic or                 x |= z;            sets bits
Atomic xor                x ^= w;            flips bits

What is the risk?
This could break code which relies on these operations occasionally failing due to race conditions. It might not be possible to support more complex expressions in a thread safe manner. This could lead to surprising bugs, as the code can look like it works, but it doesn’t. Nevertheless it will be no worse than the current state.

JEP 193 – Enhanced Volatiles
There is a JEP 193 to add this functionality to Java. An example is:

class Usage {
    volatile int count;

    int incrementCount() {
        return count.volatile.incrementAndGet();
    }
}

IMHO there are a few limitations in this approach.

- The syntax is a fairly significant change. Changing the JMM might not require many changes to the Java syntax and possibly no changes to the compiler.
- It is a less general solution. It can be useful to support operations like volume += quantity; where these are double types.
- It places more burden on the developer to understand why he/she should use this instead of x++;

I am not convinced that a more cumbersome syntax makes it clearer as to what is happening. Consider this example:

volatile int a, b;
a += b;

or

a.volatile.addAndGet(b.volatile);

or

AtomicInteger a, b;
a.addAndGet(b.get());

Which of these operations, as a line, is atomic? The answer: none of them; however, systems with Intel TSX can make these atomic, and if you are going to change the behaviour of any of these lines of code I would make it the a += b; rather than invent a new syntax which does the same thing most of the time, but where one is guaranteed and the other is not.

Conclusion
Much of the syntactic and performance overhead of using AtomicInteger and AtomicLong could be removed if the JMM guaranteed the equivalent single threaded operations behaved as expected for multi-threaded code. This feature could be added to earlier versions of Java by using byte code instrumentation.

Reference: Making operations on volatile fields atomic from our JCG partner Peter Lawrey at the Vanilla Java blog....

10 things you can do as a developer to make your app secure: #7 Logging and Intrusion Detection

This is part 7 of a series of posts on the OWASP Top 10 Proactive Development Controls: 10 things you can do as a developer to make your app secure. Logging and Intrusion Detection Logging is a straightforward part of any system. But logging is important for more than troubleshooting and debugging. It is also critical for activity auditing, intrusion detection (telling ops when the system is being hacked) and forensics (figuring out what happened after the system was hacked). You should take all of this into account in your logging strategy.   What to log, what to log… and what not to log Make sure that you are always logging when, who, where and what: timestamps (you will need to take care of syncing timestamps across systems and devices or be prepared to account for differences in time zones, accuracy and resolution), user id, source IP and other address information, and event details. To make correlation and analysis easier, follow a common logging approach throughout the application and across systems where possible. Use an extensible logging framework like SLF4J with Logback or Apache Log4j/Log4j2. Compliance regulations like PCI DSS may dictate what information you need to record and when and how, who gets access to the logs, and how long you need to keep this information. You may also need to prove that audit logs and other security logs are complete and have not been tampered with (using a HMAC for example), and ensure that these logs are always archived. For these reasons, it may be better to separate out operations and debugging logs from transaction audit trails and security event logs. There is data that you must log (complete sequential history of specific events to meet compliance or legal requirements). Data that you must not log (PII or credit card data or opt-out/do-not-track data or intercepted communications). And other data you should not log (authentication information and other personal data). And watch out for Log Forging attacks where bad guys inject delimiters like extra CRLF sequences into text fields which they know will be logged in order to try to cover their tracks, or inject Javascript into data which will trigger an XSS attack when the log entry is displayed in a browser-based log viewer. Like other injection attacks, protect the system by encoding user data before writing it to the log. Review code for correct logging practices and test the logging code to make sure that it works. OWASP’s Logging Cheat Sheet provides more guidelines on how to do logging right, and what to watch out for. AppSensor – Intrusion Detection Another OWASP project, the OWASP AppSensor explains how to build on application logging to implement application-level intrusion detection. AppSensor outlines common detection points in an application, places that you should add checks to alert you that your system is being attacked. For example, if a server-side edit catches bad data that should already have been edited at the client, or catches a change to a non-editable field, then you either have some kind of coding bug or (more likely) somebody has bypassed client-side validation and is attacking your app. Don’t just log this case and return an error: throw an alert or take some kind of action to protect the system from being attacked like disconnecting the session. 
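As a rough illustration only (this sketch is not taken from the OWASP AppSensor project; the class, the field names and the choice of response are assumptions), a detection point for a tampered non-editable field could look something like this, with the user-supplied value encoded before it is written to the log so that injected CRLF sequences cannot forge extra entries:

import java.io.IOException;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class AccountUpdateHandler {

    // Security events go to their own logger so they can be routed to a separate, archived log.
    private static final Logger SECURITY_LOG = LoggerFactory.getLogger("SECURITY");

    public void handle(HttpServletRequest request, HttpServletResponse response) throws IOException {
        // Detection point: this field is rendered read-only in the client, so a changed
        // value means client-side validation was bypassed - a coding bug or an attack.
        String submittedType = request.getParameter("accountType");
        String expectedType = (String) request.getSession().getAttribute("accountType");

        if (expectedType != null && !expectedType.equals(submittedType)) {
            // Log when, who, where and what - encoding the user-controlled value.
            SECURITY_LOG.warn("Non-editable field 'accountType' modified: user={} ip={} value={}",
                    encode(request.getRemoteUser()),
                    request.getRemoteAddr(),
                    encode(submittedType));

            // React, don't just record: drop the session and reject the request.
            request.getSession().invalidate();
            response.sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        // ... normal processing ...
    }

    // Minimal encoding: strip characters that could split a log entry into several lines.
    private static String encode(String value) {
        return value == null ? "" : value.replace('\r', '_').replace('\n', '_');
    }
}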
You could also check for known attack signatures. Nick Galbreath (formerly at Etsy and now at startup Signal Sciences) has done some innovative work on detecting SQL injection and HTML injection attacks by mining logs to find common fingerprints and feeding this back into filters to detect when attacks are in progress and potentially block them. In the next 3 posts, we’ll step back from specific problems, and look at the larger issues of secure architecture, design and requirements.

Reference: 10 things you can do as a developer to make your app secure: #7 Logging and Intrusion Detection from our JCG partner Jim Bird at the Building Real Software blog....

Using @NamedEntityGraph to load JPA entities more selectively in N+1 scenarios

The N+1 problem is a common issue when working with ORM solutions. It happens when you set the fetchType for some @OneToMany relation to lazy, in order to load the child entities only when the Set/List is accessed. Let’s assume we have a Customer entity with two relations: a set of orders and a set of addresses for each customer.             @OneToMany(mappedBy = "customer", cascade = CascadeType.ALL, fetch = FetchType.LAZY) private Set<OrderEntity> orders;@OneToMany(mappedBy = "customer", cascade = CascadeType.ALL, fetch = FetchType.LAZY) private Set<AddressEntity> addresses; To load all customers, we can issue the following JPQL statement and afterwards load all orders for each customer: List<CustomerEntity> resultList = entityManager.createQuery("SELECT c FROM CustomerEntity AS c", CustomerEntity.class).getResultList(); for(CustomerEntity customerEntity : resultList) { Set<OrderEntity> orders = customerEntity.getOrders(); for(OrderEntity orderEntity : orders) { ... } } Hibernate 4.3.5 (as shipped with JBoss AS Wildfly 8.1.0CR2) will generate the following series of SQL statements out of it for only two(!) customers in the database: Hibernate: select customeren0_.id as id1_1_, customeren0_.name as name2_1_, customeren0_.numberOfPurchases as numberOf3_1_ from CustomerEntity customeren0_ Hibernate: select orders0_.CUSTOMER_ID as CUSTOMER4_1_0_, orders0_.id as id1_2_0_, orders0_.id as id1_2_1_, orders0_.campaignId as campaign2_2_1_, orders0_.CUSTOMER_ID as CUSTOMER4_2_1_, orders0_.timestamp as timestam3_2_1_ from OrderEntity orders0_ where orders0_.CUSTOMER_ID=? Hibernate: select orders0_.CUSTOMER_ID as CUSTOMER4_1_0_, orders0_.id as id1_2_0_, orders0_.id as id1_2_1_, orders0_.campaignId as campaign2_2_1_, orders0_.CUSTOMER_ID as CUSTOMER4_2_1_, orders0_.timestamp as timestam3_2_1_ from OrderEntity orders0_ where orders0_.CUSTOMER_ID=? As we can see, the first query selects all customers from the table CustomerEntity. The following two selects fetch then the orders for each customer we have loaded in the first query. When we have 100 customers instead of two, we will get 101 queries. One initial query to load all customers and then for each of the 100 customers an additional query for the orders. That is the reason why this problem is called N+1. A common idiom to solve this problem is to force the ORM to generate an inner join query. In JPQL this can be done by using the JOIN FETCH clause like demonstrated in the following code snippet: entityManager.createQuery("SELECT c FROM CustomerEntity AS c JOIN FETCH c.orders AS o", CustomerEntity.class).getResultList(); As expected the ORM now generates an inner join with the OrderEntity table and therewith only needs one SQL statement to load all data: select customeren0_.id as id1_0_0_, orders1_.id as id1_1_1_, customeren0_.name as name2_0_0_, orders1_.campaignId as campaign2_1_1_, orders1_.CUSTOMER_ID as CUSTOMER4_1_1_, orders1_.timestamp as timestam3_1_1_, orders1_.CUSTOMER_ID as CUSTOMER4_0_0__, orders1_.id as id1_1_0__ from CustomerEntity customeren0_ inner join OrderEntity orders1_ on customeren0_.id=orders1_.CUSTOMER_ID In situations where you know that you will have to load all orders for each customer, the JOIN FETCH clause minimizes the number of SQL statements from N+1 to 1. This comes of course with the drawback that you now transfer for all orders of one customer the customer data again and again (due to the additional customer columns in the query). The JPA specification introduces with version 2.1 so called NamedEntityGraphs. 
This annotation lets you describe the graph a JPQL query should load in more detail than a JOIN FETCH clause can do and therewith is another solution to the N+1 problem. The following example demonstrates a NamedEntityGraph for our customer entity that is supposed to load only the name of the customer and its orders. The orders are described in the subgraph ordersGraph in more detail. Here we see that we only want to load the fields id and campaignId of the order. @NamedEntityGraph( name = "CustomersWithOrderId", attributeNodes = { @NamedAttributeNode(value = "name"), @NamedAttributeNode(value = "orders", subgraph = "ordersGraph") }, subgraphs = { @NamedSubgraph( name = "ordersGraph", attributeNodes = { @NamedAttributeNode(value = "id"), @NamedAttributeNode(value = "campaignId") } ) } ) The NamedEntityGraph is given as a hint to the JPQL query, after it has been loaded via EntityManager using its name: EntityGraph entityGraph = entityManager.getEntityGraph("CustomersWithOrderId"); entityManager.createQuery("SELECT c FROM CustomerEntity AS c", CustomerEntity.class).setHint("javax.persistence.fetchgraph", entityGraph).getResultList(); Hibernate supports the @NamedEntityGraph annotation since version 4.3.0.CR1 and creates the following SQL statement for the JPQL query shown above: Hibernate: select customeren0_.id as id1_1_0_, orders1_.id as id1_2_1_, customeren0_.name as name2_1_0_, customeren0_.numberOfPurchases as numberOf3_1_0_, orders1_.campaignId as campaign2_2_1_, orders1_.CUSTOMER_ID as CUSTOMER4_2_1_, orders1_.timestamp as timestam3_2_1_, orders1_.CUSTOMER_ID as CUSTOMER4_1_0__, orders1_.id as id1_2_0__ from CustomerEntity customeren0_ left outer join OrderEntity orders1_ on customeren0_.id=orders1_.CUSTOMER_ID We see that Hibernate does not issue N+1 queries but that instead the @NamedEntityGraph annotation has forced Hibernate to load the orders per left outer join. This is of course a subtle difference to the FETCH JOIN clause, where Hibernate created an inner join. The left outer join would also load customers for which no order exists in contrast to the FETCH JOIN clause, where we would only load customers that have at least one order. Interestingly is also that Hibernate loads more than the specified attributes for the tables CustomerEntity and OrderEntity. As this conflicts with the specification of @NamedEntityGraph (section 3.7.4) I have created an JIRA issue for that. Conclusion We have seen that with JPA 2.1 we have two solutions for the N+1 problem: We can either use the FETCH JOIN clause to eagerly fetch a @OneToMany relation, which results in an inner join, or we can use @NamedEntityGraph feature that lets us specify which @OneToMany relation to load via left outer join.Reference: Using @NamedEntityGraph to load JPA entities more selectively in N+1 scenarios from our JCG partner Martin Mois at the Martin’s Developer World blog....

Spring 4: CGLIB-based proxy classes with no default constructor

In Spring, if the class of a target object that is to be proxied doesn’t implement any interfaces, then a CGLIB-based proxy will be created. Prior to Spring 4, CGLIB-based proxy classes require a default constructor. And this is not the limitation of CGLIB library, but Spring itself. Fortunately, as of Spring 4 this is no longer an issue. CGLIB-based proxy classes no longer require a default constructor. How can this impact your code? Let’s see. One of the idioms of dependency injection is constructor injection. It can be generally used when the injected dependencies are required and must not change after the object is initiated. In this article I am not going to discuss why and when you should use constructor dependency injection. I assume you use this idiom in your code or you consider using it. If you are interested in learning more, see the resources section in the bottom of this article. Contructor injection with no-proxied beans Having the following collaborator: package pl.codeleak.services;import org.springframework.stereotype.Service;@Service public class Collaborator { public String collaborate() { return "Collaborating"; } } we can easily inject it via constructor: package pl.codeleak.services;import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service;@Service public class SomeService {private final Collaborator collaborator;@Autowired public SomeService(Collaborator collaborator) { this.collaborator = collaborator; }public String businessMethod() { return collaborator.collaborate(); }} You may notice that both Collaborator and the Service have no interfaces, but they are no proxy candidates. So this code will work perfectly fine with Spring 3 and Spring 4: package pl.codeleak.services;import org.junit.Test; import org.junit.runner.RunWith; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.test.context.ContextConfiguration; import org.springframework.test.context.junit4.SpringJUnit4ClassRunner; import pl.codeleak.Configuration;import static org.assertj.core.api.Assertions.assertThat;@RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration(classes = Configuration.class) public class WiringTests {@Autowired private SomeService someService;@Autowired private Collaborator collaborator;@Test public void hasValidDependencies() { assertThat(someService) .isNotNull() .isExactlyInstanceOf(SomeService.class);assertThat(collaborator) .isNotNull() .isExactlyInstanceOf(Collaborator.class);assertThat(someService.businessMethod()) .isEqualTo("Collaborating"); } } Contructor injection with proxied beans In many cases your beans need to be decorated with an AOP proxy at runtime, e.g when you want to use declarative transactions with @Transactional annotation. To visualize this, I created an aspect that will advice all methods in SomeService. 
With the below aspect defined, SomeService becomes a candidate for proxying: package pl.codeleak.aspects;import org.aspectj.lang.annotation.Aspect; import org.aspectj.lang.annotation.Before; import org.springframework.stereotype.Component;@Aspect @Component public class DummyAspect {@Before("within(pl.codeleak.services.SomeService)") public void before() { // do nothing }} When I re-run the test with Spring 3.2.9, I get the following exception: Could not generate CGLIB subclass of class [class pl.codeleak.services.SomeService]: Common causes of this problem include using a final class or a non-visible class; nested exception is java.lang.IllegalArgumentException: Superclass has no null constructors but no arguments were given This can be simply fixed by providing a default, no argument, constructor to SomeService, but this is not what I want to do – as I would also need to make dependencies non-final. Another solution would be to provide an interface for SomeService. But again, there are many situations when you don’t need to create interfaces. Updating to Spring 4 solves the problem immediately. As documentation states: CGLIB-based proxy classes no longer require a default constructor. Support is provided via the objenesis library which is repackaged inline and distributed as part of the Spring Framework. With this strategy, no constructor at all is being invoked for proxy instances anymore. The test I created will fail, but it visualizes that CGLIB proxy was created for SomeService: java.lang.AssertionError: Expecting: <pl.codeleak.services.SomeService@6a84a97d> to be exactly an instance of: <pl.codeleak.services.SomeService> but was an instance of: <pl.codeleak.services.SomeService$$EnhancerBySpringCGLIB$$55c3343b> After removing the first assertion from the test, it will run just perfectly fine. ResourcesIn case you need to read more about constructor dependency injection, have a look at this great article by Petri Kainulainen: http://www.petrikainulainen.net/software-development/design/why-i-changed-my-mind-about-field-injection. Core Container Improvements in Spring 4: http://docs.spring.io/spring/docs/current/spring-framework-reference/html/new-in-4.0.html#_core_container_improvements You may also be interested in reading my other article about Spring: Spring 4: @DateTimeFormat with Java 8 Date-Time API and Better error messages with Bean Validation 1.1 in Spring MVC applicationReference: Spring 4: CGLIB-based proxy classes with no default constructor from our JCG partner Rafal Borowiec at the Codeleak.pl blog....

Do You Encourage People to Bring You Problems?

One of the familiar tensions in management is how you encourage or discourage people from bringing you problems. One of my clients had a favorite saying, “Don’t bring me problems. Bring me solutions.” I could see the problems that saying caused in the organization. He prevented people from bringing him problems until the problems were enormous. He didn’t realize that his belief that he was helping people solve their own problems was the cause of these huge problems. How could I help?   I’d only been a consultant for a couple of years. I’d been a manager for several years, and a program manager and project manager for several years before that. I could see the system. This senior manager wasn’t really my client. I was consulting to a project manager, who reported to him, but not him. His belief system was the root cause of many of the problems. What could I do? I tried coaching my project manager, about what to say to his boss. That had some effect, but didn’t work well. My client, the project manager, was so dejected going into the conversation that the conversation was dead before it started. I needed to talk to the manager myself. I thought about this first. I figured I would only get one shot before I was out on my ear. I wasn’t worried about finding more consulting – but I really wanted to help this client. Everyone was suffering. I asked for a one-on-one with the senior manager. I explained that I wanted to discuss the project, and that the project manager was fine with this meeting. I had 30 minutes. I knew that Charlie, this senior manager cared about these things: how fast we could release so we could move to the next project and what the customers would see (customer perception). He thought those two things would affect sales and customer retention. Charlie had put tremendous pressure on the project to cut corners to release faster. But that would change the customer perception of what people saw and how they would use the product. I wanted to change his mind and provide him other options. “Hey Charlie, this time still good?” “Yup, come on in. You’re our whiz-bang consultant, right?” “Yes, you could call me that. My job is to help people think things through and see alternatives. That way they can solve problems on the next project without me.” “Well, I like that. You’re kind of expensive.” “Yes, I am. But I’m very good. That’s why you pay me. So, let’s talk about how I’m helping people solve problems.” “I help people solve problems. I always tell them, ‘Don’t bring me problems. Bring me solutions.’ It works every time.” He actually laughed when he said this. I waited until he was done laughing. I didn’t smile. “You’re not smiling.” He started to look puzzled. “Well, in my experience, when you say things like that, people don’t bring you small problems. They wait until they have no hope of solving the problem at all. Then, they have such a big problem, no one can solve the problem. Have you seen that?” He narrowed his eyes. “Let’s talk about what you want for this project. You want a great release in the next eight weeks, right? You want customers who will be reference accounts, right? I can help you with that.” Now he looked really suspicious. “Okay, how are you going to pull off this miracle? John, the project manager was in here the other day, crying about how this project was a disaster.” “Well, the project is in trouble. John and I have been talking about this. We have some plans. We do need more people. We need you to make some decisions. 
We have some specific actions only you can take. John has specific actions only he can take. “Charlie, John needs your support. You need to say things like, “I agree that cross-functional teams work. I agree that people need to work on just one thing at a time until they are complete. I agree that support work is separate from project work, and that we won’t ask the teams to do support work until they are done with this project.” Can you do that? Those are specific things that John needs from you. But even those won’t get the project done in time. “Well, what will get the project done in time?” He practically growled at me. “We need consider alternatives to the way the project has been working. I’ve suggested alternatives to the teams. They’re afraid of you right now, because they don’t know which solution you will accept.” “AFRAID? THEY’RE AFRAID OF ME?” He was screaming by this time. “Charlie, do you realize you’re yelling at me?” I did not tell him to calm down. I knew better than that. I gave him the data. “Oh, sorry. No. Maybe that’s why people are afraid of me.” I grinned at him. “You’re not afraid of me.” “Not a chance. You and I are too much alike.” I kept smiling. “Would you like to hear some options? I like to use the Rule of Three to generate alternatives. Is it time to bring John in?” We discussed the options with John. Remember, this is before agile. We discussed timeboxing, short milestones with criteria, inch-pebbles, yellow-sticky scheduling, and decided to go with what is now a design-to-schedule lifecycle for the rest of the project. We also decided to move some people over from support to help with testing for a few weeks. We didn’t release in eight weeks. It took closer to twelve weeks. But the project was a lot better after that conversation. And, after I helped the project, I gained Charlie as a coaching client, which was tons of fun. Many managers have rules about their problem solving and how to help or not help their staff. “Don’t bring me a problem. Bring me a solution” is not helpful. That is the topic of this month’s management myth: Myth 31: I Don’t Have to Make the Difficult Choices. When you say, “Don’t bring me a problem. Bring me a solution” you say, “I’m not going to make the hard choices. You are.” But you’re the manager. You get paid to make the difficult choices. Telling people the answer isn’t always right. You might have to coach people. But not making decisions isn’t right either. Exploring options might be the right thing. You have to do what is right for your situation. Go read Myth 31: I Don’t Have to Make the Difficult Choices.Reference: Do You Encourage People to Bring You Problems? from our JCG partner Johanna Rothman at the Managing Product Development blog....

Locking and Logging

Plumbr has been known as the tool to tackle memory leaks. As little as two months ago we released GC optimization features. But we have not been sitting idle after this – for months we have been working on lock contention detection.

From the test runs we have discovered many awkward concurrency issues in hundreds of different applications. Many of those issues are unique to the application at hand, but one particular type of issue stands out. What we found out was that almost every Java application out there is using either Log4j or Logback. As a matter of fact, from the data we had available, it appears that more than 90% of the applications are using either of those frameworks for logging. But this is not the interesting part. Interesting is the fact that about a third of those applications are facing rather significant lock wait times during logging calls. As it stands, more than 10% of the Java applications seem to halt for more than 5,000 milliseconds every once in a while during the innocent-looking log.debug() call.

Why so? The default choice of an appender for any server environment is some sort of file appender, such as RollingFileAppender for example. What is important is the fact that these appenders are synchronized. This is an easy way to guarantee that the sequence of log entries from different threads is preserved.

To demonstrate the side effects of this approach, we set up a simple JMH test (LogBenchmark) which does nothing besides calling log.debug(). This benchmark was run on a quad-core MacBook Pro with 1, 2 and 50 threads. 50 threads was chosen to simulate a typical setup for a servlet application with 50 HTTP worker threads.

@State(Scope.Benchmark)
public class LogBenchmark {

    static final Logger log = LoggerFactory.getLogger(LogBenchmark.class);

    AtomicLong counter;

    @Benchmark
    public void testMethod() {
        log.debug(String.valueOf(counter.incrementAndGet()));
    }

    @Setup
    public void setup() {
        counter = new AtomicLong(0);
    }

    @TearDown
    public void printState() {
        System.out.println("Expected number of logging lines in debug.log: " + counter.get());
    }
}

From the test results we see a dramatic decrease in throughput: 278,898 ops/s -> 84,630 ops/s -> 73,789 ops/s. Going from 1 to 2 threads, the throughput of the system decreases 3.3x.

So how can you avoid this kind of locking issue? The solution is simple – for more than a decade there has been an appender called AsyncAppender present in logging frameworks. The idea behind this appender is to store the log message in a queue and return the flow back to the application. In such a way the framework can deal with storing the log message asynchronously in a separate thread.

Let’s see how AsyncAppender can cope with a multithreaded application. We set up a similar simple benchmark but configure the logger for that class to use AsyncAppender. Now, when we run the benchmark with the same 1, 2 and 50 threads we get stunning results: 4,941,874 ops/s -> 6,608,732 ops/s -> 5,517,848 ops/s. The improvement in throughput is so good that it raises suspicion that there’s something fishy going on.

Let’s look at the documentation of the AsyncAppender. It says AsyncAppender is by default a lossy logger, meaning that when the logger queue gets full, the appender starts dropping trace, debug and info level messages, so that warnings and errors would surely get written. This behavior is configured using 2 parameters – discardingThreshold and queueSize. The first specifies how full the queue should be before messages start being dropped, the second specifies how big the queue is. The default queue size is 256; the discarding threshold can, for example, be configured to 0, disabling discarding altogether, so that the appender becomes blocking when the queue gets full.
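In the benchmark the appender is declared in the Logback configuration file, which is not reproduced in the post. As an illustration only, here is a rough sketch of how an AsyncAppender wrapping a file appender could be wired up programmatically with Logback; the file name, pattern and parameter values are assumptions, not taken from the original benchmark.

import ch.qos.logback.classic.AsyncAppender;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.encoder.PatternLayoutEncoder;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.FileAppender;
import org.slf4j.LoggerFactory;

public class AsyncLoggingSetup {

    public static void main(String[] args) {
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

        // The synchronous file appender that actually writes the entries.
        PatternLayoutEncoder encoder = new PatternLayoutEncoder();
        encoder.setContext(context);
        encoder.setPattern("%d %level %logger - %msg%n");
        encoder.start();

        FileAppender<ILoggingEvent> fileAppender = new FileAppender<>();
        fileAppender.setContext(context);
        fileAppender.setFile("debug.log");
        fileAppender.setEncoder(encoder);
        fileAppender.start();

        // The asynchronous wrapper: callers only pay for an enqueue, while a
        // background thread drains the queue into the file appender.
        AsyncAppender asyncAppender = new AsyncAppender();
        asyncAppender.setContext(context);
        asyncAppender.setQueueSize(256);          // the default size discussed above
        asyncAppender.setDiscardingThreshold(0);  // 0 disables discarding, the appender blocks instead
        asyncAppender.addAppender(fileAppender);
        asyncAppender.start();

        Logger root = context.getLogger(Logger.ROOT_LOGGER_NAME);
        root.addAppender(asyncAppender);

        LoggerFactory.getLogger(AsyncLoggingSetup.class).debug("logged asynchronously");
    }
}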
To better understand the results, let’s count the expected number of messages in the log file (as the number of benchmark invocations by JMH is non-deterministic) and then compare how many were actually written, to see how many messages are actually discarded to get such brilliant throughput. We ran the benchmark with 50 threads, varied the queue size and turned discarding on and off. The results are as follows:

Queue size   | Discard: ops/s | Expected msg | Actual msg | Lost msg | No discard: ops/s
256          | 4,180,312      | 184,248,790  | 1,925,829  | 98.95%   | 118,340
4,096        | 4,104,997      | 182,404,138  | 694,902    | 99.62%   | 113,534
65,536       | 3,558,543      | 157,385,651  | 1,762,404  | 98.88%   | 137,583
1,048,576    | 3,213,489      | 141,409,403  | 1,560,612  | 98.90%   | 117,820
2,000,000    | 3,306,476      | 141,454,871  | 1,527,133  | 98.92%   | 108,603

What can we conclude from these numbers? There are no free lunches and no magic. Either we discard 98% of the log messages to get such a massive throughput gain, or, when the queue fills, we start blocking and fall back to performance comparable to the synchronous appender. Interestingly, the queue size doesn’t affect the outcome much. In case you can sacrifice the debug logs, using AsyncAppender does make sense.

Reference: Locking and Logging from our JCG partner Vladimir Sor at the Plumbr Blog blog....

Java EE7 and Maven project for newbies – part 7

Resuming from the previous parts Part #1, Part #2, Part #3, Part #4, Part #5 , Part #6 In the previous post (num 6) we discovered how we can unit test our JPA2 domain model, using Arquillian and Wildfly 8.1 In the post we made a simple configuration decision, we used the internal H2 database that is bundled with Wildfly 8.1 and the already configured Datasource (called ExampleDS). But what about a real DBMS? In this post we are going to extend a bit the previous work, use the same principles and    test towards a running PostgreSQL in our localhost use some of the really nice features the ShrinkWrap APi of Arquillian Offers.Pre-requisites You need to install locally a PostgreSQL RBDMS, my example is based on a server running on localhost and the Database name is papodb. Adding some more dependencies Eventually we will need to add some more dependencies in our sample-parent (pom). Some of the are related to Arquillian and specifically the ShrinkWrap Resolvers features (more on this later). So our we need to add to the parent pom. xml the following: <shrinkwrap.bom-version>2.1.1</shrinkwrap.bom-version> <!-- jbdc drivers --> <postgreslq.version>9.1-901-1.jdbc4</postgreslq.version> ... <!-- shrinkwrap BOM--> <dependency> <groupId>org.jboss.shrinkwrap.resolver</groupId> <artifactId>shrinkwrap-resolver-bom</artifactId> <version>${shrinkwrap.bom-version}</version> <type>pom</type> <scope>import</scope> </dependency> <!-- shrinkwrap dependency chain--> <dependency> <groupId>org.jboss.shrinkwrap.resolver</groupId> <artifactId>shrinkwrap-resolver-depchain</artifactId> <version>${shrinkwrap.bom-version}</version> <type>pom</type> </dependency> <!-- arquillian itself--> <dependency> <groupId>org.jboss.arquillian</groupId> <artifactId>arquillian-bom</artifactId> <version>${arquillian-version}</version> <scope>import</scope> <type>pom</type> </dependency> <!-- the JDBC driver for postgresql --> <dependency> <groupId>postgresql</groupId> <artifactId>postgresql</artifactId> <version>${postgreslq.version}</version> </dependency> Some notes on the above change: In order to avoid any potential conflicts between dependencies, make sure to define the ShrinkWrap BOM on top of Arquillian BOMNow on the sample-services (pom.xml) , the project that hosts are simple tests, we need to reference some of these dependencies. <dependency> <groupId>org.jboss.shrinkwrap.resolver</groupId> <artifactId>shrinkwrap-resolver-depchain</artifactId> <scope>test</scope> <type>pom</type> </dependency> <dependency> <groupId>postgresql</groupId> <artifactId>postgresql</artifactId> </dependency> Restructuring our test code In the previous example, our test was simple, we we only used a certain test configuration. That resulted to single test-persistence.xml file and no web.xml file, since we were packaging our test application as a jar. Now we will upgrade our testing archive to a war. War packaging in JavaEE7 has become a first level citizen when it comes to bundling and deploying an enterprise application. The main difference with the previous example is that we would like to keep both the previous settings, meaning test using the internal H2 on wildfly, and the new setting testing towards a real RDBMS server. So we need to maintain 2 set of configuration files, and making use of the Maven Profiles feature, package them accordingly depending our mode. If you are new to Maven make sure to look on the concepts of profiles. 
Adding separate configurations per profile
So our test resources (watch out, these are under src/test/resources) are now split into two folders, resources-h2 and resources-postgre, one per profile (the folder layout is shown as an image in the original post). There are differences in both cases. The test-persistence.xml of h2 is pointing to the ExampleDS datasource, whereas the one for postgre is pointing to a new datasource that we have defined in the web.xml! Please have a look at the actual code, from the git link down below.

This is how we define a datasource in web.xml (the file is shown as an image in the original post; see the linked repository for the full contents).

Notes on the above:
- Note the standard naming of the JNDI name: java:jboss/datasources/datasourceName.
- The application server, once it reads the contents of the web.xml file, will automatically deploy and configure a new datasource.

This is our persistence.xml (also shown as an image in the original post).

Notes on the above:
- Make sure the 2 JNDI entries are the same both in the datasource definition and in the persistence.xml.
- Of course the Hibernate dialect used for PostgreSQL is different.
- The line that is highlighted is a special setting that is required for Wildfly 8.1 in cases where you want to deploy the datasource, the JDBC driver and the code in one go. It hints the application server to initialize and configure the datasource first and only then initialize the EntityManager. In cases where you have already deployed/configured the datasource this setting is not needed.

Define the profiles in our pom
In the sample-services pom.xml we add the following section. This is our profile definition.

<profiles>
  <profile>
    <id>h2</id>
    <build>
      <testResources>
        <testResource>
          <directory>/resources-h2</directory>
          <includes>
            <include>**/*</include>
          </includes>
        </testResource>
      </testResources>
    </build>
  </profile>
  <profile>
    <id>postgre</id>
    <build>
      <testResources>
        <testResource>
          <directory>/resources-postgre</directory>
          <includes>
            <include>**/*</include>
          </includes>
        </testResource>
      </testResources>
    </build>
  </profile>
</profiles>

Depending on the profile activated, we instruct Maven to include and work with the xml files under a specific subfolder. So if we apply the following command:

mvn clean test -Ph2

then Maven will include the persistence.xml and web.xml under the resources-h2 folder and our tests will make use of the internal H2 DB. If we issue though:

mvn clean test -Ppostgre

then our test web archive will be packaged with a datasource definition specific to our local PostgreSQL server.

Writing a simple test
Eventually our new JUnit test is not very different from the previous one. (The original post shows it as an annotated screenshot; the key points are noted below.)

Some notes on the code:
- The JUnit test and basic annotations are the same as in the previous post.
- The init() method is again the same; we just create and persist a new SimpleUser entity.
- The first major difference is the use of the ShrinkWrap API, which makes use of our test dependencies in our pom, so we can locate the JDBC driver as a jar. Once located, ShrinkWrap makes sure to package it along with the rest of the resources and code in our test.war.
- Packaging only the JDBC driver though is NOT enough; in order for this to work, we need a datasource to be present (configured) in the server. We would like this to be automatic, meaning we don’t want to preconfigure anything on our test Wildfly server. We make use of the feature to define a datasource in web.xml (open it up in the code). The application server, once it scans the web.xml, will pick up the entry and will configure a datasource under the java:jboss/datasources/testpostgre name.

So we have bundled the driver and the datasource definition, and we have a persistence.xml pointing to the correct datasource.
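Since the test class itself appears only as a screenshot in the original post, the following is a rough sketch (the class name, package name and archive name are assumptions) of how the @Deployment method could use the ShrinkWrap Maven resolver to locate the PostgreSQL driver declared in the pom and bundle it together with the profile-specific web.xml and persistence.xml:

import java.io.File;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.jboss.shrinkwrap.resolver.api.maven.Maven;

public class SimpleUserPersistenceTestDeployment {

    @Deployment
    public static WebArchive createDeployment() {
        // Locate the JDBC driver declared in our pom via the ShrinkWrap Maven resolver.
        File[] jdbcDriver = Maven.resolver()
                .loadPomFromFile("pom.xml")
                .resolve("postgresql:postgresql")
                .withTransitivity()
                .asFile();

        return ShrinkWrap.create(WebArchive.class, "test.war")
                // the entity classes under test (package name is illustrative)
                .addPackages(true, "com.example.sample.domain")
                // profile-specific configuration copied from the active resources-* folder
                .addAsResource("test-persistence.xml", "META-INF/persistence.xml")
                .setWebXML("web.xml") // contains the datasource definition for the postgre profile
                // bundle the driver so Wildfly can deploy the datasource defined in web.xml
                .addAsLibraries(jdbcDriver);
    }
}

Bundling the driver and the web.xml datasource definition inside the same archive is what allows the test to run against a Wildfly instance with no pre-configured datasource.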
We are ready to test. Our test method is similar to the previous one. We have modified the resources for the H2 profile a bit so that we package the same WAR structure every time. That means that if we run the test using the -Ph2 profile, the included web.xml is empty, because we don't need to define a datasource there; the datasource is already deployed by WildFly. The persistence.xml, though, is different, because in one case the defined dialect is specific to H2 and in the other it is specific to PostgreSQL. You can follow the same principle and add a new resource subfolder, configure a datasource for another RDBMS (e.g. MySQL), and add the appropriate code to fetch the driver and package it along. You can get the code for this post on this Bitbucket repo-tag. Resources: ShrinkWrap Resolver API page (lots of nice examples for this powerful API), Defining Datasources for WildFly 8.1. Reference: Java EE7 and Maven project for newbies – part 7 from our JCG partner Paris Apostolopoulos at the Papo's log blog....

Behavior-Driven RESTful APIs

In the RESTBucks example, the authors present a useful state diagram that describes the actions a client can perform against the service. Where does such an application state diagram come from? Well, it's derived from the requirements, of course. Since I like to specify requirements using examples, let's see how we can derive an application state diagram from BDD-style requirements.
Example: RESTBucks state diagram Here are the three scenarios for the Order a Drink story: Scenario: Order a drink. Given the RESTBucks service When I create an order for a large, semi milk latte for takeaway Then the order is created When I pay the order using credit card xxx1234 Then I receive a receipt And the order is paid When I wait until the order is ready And I take the order Then the order is completed. Scenario: Change an order. Given the RESTBucks service When I create an order for a large, semi milk latte for takeaway Then the order is created And the size is large When I change the order to a small size Then the order is created And the size is small. Scenario: Cancel an order. Given the RESTBucks service When I create an order for a large, semi milk latte for takeaway Then the order is created When I cancel the order Then the order is canceled. Let's look at this in more detail, starting with the happy path scenario. Given the RESTBucks service When I create an order for a large, semi milk latte for takeaway The first line tells me there is a REST service, at some given billboard URL. The second line tells me I can use the POST method on that URI to create an Order resource with the given properties. Then the order is created This tells me the POST returns 201 with the location of the created Order resource. When I pay the order using credit card xxx1234 This tells me there is a pay action (link relation). Then I receive a receipt This tells me the response of the pay action contains the representation of a Receipt resource. And the order is paid This tells me there is a link from the Receipt resource back to the Order resource. It also tells me the Order is now in paid status. When I wait until the order is ready This tells me that I can refresh the Order using GET until some other process changes its state to ready. And I take the order This tells me there is a take action (link relation). Then the order is completed This tells me that the Order is now in completed state. Analyzing the other two scenarios in similar fashion gives us a state diagram that is very similar to the original in the RESTBucks example. The only difference is that the diagram here contains an additional action to navigate from the Receipt to the Order. This navigation is also described in the book, but not shown in the book's diagram.
Using BDD techniques for developing RESTful APIs Using BDD scenarios it's quite easy to discover the application state diagram. This shouldn't come as a surprise, since the Given/When/Then syntax of BDD scenarios is just another way of describing states and state transitions. From the application state diagram it's only a small step to the complete resource model. When the resource model is implemented, you can re-use the BDD scenarios to automatically verify that the implementation matches the requirements (one way to automate such a step is sketched below). So, all in all, BDD techniques can help us a lot when developing RESTful APIs. Reference: Behavior-Driven RESTful APIs from our JCG partner Remon Sinnema at the Secure Software Development blog....
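To illustrate that last point, here is a minimal sketch of how a couple of these steps could be automated with Cucumber-JVM and the JAX-RS 2.0 client. The base URL, the JSON payload, and the step regular expressions are assumptions made for the sake of the example, not part of the original article; the point is simply that the same Given/When/Then lines can drive an executable check that the POST really returns 201 with the Location of the created Order.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;

public class OrderSteps {

    // Hypothetical billboard URL of the RESTBucks service
    private static final String BILLBOARD = "http://localhost:8080/restbucks/orders";

    private final Client client = ClientBuilder.newClient();
    private Response response;

    @Given("^the RESTBucks service$")
    public void theRestbucksService() {
        // Nothing to set up in this sketch; the service is assumed to be running.
    }

    @When("^I create an order for a (.+), (.+) milk (.+) for (.+)$")
    public void iCreateAnOrder(String size, String milk, String drink, String consumption) {
        // Placeholder JSON payload; the real representation is whatever the API defines
        String order = String.format(
            "{\"size\":\"%s\",\"milk\":\"%s\",\"drink\":\"%s\",\"consumption\":\"%s\"}",
            size, milk, drink, consumption);
        response = client.target(BILLBOARD)
                .request()
                .post(Entity.entity(order, MediaType.APPLICATION_JSON));
    }

    @Then("^the order is created$")
    public void theOrderIsCreated() {
        // "Created" means HTTP 201 plus the Location of the new Order resource
        assertEquals(201, response.getStatus());
        assertNotNull(response.getLocation());
    }
}

The remaining steps (pay, take, cancel) would follow the link relations returned in the Order representation rather than hard-coded URLs, which is exactly what the state diagram captures.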

Hibernate and UUID identifiers

Introduction In my previous post I talked about UUID surrogate keys and the use cases where they are more appropriate than the more common auto-incrementing identifiers.
A UUID database type There are several ways to represent a 128-bit UUID, and whenever in doubt I like to resort to Stack Exchange for expert advice. Because table identifiers are usually indexed, the more compact the database type, the less space the index will require. From the most efficient to the least, here are our options: some databases (PostgreSQL, SQL Server) offer a dedicated UUID storage type; otherwise we can store the bits as a byte array (e.g. RAW(16) in Oracle or the standard BINARY(16) type); alternatively we can use two bigint (64-bit) columns, but a composite identifier is less efficient than a single-column one; or we can store the hex value in a CHAR(36) column (e.g. 32 hex characters and 4 dashes), but this takes the most space, hence it's the least efficient alternative. Hibernate offers many identifier strategies to choose from, and for UUID identifiers we have three options: the assigned generator accompanied by application-side UUID generation, the hexadecimal "uuid" string generator, and the more flexible "uuid2" generator, allowing us to use a java.lang.UUID, a 16-byte array or a hexadecimal String value.
The assigned generator The assigned generator allows the application logic to control the entity identifier generation process. By simply omitting the identifier generator definition, Hibernate falls back to the assigned identifier. This example uses a BINARY(16) column type, since the target database is HSQLDB. @Entity(name = "assignedIdentifier") public static class AssignedIdentifier { @Id @Column(columnDefinition = "BINARY(16)") private UUID uuid; public AssignedIdentifier() { } public AssignedIdentifier(UUID uuid) { this.uuid = uuid; } } Persisting an entity: session.persist(new AssignedIdentifier(UUID.randomUUID())); session.flush(); generates exactly one INSERT statement: Query:{[insert into assignedIdentifier (uuid) values (?)][[B@76b0f8c3]} Let's see what happens when issuing a merge instead: session.merge(new AssignedIdentifier(UUID.randomUUID())); session.flush(); We get both a SELECT and an INSERT this time: Query:{[select assignedid0_.uuid as uuid1_0_0_ from assignedIdentifier assignedid0_ where assignedid0_.uuid=?][[B@23e9436c]} Query:{[insert into assignedIdentifier (uuid) values (?)][[B@2b37d486]} The persist method takes a transient entity and attaches it to the current Hibernate session. If there is an already attached entity or if the current entity is detached, we'll get an exception. The merge operation copies the current object state into the existing persisted entity (if any). This operation works for both transient and detached entities, but for transient entities persist is much more efficient than the merge operation. For assigned identifiers, a merge will always require a SELECT, since Hibernate cannot know whether there is already a persisted entity having the same identifier. For other identifier generators, Hibernate looks for a null identifier to figure out if the entity is in the transient state.
That's why the Spring Data SimpleJpaRepository#save(S entity) method is not the best choice for entities using an assigned identifier: @Transactional public <S extends T> S save(S entity) { if (entityInformation.isNew(entity)) { em.persist(entity); return entity; } else { return em.merge(entity); } } For assigned identifiers, this method will always pick merge instead of persist, hence you will get both a SELECT and an INSERT for every newly inserted entity (one possible workaround is sketched at the end of this post).
The UUID generators This time we won't assign the identifier ourselves but will have Hibernate generate it on our behalf. When a null identifier is encountered, Hibernate assumes a transient entity, for which it generates a new identifier value. This time, the merge operation won't require a SELECT query prior to inserting a transient entity.
The UUIDHexGenerator The UUID hex generator is the oldest UUID identifier generator and it's registered under the "uuid" type. It can generate a 32-character hexadecimal UUID string value (it can also use a separator) having the following pattern: 8{sep}8{sep}4{sep}8{sep}4. This generator is not IETF RFC 4122 compliant, which uses the 8-4-4-4-12 digit representation. @Entity(name = "uuidIdentifier") public static class UUIDIdentifier { @GeneratedValue(generator = "uuid") @GenericGenerator(name = "uuid", strategy = "uuid") @Column(columnDefinition = "CHAR(32)") @Id private String uuidHex; } Persisting or merging a transient entity: session.persist(new UUIDIdentifier()); session.flush(); session.merge(new UUIDIdentifier()); session.flush(); generates one INSERT statement per operation: Query:{[insert into uuidIdentifier (uuidHex) values (?)][2c929c6646f02fda0146f02fdbfa0000]} Query:{[insert into uuidIdentifier (uuidHex) values (?)][2c929c6646f02fda0146f02fdbfc0001]} You can check out the String parameter value sent to the SQL INSERT queries.
The UUIDGenerator The newer UUID generator is IETF RFC 4122 compliant (variant 2) and it offers pluggable generation strategies. It's registered under the "uuid2" type and it offers a broader type range to choose from: a java.lang.UUID, a 16-byte array or a hexadecimal String value. @Entity(name = "uuid2Identifier") public static class UUID2Identifier { @GeneratedValue(generator = "uuid2") @GenericGenerator(name = "uuid2", strategy = "uuid2") @Column(columnDefinition = "BINARY(16)") @Id private UUID uuid; } Persisting or merging a transient entity: session.persist(new UUID2Identifier()); session.flush(); session.merge(new UUID2Identifier()); session.flush(); generates one INSERT statement per operation: Query:{[insert into uuid2Identifier (uuid) values (?)][[B@68240bb]} Query:{[insert into uuid2Identifier (uuid) values (?)][[B@577c3bfa]} These SQL INSERT queries use a byte array, as we configured in the @Id column definition. Code available on GitHub. Reference: Hibernate and UUID identifiers from our JCG partner Vlad Mihalcea at the Vlad Mihalcea's Blog blog....
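As a side note on the SimpleJpaRepository#save() issue mentioned above: one common workaround, not covered in this article and offered here only as a sketch under that assumption, is to let the entity report whether it is new by implementing Spring Data's Persistable, so that save() can call persist() instead of merge() for freshly created instances with pre-assigned UUIDs.

import java.util.UUID;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.PostLoad;
import javax.persistence.PostPersist;
import javax.persistence.Transient;
import org.springframework.data.domain.Persistable;

// Sketch only: an assigned UUID identifier combined with Spring Data's Persistable,
// so that SimpleJpaRepository#save() picks persist() for new instances.
@Entity
public class AssignedUuidEntity implements Persistable<UUID> {

    @Id
    @Column(columnDefinition = "BINARY(16)")
    private UUID uuid = UUID.randomUUID();

    @Transient
    private boolean isNew = true;

    @Override
    public UUID getId() {
        return uuid;
    }

    @Override
    public boolean isNew() {
        // Reports "new" until the instance has been persisted or loaded
        return isNew;
    }

    @PostPersist
    @PostLoad
    void markNotNew() {
        this.isNew = false;
    }
}

Another option is simply to call EntityManager#persist() (or Session#persist(), as in the examples above) directly for entities you know to be new.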