

Amdahl’s law illustrated

This article explains Amdahl’s law in simple terms. Using a case study, we demonstrate how throughput and latency change as you vary the number of threads performing the tasks, and we help you draw the right conclusions in the context of your own performance tuning task at hand. First of all, let’s refresh our memory on the definitions.

Throughput – the number of computing tasks completed per time unit. Example: 1,000 credit card payments in a minute.
Latency – the delay between invoking an operation and getting the response. Example: the maximum time taken to process a credit card transaction is 25ms.

The case we built simulates a typical Java EE application doing some computation in the JVM and calling an external data source – just like many of the Java EE apps out there, where a user initiates an HttpRequest that is processed in the application server, which in turn calls either a relational database or a web service. The simulation achieves this by burning through CPU cycles calculating large primes and then forcing the thread to sleep for a while. Even though the sleeping part sounds weird at first glance, this is similar to how threads are handled when waiting for an external resource: sleeping/waiting threads are removed from the list of threads waiting to be scheduled, until the response arrives. The application we built also contains a thread pool, just as most modern application servers do. We are going to change the size of the pool and bombard the application with a lot of simulated “HttpRequests” to find out how latency and throughput look under load. But before doing so, can we make our best guess at what the results will look like? The tests were run on a mid-2010 MacBook Pro with a 2.66GHz Intel Core i7 – two cores with hyperthreading enabled, so four virtual cores for our threads to play with. At any given moment each of our four cores is making progress in one thread.
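To make the setup concrete, here is a minimal sketch of what such a simulated task could look like (the class name and helper are mine; the article does not show the original benchmark code):

```java
import java.math.BigInteger;
import java.util.Random;

/** Simulates one "HttpRequest": ~50 ms of CPU work plus a 1,000 ms blocking wait. */
public class SimulatedRequest implements Runnable {
    private static final Random RANDOM = new Random();

    @Override
    public void run() {
        burnCpu(50);            // simulate in-JVM computation
        try {
            Thread.sleep(1000); // simulate waiting on a database or web service
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    /** Burns CPU cycles for roughly the given number of milliseconds by probing large primes. */
    private static void burnCpu(final long millis) {
        final long deadline = System.nanoTime() + millis * 1_000_000L;
        while (System.nanoTime() < deadline) {
            new BigInteger(512, RANDOM).nextProbablePrime();
        }
    }
}
```

Submitting a few thousand of these tasks to a fixed-size thread pool reproduces the shape of the experiment described above.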
In a single-threaded environment, the code used in our tests contained a snippet burning CPU cycles for ~50ms followed by a 1,000ms sleep – a 20:1 ratio between executing code in the JVM and being blocked on the external call. Considering that the total request time is 1,050ms, out of which 1,000ms is spent waiting, we can calculate the optimal number of tasks for each core to process: 1,050 / (1,050 - 1,000) = 21 tasks, of which one is currently being processed and 20 are waiting for the external resource. Considering we have four cores in total, the optimal number of threads should be close to 84. Now, after running the tests with 4,000 tasks to be executed and varying the number of threads in the pool between 1 and 3,200, we got the following results: we have a beautiful case of Amdahl’s law in practice – adding threads has no significant effect on performance beyond 60-100 threads. But it does not exactly confirm our initial guess that the best results would come at 84 threads. With 84 threads we were able to complete approximately 32 tasks per second, yet pools sized between 200 and 1,600 threads were all able to execute approximately 39 tasks per second. Why so? Apparently modern processors are a lot smarter than our naive calculation assumes, and the schedulers are not just applying simple round-robin algorithms to select threads. If any of our readers can explain the surprise, we are more than eager to listen. But would this indicate that for this particular case we can throw in a lot more threads than our initial 84 suggested? Not so fast. Let’s look at another metric – latency. We see two significant bumps in latency: median latency increases from 1,100ms to 1,500ms when we go from 16 threads to 32, and there is another, more severe increase when we go from 100 threads to 200 – from 3,200ms to 5,000ms – and it keeps growing quickly, indicating that the context switches are starting to take their toll.
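The arithmetic above generalizes to the familiar sizing rule threads ≈ cores × (total request time / compute time); a tiny sketch (class and method names are mine, not from the article):

```java
/** Naive thread pool sizing: threads = cores * (total request time / compute time). */
public class PoolSizing {
    static int optimalThreads(final int cores, final long computeMs, final long waitMs) {
        final long totalMs = computeMs + waitMs;       // e.g. 50 + 1000 = 1050 ms per request
        return (int) (cores * (totalMs / computeMs));  // each core can juggle totalMs/computeMs tasks
    }
}
```

With the article’s numbers (4 cores, 50ms of CPU work, 1,000ms of waiting) this yields the 84 threads estimated above – and, as the measurements show, such a naive estimate should be treated only as a starting point.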
Now, in real life this means that it might not be a good decision to throw 800 threads at this case. Even though the throughput is nice and we are processing approximately 39 requests per second, the average user has to wait 18 seconds for a response to their request. So what to conclude from here? Without actual requirements for throughput and latency we cannot push further. But in real life you have requirements in place, in the form of “no request can take more than 3,000ms and the average request must be completed in less than 2,000ms” and “we must be able to process 30 requests per second”. Using those sample criteria we see that our sweet spot is anywhere between 64 and 100 threads. Another important remark: when increasing the thread count in the pool to 3,200 (actually, 2,400 was already enough) we exhausted the available resources and faced the good ol’ java.lang.OutOfMemoryError: unable to create new native thread message, which is the JVM’s way of saying that the OS is running out of resources for your application. Depending on your operating system you can raise those limits (ulimit on Linux, for example), but this goes beyond the scope of this article.   Reference: Amdahl’s law illustrated from our JCG partner Vladimir Sor at the Plumbr blog. ...

Going REST: embedding Tomcat with Spring and JAX-RS (Apache CXF)

This post is a logical continuation of the previous one. The only difference is the container we are going to use: instead of Jetty it will be our old buddy Apache Tomcat. Surprisingly, it was very easy to embed the latest Apache Tomcat 7, so let me show that now. I won’t repeat the last post in full, as there are no changes except in the POM file and the Starter class. Aside from those two, we are reusing everything we have done before. In the POM file, we need to remove the Jetty dependencies and replace them with the Apache Tomcat ones. The first change is within the properties section, where we replace org.eclipse.jetty.version with org.apache.tomcat. So this line:

<org.eclipse.jetty.version>8.1.8.v20121106</org.eclipse.jetty.version>

becomes:

<org.apache.tomcat>7.0.34</org.apache.tomcat>

The second change is in the dependencies themselves; we replace these lines:

<dependency>
    <groupId>org.eclipse.jetty</groupId>
    <artifactId>jetty-server</artifactId>
    <version>${org.eclipse.jetty.version}</version>
</dependency>
<dependency>
    <groupId>org.eclipse.jetty</groupId>
    <artifactId>jetty-webapp</artifactId>
    <version>${org.eclipse.jetty.version}</version>
</dependency>

with these ones:

<dependency>
    <groupId>org.apache.tomcat.embed</groupId>
    <artifactId>tomcat-embed-core</artifactId>
    <version>${org.apache.tomcat}</version>
</dependency>
<dependency>
    <groupId>org.apache.tomcat.embed</groupId>
    <artifactId>tomcat-embed-logging-juli</artifactId>
    <version>${org.apache.tomcat}</version>
</dependency>

Great, this part is done. The last part is dedicated to the changes in our main class implementation, where we replace Jetty with Apache Tomcat.
package com.example;

import java.io.File;
import java.io.IOException;

import org.apache.catalina.Context;
import org.apache.catalina.loader.WebappLoader;
import org.apache.catalina.startup.Tomcat;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.cxf.transport.servlet.CXFServlet;
import org.springframework.web.context.ContextLoaderListener;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;

import com.example.config.AppConfig;

public class Starter {
    private final static Log log = LogFactory.getLog( Starter.class );

    public static void main(final String[] args) throws Exception {
        final File base = createBaseDirectory();
        log.info( "Using base folder: " + base.getAbsolutePath() );

        final Tomcat tomcat = new Tomcat();
        tomcat.setPort( 8080 );
        tomcat.setBaseDir( base.getAbsolutePath() );

        Context context = tomcat.addContext( "/", base.getAbsolutePath() );
        Tomcat.addServlet( context, "CXFServlet", new CXFServlet() );
        context.addServletMapping( "/rest/*", "CXFServlet" );
        context.addApplicationListener( ContextLoaderListener.class.getName() );
        context.setLoader( new WebappLoader( Thread.currentThread().getContextClassLoader() ) );
        context.addParameter( "contextClass", AnnotationConfigWebApplicationContext.class.getName() );
        context.addParameter( "contextConfigLocation", AppConfig.class.getName() );

        tomcat.start();
        tomcat.getServer().await();
    }

    private static File createBaseDirectory() throws IOException {
        final File base = File.createTempFile( "tmp-", "" );

        if( !base.delete() ) {
            throw new IOException( "Cannot (re)create base folder: " + base.getAbsolutePath() );
        }
        if( !base.mkdir() ) {
            throw new IOException( "Cannot create base folder: " + base.getAbsolutePath() );
        }

        return base;
    }
}

The code looks pretty simple but verbose because of the fact that it seems impossible to run Apache Tomcat in embedded mode without specifying some working directory.
The small createBaseDirectory() function creates a temporary folder which we feed to Apache Tomcat as its baseDir. The implementation reveals that we are running an Apache Tomcat server instance on port 8080, configuring the Apache CXF servlet to handle all requests at the /rest/* path, adding the Spring context listener and finally starting the server up. After building the project as a fat (one) jar, we have a full-blown server hosting our JAX-RS application:

mvn clean package
java -jar target/spring-one-jar-0.0.1-SNAPSHOT.one-jar.jar

And we should see output like this:

Jan 28, 2013 5:54:56 PM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ['http-bio-8080']
Jan 28, 2013 5:54:56 PM org.apache.catalina.core.StandardService startInternal
INFO: Starting service Tomcat
Jan 28, 2013 5:54:56 PM org.apache.catalina.core.StandardEngine startInternal
INFO: Starting Servlet Engine: Apache Tomcat/7.0.34
Jan 28, 2013 5:54:56 PM org.apache.catalina.startup.DigesterFactory register
WARNING: Could not get url for /javax/servlet/jsp/resources/jsp_2_0.xsd
Jan 28, 2013 5:54:56 PM org.apache.catalina.startup.DigesterFactory register
WARNING: Could not get url for /javax/servlet/jsp/resources/jsp_2_1.xsd
Jan 28, 2013 5:54:56 PM org.apache.catalina.startup.DigesterFactory register
WARNING: Could not get url for /javax/servlet/jsp/resources/jsp_2_2.xsd
Jan 28, 2013 5:54:56 PM org.apache.catalina.startup.DigesterFactory register
WARNING: Could not get url for /javax/servlet/jsp/resources/web-jsptaglibrary_1_1.dtd
Jan 28, 2013 5:54:56 PM org.apache.catalina.startup.DigesterFactory register
WARNING: Could not get url for /javax/servlet/jsp/resources/web-jsptaglibrary_1_2.dtd
Jan 28, 2013 5:54:56 PM org.apache.catalina.startup.DigesterFactory register
WARNING: Could not get url for /javax/servlet/jsp/resources/web-jsptaglibrary_2_0.xsd
Jan 28, 2013 5:54:56 PM org.apache.catalina.startup.DigesterFactory register
WARNING: Could not get url for
/javax/servlet/jsp/resources/web-jsptaglibrary_2_1.xsd
Jan 28, 2013 5:54:57 PM org.apache.catalina.loader.WebappLoader setClassPath
INFO: Unknown loader com.simontuffs.onejar.JarClassLoader@187a84e4 class com.simontuffs.onejar.JarClassLoader
Jan 28, 2013 5:54:57 PM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring root WebApplicationContext
Jan 28, 2013 5:54:57 PM org.springframework.web.context.ContextLoader initWebApplicationContext
INFO: Root WebApplicationContext: initialization started
Jan 28, 2013 5:54:58 PM org.springframework.context.support.AbstractApplicationContext prepareRefresh
INFO: Refreshing Root WebApplicationContext: startup date [Mon Jan 28 17:54:58 EST 2013]; root of context hierarchy
Jan 28, 2013 5:54:58 PM org.springframework.context.annotation.ClassPathScanningCandidateComponentProvider registerDefaultFilters
INFO: JSR-330 'javax.inject.Named' annotation found and supported for component scanning
Jan 28, 2013 5:54:58 PM org.springframework.web.context.support.AnnotationConfigWebApplicationContext loadBeanDefinitions
INFO: Successfully resolved class for [com.example.config.AppConfig]
Jan 28, 2013 5:54:58 PM org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor
INFO: JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
Jan 28, 2013 5:54:58 PM org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons
INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@62770d2e: defining beans [org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,appConfig,org.springframework.context.annotation.ConfigurationClassPostProcessor.
importAwareProcessor,cxf,jaxRsServer,jaxRsApiApplication,peopleRestService,peopleService,jsonProvider]; root of factory hierarchy
Jan 28, 2013 5:54:59 PM org.apache.cxf.endpoint.ServerImpl initDestination
INFO: Setting the server's publish address to be /api
Jan 28, 2013 5:54:59 PM org.springframework.web.context.ContextLoader initWebApplicationContext
INFO: Root WebApplicationContext: initialization completed in 1747 ms
Jan 28, 2013 5:54:59 PM org.apache.coyote.AbstractProtocol start
INFO: Starting ProtocolHandler ['http-bio-8080']

Let’s issue some HTTP requests to be sure everything works as expected:

> curl http://localhost:8080/rest/api/people?page=2
[
 {'email':'person+6@at.com','firstName':null,'lastName':null},
 {'email':'person+7@at.com','firstName':null,'lastName':null},
 {'email':'person+8@at.com','firstName':null,'lastName':null},
 {'email':'person+9@at.com','firstName':null,'lastName':null},
 {'email':'person+10@at.com','firstName':null,'lastName':null}
]

> curl http://localhost:8080/rest/api/people -X PUT -d 'email=a@b.com'
{'email':'a@b.com','firstName':null,'lastName':null}

And we are still 100% XML free! One important note though: we create a temporary folder every time but never delete it (calling deleteOnExit for base doesn’t work as expected for non-empty folders). Please keep this in mind (add your own shutdown hook, for example), as I decided to keep the code clean.   Reference: Going REST: embedding Tomcat with Spring and JAX-RS (Apache CXF) from our JCG partner Andrey Redko at the Andriy Redko {devmind} blog. ...
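As a sketch of the shutdown-hook idea mentioned above (class and method names are mine), the temporary base folder can be cleaned up recursively, since a plain delete() fails on non-empty folders:

```java
import java.io.File;

public class TempDirCleaner {
    /** Registers a JVM shutdown hook that recursively deletes the given base folder. */
    public static void deleteRecursivelyOnExit(final File base) {
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                deleteRecursively(base);
            }
        });
    }

    /** Deletes children first, then the file or folder itself. */
    static boolean deleteRecursively(final File file) {
        final File[] children = file.listFiles();
        if (children != null) {
            for (final File child : children) {
                deleteRecursively(child);
            }
        }
        return file.delete();
    }
}
```

Calling TempDirCleaner.deleteRecursivelyOnExit(base) right after createBaseDirectory() in the Starter class would be one way to avoid leaking temporary folders.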

Hello Camel: Automatic File Transfer

Apache Camel is described on its main web page (and in the Camel User Guide) as ‘a versatile open-source integration framework based on known Enterprise Integration Patterns.’ The Camel framework is based on the book Enterprise Integration Patterns and provides implementations of the patterns described in that book. I look at a ‘Hello World’ type example of using Camel in this post. The Camel web page and Users Guide also reference the StackOverflow thread what exactly is Apache Camel? that includes several good descriptions of Apache Camel. David Newcomb has described Camel there:  Apache Camel is messaging technology glue with routing. It joins together messaging start and end points allowing the transference of messages from different sources to different destinations. For example: JMS->JSON, HTTP->JMS or funneling FTP->JMS, HTTP->JMS, JMS=>JSON.In this post, I look at a simple use of Camel that doesn’t require use of a JMS provider or even FTP or HTTP. Keeping the example simple makes it clearer how to use Camel. This example uses Camel to transfer files automatically from a specified directory to a different specified directory. Three cases will be demonstrated. In the first case, files placed in the ‘input’ directory are automatically copied to an ‘output’ directory without affecting the source files. In the second case, the files placed in the ‘input’ directory are automatically copied to an ‘output’ directory and then the files in the ‘input’ directory are stored in a special ‘.camel’ subdirectory under the ‘input’ directory. The third case removes the files from the ‘input’ directory upon copying to the ‘output’ directory (effectively a ‘move’ operation). All three cases are implemented with almost identical code. The only difference between the three is in the single line specifying how Camel should handle the file transfers. 
The next code listing shows the basic code needed to use Camel to automatically copy files placed in an input directory into a different output directory.

/**
 * Simple executable function to demonstrate Camel file transfer.
 *
 * @param arguments Command line arguments; expecting duration in milliseconds
 *    as single argument.
 */
public static void main(final String[] arguments)
{
   final long durationMs = extractDurationMsFromCommandLineArgs(arguments);
   final CamelContext camelContext = new DefaultCamelContext();
   try
   {
      camelContext.addRoutes(
         new RouteBuilder()
         {
            @Override
            public void configure() throws Exception
            {
               from("file:C:\\datafiles\\input?noop=true").to("file:C:\\datafiles\\output");
            }
         });
      camelContext.start();
      Thread.sleep(durationMs);
      camelContext.stop();
   }
   catch (Exception camelException)
   {
      LOGGER.log(
         Level.SEVERE,
         "Exception trying to copy files - {0}",
         camelException.toString());
   }
}

The code above demonstrates minimal use of the Camel API and Camel’s Java DSL support. A CamelContext is defined with an instantiation of DefaultCamelContext. The call to addRoutes adds the Camel route to this instantiated context, and the context is then started and, after the given duration, stopped. It’s all pretty simple, but the most interesting part to me is the specification of the routing inside the configure method. Because the instance implementing the RoutesBuilder interface provided to the Camel context only requires its abstract configure method to be overridden, it is an easy class to instantiate as an anonymous class inline with the call to CamelContext.addRoutes(RoutesBuilder). This is what I did in the code above and is what is done in many of the Camel examples available online. The from(...).to(...) invocation shows highly readable syntax describing the ‘from’ and ‘to’ portions of the routing. In this case, files placed in the input directory (‘from’) are to be copied to the output directory (‘to’).
The ‘file’ protocol is used on both the ‘from’ and ‘to’ portions because the file system is where the ‘message’ is coming from and going to. The ‘?noop=true’ in the ‘from’ call indicates that nothing should be changed about the files in the ‘input’ directory (the processing should have ‘noop’ effect on the source files). As just mentioned, this route instructs Camel to copy files already in or placed in the ‘input’ directory to the specified ‘output’ directory without impacting the files in the ‘input’ directory. In some cases, I may want to ‘move’ the files rather than ‘copy’ them. In such cases, ?delete=true can be specified instead of ?noop=true when specifying the ‘from’ endpoint; in other words, the route can be changed to have files removed from the ‘input’ directory when placed in the ‘output’ directory. If no parameter is designated on the input (neither ?noop=true nor ?delete=true), then an action that falls in between those occurs: the files in the ‘input’ directory are moved into a specially created new subdirectory under the ‘input’ directory called .camel. The three cases are highlighted next.

Files copied from datafiles\input to datafiles\output without impacting original files:
from("file:C:\\datafiles\\input?noop=true").to("file:C:\\datafiles\\output");

Files moved from datafiles\input to datafiles\output:
from("file:C:\\datafiles\\input?delete=true").to("file:C:\\datafiles\\output");

Files copied from datafiles\input to datafiles\output with original files moved to the .camel subdirectory:
from("file:C:\\datafiles\\input").to("file:C:\\datafiles\\output");

As a side note, the uses of fluent ‘from’ and ‘to’ are examples of Camel’s Java DSL. Camel implements this via implementation inheritance (methods like ‘from’ and ‘to’ are defined in the RouteBuilder class) rather than through static imports (an approach often used for Java-based DSLs).
Although it is common to pass anonymous instances of RouteBuilder to the Camel context, this is not a requirement. There can be situations in which it is advantageous to have standalone classes that extend RouteBuilder and have instances of those extended classes passed to the Camel context. I’ll use this approach to demonstrate all three cases described above. In many cases I would have a no-arguments constructor, but here I use the constructor to determine which type of file transfer the Camel route should support. The next code listing shows a named standalone class that handles all three cases (copying, copying with archiving of input files, and moving). This single extension of RouteBuilder takes an enum in its constructor to determine how to configure the input endpoint.

package dustin.examples.camel;

import org.apache.camel.builder.RouteBuilder;

/**
 * Camel-based route builder for transferring files.
 *
 * @author Dustin
 */
public class FileTransferRouteBuilder extends RouteBuilder
{
   public enum FileTransferType
   {
      COPY_WITHOUT_IMPACTING_ORIGINALS("C"),
      COPY_WITH_ARCHIVED_ORIGINALS("A"),
      MOVE("M");

      private final String letter;

      FileTransferType(final String newLetter)
      {
         this.letter = newLetter;
      }

      public String getLetter()
      {
         return this.letter;
      }

      public static FileTransferType fromLetter(final String letter)
      {
         FileTransferType match = null;
         for (final FileTransferType type : FileTransferType.values())
         {
            if (type.getLetter().equalsIgnoreCase(letter))
            {
               match = type;
               break;
            }
         }
         return match;
      }
   }

   private final String fromEndPointString;
   private final static String FROM_BASE = "file:C:\\datafiles\\input";
   private final static String FROM_NOOP = FROM_BASE + "?noop=true";
   private final static String FROM_MOVE = FROM_BASE + "?delete=true";

   public FileTransferRouteBuilder(final FileTransferType newFileTransferType)
   {
      if (newFileTransferType != null)
      {
         switch (newFileTransferType)
         {
            case COPY_WITHOUT_IMPACTING_ORIGINALS :
               this.fromEndPointString = FROM_NOOP;
               break;
            case COPY_WITH_ARCHIVED_ORIGINALS :
               this.fromEndPointString = FROM_BASE;
               break;
            case MOVE :
               this.fromEndPointString = FROM_MOVE;
               break;
            default :
               this.fromEndPointString = FROM_NOOP;
         }
      }
      else
      {
         fromEndPointString = FROM_NOOP;
      }
   }

   @Override
   public void configure() throws Exception
   {
      from(this.fromEndPointString).to("file:C:\\datafiles\\output");
   }
}

This blog post has demonstrated the use of Camel to easily route files from one directory to another. Camel supports numerous other transport mechanisms and data formats that are not shown here, as well as the ability to transform the messages/data being routed. This post focused on what is likely the simplest possible example of applying Camel in a useful manner, but Camel supports far more than is shown in this simple example.
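For completeness, a standalone route builder like the one above would be registered with the Camel context much like the anonymous version; a sketch (the driver class name is mine, and camel-core is assumed on the classpath):

```java
import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;

/** Hypothetical driver showing how a named RouteBuilder is registered and run. */
public class FileTransferDemo {
    public static void main(final String[] args) throws Exception {
        final CamelContext context = new DefaultCamelContext();
        // Choose the desired behavior: copy, copy-with-archive, or move.
        context.addRoutes(new FileTransferRouteBuilder(
            FileTransferRouteBuilder.FileTransferType.MOVE));
        context.start();
        Thread.sleep(10_000); // let the route poll the input directory for a while
        context.stop();
    }
}
```
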
Reference: Hello Camel: Automatic File Transfer from our JCG partner Dustin Marx at the Inspired by Actual Events blog. ...

Appsec and Technical Debt

Technical debt is a fact of life for anyone working in software development: work that needs to be done to make the system cleaner, simpler and cheaper to run over the long term, but that the business doesn’t know about or doesn’t see as a priority. This is because technical debt is mostly hidden from the people that use the system: the system works OK, even if there are shortcuts in design that make the system harder for developers to understand and change than it should be; or code that’s hard to read or that has been copied too many times; maybe some bugs that the customers don’t know about and that the development team is betting they won’t have to fix; and the platform has fallen behind on patches. It’s the same for most application security vulnerabilities. The system runs fine, customers can’t see anything wrong, but there’s something missing or not quite right under the hood, and bad things might happen if these problems aren’t taken care of in time.

Where Does Technical Debt Come From?

Technical debt is the accumulation of many decisions made over the life of a system. Martin Fowler has a nice 2×2 matrix that explains how these decisions add to a system’s debt load. I think that this same matrix can be used to understand more about where application security problems come from, and how to deal with them.

Deliberate Decisions

Many appsec problems come from the top half of the quadrant, where people make deliberate, conscious decisions to shortcut security work when they are designing and developing software. This is where the “debt” metaphor properly applies, because someone is taking out a loan against the future, trading off time against cost – making a strategic decision to save time now and get the software out the door, knowing that they have taken on risks and costs that will have to be repaid later. This is the kind of decision that technology startups make all the time.
Thinking Lean, it really doesn’t matter if a system is secure if nobody ever uses it. So build out the important features first and get customers using them, then take care of making sure everything’s secure later, if the company lasts that long. Companies that do make it this far often end up in a vicious cycle of getting hacked, fixing vulnerabilities and getting hacked again, until they rewrite a lot of the code and eventually change how they think about security and secure development. Whether you are acting recklessly (top left) or prudently (top right) depends on whether you understand what your security and privacy obligations are, and understand what risks you are taking on by not meeting them. Are you considering security in requirements and in the design of the system and in how it’s built? Are you keeping track of the trade-offs that you are making? Do you know what it takes to build a secure system, and are you prepared to build more security in later, knowing how much this is going to cost? Unfortunately, when it comes to application security, many of these decisions are made irresponsibly. But there are also situations when people don’t know enough about application security to make conscious trade-off decisions, even reckless ones. They are in the bottom half of the quadrant, making mistakes and taking on significant risks without knowing it.

Inadvertent Mistakes

Many technical debt problems (and a lot of application security vulnerabilities) are the result of ignorance: of developers not understanding enough about the kind of system they are building, or the language or platform that they are using, or even the basics of making software, to know if they are doing something wrong or if they aren’t doing something that they should be doing. This is technical debt that is hidden even from people inside the team.
When it comes to appsec, there are too many simple things that too many developers still don’t know about, like how to write embedded SQL properly to protect an app from SQL injection, or how important data input validation is and how to do it right, or even how to do something as simple as a Forgot Password function without messing it up and creating security holes. When they’re writing code badly without knowing it, they’re in the bottom left corner of the technical debt quadrant – reckless and ignorant. But it’s also too easy for teams who are trying to be responsible (bottom right) to miss things or make bad mistakes, because they don’t understand the black magic of how to store passwords securely, or because they don’t know about Content Security Policy protection against XSS in web apps, or how to use tokens to protect sessions against CSRF, or any of the many platform-specific and situation-specific security holes that they have to plug. Most developers won’t know about these problems unless they get training, or until they fail an audit or a pen test, or until the system gets hacked – or maybe they will never know about them, whether the system has been hacked or not.

Appsec Vulnerabilities as Debt

Thinking of application security vulnerabilities as debt offers some new insights, and a new vocabulary when talking with developers and managers who already understand the idea of technical debt. Chris Wysopal at Veracode has gone further and created a sensible application security debt model that borrows from existing cost models for technical debt, calculating the cost of latent application security vulnerabilities based on risk factors: breach probability and potential breach cost. Financial debt models like this are intended to help people (especially managers) understand the potential cost of technical debt or application security debt, and make them act more responsibly towards managing their debt.
But unfortunately tracking debt costs hasn’t helped the world’s major governments face up to their debt obligations, and it doesn’t seem to affect how most individuals manage their personal debt. And I don’t think that this approach will create real change in how businesses think of application security debt or technical debt, or how much effort they will put into addressing it. Too many people in too many organizations have become too accustomed to living with debt, and they have learned to accept it as part of how they work. Paying off debt can always be put off until later, even if later never comes. Adding appsec vulnerabilities to the existing debt that most managers and developers are already dealing with isn’t going to get vulnerabilities taken care of faster, even vulnerabilities that have a high “interest cost”. We need a different way to convince managers and developers that application security needs to be taken seriously.   Reference: Appsec and Technical Debt from our JCG partner Jim Bird at the Building Real Software blog. ...


JNDI stands for Java Naming and Directory Interface. It is an API providing access to a directory service, that is, a service mapping names (strings) to objects, references to remote objects, or simple data. This is called binding. The set of bindings is called the context. Applications use the JNDI interface to access resources. To put it very simply, it is like a hashmap with String keys and Object values representing resources on the web. Often, these resources are organized according to a hierarchy in directory services. Levels are defined with separators (for example ‘.’ for DNS, ‘,’ for LDAP). This is a naming convention, and each context has its own naming convention. SPI stands for Service Provider Interface; in other words, these are APIs for services. JNDI specifies an SPI for implementing directory services. Objects stored in directories can have attributes (id and value), and CRUD operations can be performed on these attributes. Rather than providing a name, one can also search for objects according to their attributes, if the directory allows it. The information provided by user applications is called a search filter.

What Issues Does JNDI Solve? Without JNDI, the location or access information of remote resources would have to be hard-coded in applications or made available in a configuration. Maintaining this information is quite tedious and error-prone. If a resource has been relocated to another server, with another IP address for example, all applications using this resource would have to be updated with this new information. With JNDI, this is not necessary: only the corresponding resource binding has to be updated. Applications can still access the resource by its name, and the relocation is transparent. Another common use is when applications are moved from a development environment to a testing environment and finally to production. At each stage, one may want to use a different database for development, testing and production.
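As a concrete illustration, this is how an application typically retrieves a container-configured database through JNDI (the name java:comp/env/jdbc/AppDB is a hypothetical example; actual names depend on the container configuration):

```java
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class DataSourceLocator {
    /** Looks up a container-managed DataSource by its JNDI name. */
    public static DataSource lookup(final String jndiName) throws NamingException {
        final Context ctx = new InitialContext();
        try {
            return (DataSource) ctx.lookup(jndiName);
        } finally {
            ctx.close();
        }
    }
}
```

Outside a container the lookup fails with a NamingException, which is exactly the point: the binding lives in the environment’s directory service, not in the application itself.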
In each context, one can bind the same name to a different database. The application does not need to be updated. What is LDAP? LDAP stands for Lightweight Directory Access Protocol. It is often used as a directory service in JNDI. Today, companies set up LDAP servers dedicated to responding to JNDI requests. A common use is to maintain a list of company employees, together with their emails and access credentials to miscellaneous applications. By centralizing this information, each application does not have to store its own copy of employee information in its own database, which is easier to maintain and less prone to errors and incoherencies. What About JCA and CCI? JCA stands for Java EE Connector Architecture. It is a Java technology that helps application servers and their applications connect to other information systems by providing connections to them. JCA defines its own SPI for the connector service. CCI stands for Common Client Interface. It is defined as part of JCA. It is the API user applications use to access JCA connection services. JCA helps integrate information systems developed separately. Typically, instead of using JDBC to access databases, which is more or less equivalent to hard-coding configurations, a user application can use JCA to connect to these databases (or information systems). The JCA instance can be registered in the JNDI directory and retrieved by user applications too. What About Web Applications? Typically, web applications run in containers called application servers. Web applications can create their own JNDI service to store objects, but they can also retrieve objects from the container itself by using their corresponding names. In this case, the resource (often a database) is configured at the container level.   Reference: What Is JNDI, SPI, CCI, LDAP And JCA? from our JCG partner Jerome Versrynge at the Technical Notes blog. ...

Android UI: Taking a look at iosched open source app

So, yes, open source is great: it helps everyone learn how to do things right (or even just learn more about some framework). This week I needed to look at the source code of two open source apps for Android: iosched and Ubuntu One for Android. They’re both pretty good, and their source code deserves attention and a closer look. Now, I’m going to tell you about a nice implementation I found in the iosched app: the single-pane pattern. I didn’t find it very useful at first, but once you start using it, you find it really cool! It provides a simple way to define activities whose content is just one fragment. Simple, huh?

/**
 * Copyright 2012 Google Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.google.android.apps.iosched.ui;

import com.google.android.apps.iosched.R;

import android.content.Intent;
import android.os.Bundle;
import android.support.v4.app.Fragment;

/**
 * A {@link BaseActivity} that simply contains a single fragment. The intent used to invoke this
 * activity is forwarded to the fragment as arguments during fragment instantiation. Derived
 * activities should only need to implement {@link SimpleSinglePaneActivity#onCreatePane()}.
 */
public abstract class SimpleSinglePaneActivity extends BaseActivity {
    private Fragment mFragment;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_singlepane_empty);

        if (getIntent().hasExtra(Intent.EXTRA_TITLE)) {
            setTitle(getIntent().getStringExtra(Intent.EXTRA_TITLE));
        }

        final String customTitle = getIntent().getStringExtra(Intent.EXTRA_TITLE);
        setTitle(customTitle != null ? customTitle : getTitle());

        if (savedInstanceState == null) {
            mFragment = onCreatePane();
            mFragment.setArguments(intentToFragmentArguments(getIntent()));
            getSupportFragmentManager().beginTransaction()
                    .add(R.id.root_container, mFragment, "single_pane")
                    .commit();
        } else {
            mFragment = getSupportFragmentManager().findFragmentByTag("single_pane");
        }
    }

    /**
     * Called in <code>onCreate</code> when the fragment constituting this activity is needed.
     * The returned fragment's arguments will be set to the intent used to invoke this activity.
     */
    protected abstract Fragment onCreatePane();

    public Fragment getFragment() {
        return mFragment;
    }
}

Although in this case it extends BaseActivity, you could extend Activity, SherlockFragmentActivity, RoboSherlockFragmentActivity or whatever fits your project. Just take into account that it must be able to use fragments. As you can see, it’s pretty simple: you just have to extend this class and override onCreatePane() in your child activity. 
public class ExampleOneFragmentActivity extends SimpleSinglePaneActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
    }

    @Override
    protected Fragment onCreatePane() {
        return new ExampleOneFragment();
    }
}

and this would be the fragment’s layout (that one is picked from iosched):

<ScrollView xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <LinearLayout
        android:orientation="vertical"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:paddingLeft="@dimen/content_padding_normal"
        android:paddingRight="@dimen/content_padding_normal"
        android:paddingTop="@dimen/content_padding_normal"
        android:paddingBottom="@dimen/content_padding_normal">

        <TextView
            android:id="@+id/vendor_name"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            style="@style/TextHeader" />

        <TextView
            android:id="@+id/vendor_url"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:autoLink="web"
            android:paddingBottom="@dimen/element_spacing_normal"
            style="@style/TextBody" />

        <com.google.android.apps.iosched.ui.widget.BezelImageView
            android:id="@+id/vendor_logo"
            android:scaleType="centerCrop"
            android:layout_width="@dimen/vendor_image_size"
            android:layout_height="@dimen/vendor_image_size"
            android:layout_marginTop="@dimen/element_spacing_normal"
            android:src="@drawable/sandbox_logo_empty" />

        <TextView
            android:id="@+id/vendor_desc"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:layout_marginTop="@dimen/element_spacing_normal"
            android:paddingTop="@dimen/element_spacing_normal"
            style="@style/TextBody" />
    </LinearLayout>
</ScrollView>

Don’t forget that this simple pattern also lets you pass an extra (Intent.EXTRA_TITLE) to set a new title! Or you can use these two methods from BaseActivity to pass arguments from activities to fragments. 
/**
 * Converts an intent into a {@link Bundle} suitable for use as fragment arguments.
 */
public static Bundle intentToFragmentArguments(Intent intent) {
    Bundle arguments = new Bundle();
    if (intent == null) {
        return arguments;
    }

    final Uri data = intent.getData();
    if (data != null) {
        arguments.putParcelable("_uri", data);
    }

    final Bundle extras = intent.getExtras();
    if (extras != null) {
        arguments.putAll(intent.getExtras());
    }

    return arguments;
}

/**
 * Converts a fragment arguments bundle into an intent.
 */
public static Intent fragmentArgumentsToIntent(Bundle arguments) {
    Intent intent = new Intent();
    if (arguments == null) {
        return intent;
    }

    final Uri data = arguments.getParcelable("_uri");
    if (data != null) {
        intent.setData(data);
    }

    intent.putExtras(arguments);
    intent.removeExtra("_uri");
    return intent;
}

  Reference: Android UI: Taking a look at iosched open source app from our JCG partner Javier Manzano at the Javier Manzano’s Blog blog. ...

Static factory methods vs traditional constructors

I’ve previously talked a little bit about the Builder Pattern, a useful pattern to instantiate classes with several (possibly optional) attributes that results in client code that is easier to read, write and maintain, among other benefits. Today, I’m going to continue exploring object creation techniques, but this time for a more general case. Take the following example, which is by no means a useful class other than to make my point. We have a RandomIntGenerator class that, as the name suggests, generates random int numbers. Something like:

public class RandomIntGenerator {
    private final int min;
    private final int max;

    public int next() {...}
}

Our generator takes a minimum and maximum and then generates random numbers between those 2 values. Notice that the two attributes are declared final, so we have to initialize them either on their declaration or in the class constructor. Let’s go with the constructor:

public RandomIntGenerator(int min, int max) {
    this.min = min;
    this.max = max;
}

Now, we also want to give our clients the possibility to specify just a minimum value and then generate random values between that minimum and the max possible value for ints. So we add a second constructor:

public RandomIntGenerator(int min) {
    this.min = min;
    this.max = Integer.MAX_VALUE;
}

So far so good, right? But in the same way that we provided a constructor to just specify the minimum value, we want to do the same for just the maximum. We’ll just add a third constructor like:

public RandomIntGenerator(int max) {
    this.min = Integer.MIN_VALUE;
    this.max = max;
}

If you try that, you’ll get a compilation error that goes: Duplicate method RandomIntGenerator(int) in type RandomIntGenerator. What’s wrong? The problem is that constructors, by definition, have no names. As such, a class can only have one constructor with a given signature, in the same way that you can’t have two methods with the same signature (same name and parameter types). 
That is why when we tried to add the RandomIntGenerator(int max) constructor we got that compilation error, because we already had the RandomIntGenerator(int min) one. Is there something we can do in cases like this one? Not with constructors, but fortunately there’s something else we can use: static factory methods, which are simply public static methods that return an instance of the class. You’ve probably used this technique without even realizing it. Have you ever used Boolean.valueOf? It looks something like:

public static Boolean valueOf(boolean b) {
    return (b ? TRUE : FALSE);
}

Applying static factories to our RandomIntGenerator example, we could get:

public class RandomIntGenerator {
    private final int min;
    private final int max;

    private RandomIntGenerator(int min, int max) {
        this.min = min;
        this.max = max;
    }

    public static RandomIntGenerator between(int min, int max) {
        return new RandomIntGenerator(min, max);
    }

    public static RandomIntGenerator biggerThan(int min) {
        return new RandomIntGenerator(min, Integer.MAX_VALUE);
    }

    public static RandomIntGenerator smallerThan(int max) {
        return new RandomIntGenerator(Integer.MIN_VALUE, max);
    }

    public int next() {...}
}

Note how the constructor was made private to ensure that the class is only instantiated through its public static factory methods. Also note how your intent is clearly expressed when a client writes RandomIntGenerator.between(10, 20) instead of new RandomIntGenerator(10, 20). It’s worth mentioning that this technique is not the same as the Factory Method design pattern from the Gang of Four. Any class can provide static factory methods instead of, or in addition to, constructors. So what are the advantages and disadvantages of this technique? We already mentioned the first advantage of static factory methods: unlike constructors, they have names. This has two direct consequences. First, we can provide a meaningful name for our constructors. 
Second, we can provide several constructors with the same number and type of parameters, something that, as we saw earlier, we can’t do with class constructors. Another advantage of static factories is that, unlike constructors, they are not required to return a new object every time they are invoked. This is extremely useful when working with immutable classes, to provide constant objects for commonly used values and avoid creating unnecessary duplicate objects. The Boolean.valueOf code that I showed previously illustrates this point perfectly. Notice that this static method returns either TRUE or FALSE, both immutable Boolean objects. A third advantage of static factory methods is that they can return an object of any subtype of their return type. This gives you the possibility to change the return type freely without affecting clients. Moreover, you can hide implementation classes and have an interface-based API, which is usually a really good idea. But I think this can be better seen with an example. Remember the RandomIntGenerator at the beginning of this post? Let’s make it a little bit more complicated. Imagine that we now want to provide random generators not just for integers but for other data types like String, Double or Long. 
They are all going to have a next() method that returns a random object of a particular type, so we could start with an interface like:

public interface RandomGenerator<T> {
    T next();
}

Our first implementation, RandomIntGenerator, now becomes:

class RandomIntGenerator implements RandomGenerator<Integer> {
    private final int min;
    private final int max;

    RandomIntGenerator(int min, int max) {
        this.min = min;
        this.max = max;
    }

    public Integer next() {...}
}

We could also have a String generator:

class RandomStringGenerator implements RandomGenerator<String> {
    private final String prefix;

    RandomStringGenerator(String prefix) {
        this.prefix = prefix;
    }

    public String next() {...}
}

Notice how all the classes are declared package-private (default scope) and so are their constructors. This means that no client outside of their package can create instances of these generators. So what do we do? Tip: it starts with “static” and ends with “methods”. Consider the following class:

public final class RandomGenerators {
    // Suppresses default constructor, ensuring non-instantiability.
    private RandomGenerators() {}

    public static final RandomGenerator<Integer> getIntGenerator() {
        return new RandomIntGenerator(Integer.MIN_VALUE, Integer.MAX_VALUE);
    }

    public static final RandomGenerator<String> getStringGenerator() {
        return new RandomStringGenerator("");
    }
}

RandomGenerators is just a noninstantiable utility class with nothing else than static factory methods. Being in the same package as the different generators, this class can effectively access and instantiate those classes. But here comes the interesting part. Note that the methods only return the RandomGenerator interface, and that’s all the clients really need. If they get a RandomGenerator<Integer>, they know that they can call next() and get a random integer. Imagine that next month we code a super efficient new integer generator. 
Provided that this new class implements RandomGenerator<Integer>, we can change the return type of the static factory method and all clients are now magically using the new implementation without even noticing the change. Classes like RandomGenerators are quite common, both in the JDK and in third-party libraries. You can see examples in Collections (in java.util) and in Lists, Sets or Maps in Guava. The naming convention is usually the same: if you have an interface named Type, you put your static factory methods in a noninstantiable class named Types. A final advantage of static factories is that they make instantiating parameterized classes a lot less verbose. Have you ever had to write code like this?

Map<String, List<String>> map = new HashMap<String, List<String>>();

You are repeating the same parameters twice on the same line of code. Wouldn’t it be nice if the right side of the assignment could be inferred from the left side? Well, with static factories it can. The following code is taken from Guava’s Maps class:

public static <K, V> HashMap<K, V> newHashMap() {
    return new HashMap<K, V>();
}

So now our client code becomes:

Map<String, List<String>> map = Maps.newHashMap();

Pretty nice, isn’t it? This capability is known as type inference. It’s worth mentioning that Java 7 introduced type inference for constructors through the use of the diamond operator. So if you’re using Java 7 you can write the previous example as:

Map<String, List<String>> map = new HashMap<>();

The main disadvantage of static factories is that classes without public or protected constructors cannot be extended. But this might actually be a good thing in some cases, because it encourages developers to favor composition over inheritance. To summarize, static factory methods provide a lot of benefits and just one drawback that might actually not be a problem when you think about it. 
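Before wrapping up, the instance-control advantage mentioned earlier (a factory is free to return a cached object instead of allocating a new one) can be sketched in a few lines. The Percentage class and its cache are invented for this illustration; compare Boolean.valueOf or Integer.valueOf in the JDK.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of "instance control": the static factory caches instances for
// commonly used values instead of creating a new object on every call.
public final class Percentage {
    private static final Map<Integer, Percentage> CACHE = new HashMap<>();
    private final int value;

    private Percentage(int value) {
        this.value = value;
    }

    public static Percentage of(int value) {
        // computeIfAbsent returns the cached instance when one already exists.
        return CACHE.computeIfAbsent(value, Percentage::new);
    }

    public int value() {
        return value;
    }

    public static void main(String[] args) {
        // Two lookups of the same value yield the very same object.
        System.out.println(Percentage.of(50) == Percentage.of(50)); // prints true
    }
}
```

In real code the cache would also need to consider thread safety (the JDK typically uses preallocated constants or concurrent caches); a plain HashMap keeps the sketch short.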
Therefore, resist the urge to automatically provide public constructors and evaluate whether static factories are a better fit for your class.   Reference: Static factory methods vs traditional constructors from our JCG partner Jose Luis at the Development the way it should be blog. ...

The ins and outs of immutability

So in my first post I talked a little bit about the builder pattern and I mentioned a really powerful but yet overlooked concept: immutability. What is an immutable class? It’s simply a class whose instances can’t be modified. Every value for the class’ attributes is set on their declaration or in its constructor, and they keep those values for the rest of the object’s life-cycle. Java has quite a few immutable classes, such as String, all the boxed primitives (Double, Integer, Float, etc.), BigInteger and BigDecimal among others. There is a good reason for this: immutable classes are easier to design, implement and use than mutable classes. Once they are instantiated they can only be in one state, so they are less error prone and, as we’ll see later in this post, they are more secure. How do you ensure that a class is immutable? Just follow these 5 simple steps:

1. Don’t provide any public methods that modify the object’s state, also known as mutators (such as setters).
2. Prevent the class from being extended. This doesn’t allow any malicious or careless class to extend our class and compromise its immutable behavior. The usual and easier way to do this is to mark the class as final, but there’s another way that I’ll mention in this post.
3. Make all fields final. This is a way to let the compiler enforce point number 1 for you. Additionally, it clearly lets anyone who sees your code know that you don’t want those fields to change their values once they are set.
4. Make all fields private. This one should be pretty obvious and you should follow it regardless of whether you’re taking immutability into consideration or not, but I’m mentioning this just in case.
5. Never provide access to any mutable attribute. If your class has a mutable object as one of its properties (such as a List, a Map or any other mutable object from your domain problem), make sure that clients of your class can never get a reference to that object. 
This means that you should never directly return a reference to them from an accessor (e.g., a getter), and you should never initialize them in your constructor with a reference passed as a parameter from a client. You should always make defensive copies in this case.

That’s a lot of theory and no code, so let’s see what a simple immutable class looks like and how it deals with the 5 steps I mentioned before:

public class Book {
    private final String isbn;
    private final int publicationYear;
    private final List reviews;

    private Book(BookBuilder builder) {
        this.isbn = builder.isbn;
        this.publicationYear = builder.publicationYear;
        this.reviews = Lists.newArrayList(builder.reviews);
    }

    public String getIsbn() {
        return isbn;
    }

    public int getPublicationYear() {
        return publicationYear;
    }

    public List getReviews() {
        return Lists.newArrayList(reviews);
    }

    public static class BookBuilder {
        private String isbn;
        private int publicationYear;
        private List reviews;

        public BookBuilder isbn(String isbn) {
            this.isbn = isbn;
            return this;
        }

        public BookBuilder publicationYear(int year) {
            this.publicationYear = year;
            return this;
        }

        public BookBuilder reviews(List reviews) {
            this.reviews = reviews == null ? new ArrayList() : reviews;
            return this;
        }

        public Book build() {
            return new Book(this);
        }
    }
}

We’ll go through the important points in this pretty simple class. First of all, as you’ve probably noticed, I’m using the builder pattern again. This is not just because I’m a big fan of it, but also because I wanted to illustrate a few points that I didn’t want to get into in my previous post without first giving you a basic understanding of the concept of immutability. Now, let’s go through the steps that I mentioned you need to follow to make a class immutable and see if they hold for this Book example.

Don’t provide any public methods that modify the object’s state. 
Notice that the only methods on the class are its private constructor and getters for its properties, but no method to change the object’s state.

Prevent the class from being extended. This one is quite tricky. I mentioned that the easiest way to ensure this was to make the class final, but the Book class is clearly not final. However, notice that the only constructor available is private. The compiler makes sure that a class without public or protected constructors cannot be subclassed. So in this case the final keyword on the class declaration is not necessary, but it might be a good idea to include it anyway, just to make your intention clear to anyone who sees your code.

Make all fields final. Pretty straightforward: all attributes on the class are declared as final.

Never provide access to any mutable attribute. This one is actually quite interesting. Notice how the Book class has a List<String> attribute that is declared as final and whose value is set on the class constructor. However, this List is a mutable object. That is, while the reviews reference cannot change once it is set, the content of the list can. A client with a reference to the same list could add or delete an element and, as a result, change the state of the Book object after its creation. For this reason, note that in the Book constructor we don’t assign the reference directly. Instead, we use the Guava library to make a copy of the list by calling this.reviews = Lists.newArrayList(builder.reviews);. The same situation can be seen in the getReviews method, where we return a copy of the list instead of the direct reference. It is worth noting that this example might be a bit oversimplified, because the reviews list can only contain strings, which are immutable. 
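The danger behind that last point is easy to demonstrate with a stripped-down sketch. This version uses plain java.util instead of Guava, and the class and field names are invented for the example: without the defensive copies, a caller holding the original list could mutate the object after construction.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// A minimal Book-like class that makes defensive copies of its list,
// both when the object is built and when the list is read back.
final class ReviewedBook {
    private final List<String> reviews;

    ReviewedBook(List<String> reviews) {
        // Defensive copy in: later changes to the caller's list don't leak in.
        this.reviews = new ArrayList<>(reviews);
    }

    List<String> getReviews() {
        // Defensive copy out: callers can't mutate our internal state.
        return new ArrayList<>(reviews);
    }
}

public class DefensiveCopyDemo {
    public static void main(String[] args) {
        List<String> original = new ArrayList<>(Arrays.asList("great read"));
        ReviewedBook book = new ReviewedBook(original);

        original.add("terrible!");    // mutate the list we passed in
        book.getReviews().clear();    // mutate the list we got back

        // The book's state is unaffected by both mutations.
        System.out.println(book.getReviews()); // prints [great read]
    }
}
```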
If the type of the list is a mutable class, then you would also have to make a copy of each object in the list, and not just the list itself. That last point illustrates why immutable classes result in cleaner designs and easier to read code. You can just share those immutable objects around without having to worry about defensive copies. In fact, your clients should never need to make any copies at all, because any copy of the object would be forever equal to the original. A corollary is that immutable objects are just plain simple. They can be in only one state and they keep that state for their entire life. You can use the class constructor to check any invariants (i.e., conditions that need to hold on the class, like the range of values for one of its attributes) and then you can ensure that those invariants will remain true without any effort on your part or your clients’. Another huge benefit of immutable objects is that they are inherently thread-safe. They cannot be corrupted by multiple threads accessing the objects concurrently. This is, by far, the easiest and least error prone approach to provide thread safety in your application. But what if you already have a Book instance and you want to change the value of one of its attributes? In other words, you want to change the state of the object. On an immutable class this is, by definition, not possible. But, as with most things in software, there’s always a workaround. In this particular case there are actually two. The first option is to use the Fluent Interface technique on the Book class and have setter-like methods that actually create a new object with the same values for all its attributes except for the one you want to change. 
In our example we would have to add the following to the Book class:

private Book(BookBuilder builder) {
    this(builder.isbn, builder.publicationYear, builder.reviews);
}

private Book(String isbn, int publicationYear, List reviews) {
    this.isbn = isbn;
    this.publicationYear = publicationYear;
    this.reviews = Lists.newArrayList(reviews);
}

public Book withIsbn(String isbn) {
    return new Book(isbn, this.publicationYear, this.reviews);
}

Note that we added a new private constructor where we can specify the value of each attribute and modified the old constructor to use the new one. Additionally, we added a new method that returns a new Book object with the value we wanted for the isbn attribute. The same concept applies to the rest of the class’ attributes. This is known as a functional approach, because methods return the result of operating on their parameters without modifying them. This is to contrast it with the procedural or imperative approach, where methods apply a procedure to their operands, thus changing their state. This approach to generating new objects shows the only real disadvantage of immutable classes: they require us to create a new object for each distinct value we need, and this can produce a considerable overhead in performance and memory consumption. This problem is magnified if you want to change several attributes of the object, because you are generating a new object in each step and you end up discarding all intermediate objects and keeping just the last result. We can provide a better alternative for multi-step operations like the one I described in the last paragraph with the help of the builder pattern. Basically, we add a new constructor to the builder that takes an already created instance to set all its initial values. Then, the client can use the builder in the usual way to set all the desired values and then use the build method to create the final object. 
That way, we avoid creating intermediate objects with only some of the values we need. In our example, this technique would look something like this on the builder side:

public BookBuilder(Book book) {
    this.isbn = book.getIsbn();
    this.publicationYear = book.getPublicationYear();
    this.reviews = book.getReviews();
}

Then, in our clients, we can have:

Book originalBook = getRandomBook();

Book modifiedBook = new BookBuilder(originalBook).isbn("123456").publicationYear(2011).build();

Now, obviously the builder is not thread-safe, so you have to take all the usual precautions, such as not sharing a builder between multiple threads. I mentioned that the fact that we have to create a new object for every change in state can have an overhead in performance, and this is the only real disadvantage of immutable classes. However, object creation is one of the aspects of the JVM that is under continuous improvement. In fact, except for exceptional cases, object creation is a lot more efficient than you probably think. In any case, it’s usually a good idea to come up with a simple and clear design and then, only after measuring, refactor for performance. Nine out of ten times when you try to guess where your code is spending so much time, you’ll discover that you were wrong. Additionally, the fact that immutable objects can be shared freely without having to worry about the consequences gives you the chance to encourage clients to reuse existing instances wherever possible, thus reducing considerably the number of objects created. A common way to do this is to provide public static final constants for the most common values. This technique is heavily used in the JDK, for example in Boolean.FALSE or BigDecimal.ZERO. To conclude this post, if you want to take one thing away from it, let it be this: classes should be immutable unless there’s a very good reason to make them mutable. Don’t automatically add a setter for every class attribute. 
If for some reason you absolutely can’t make your class immutable, then limit its mutability as much as possible. The fewer states in which an object can be, the easier it is to think about the object and its invariants. And don’t fret about the performance overhead of immutability; chances are that it will never be a problem.   Reference: The ins and outs of immutability from our JCG partner Jose Luis at the Development the way it should be blog. ...

The builder pattern in practice

I’m not going to dive into much detail about the pattern, because there are already tons of posts and books that explain it in fine detail. Instead, I’m going to tell you why and when you should consider using it. However, it is worth mentioning that this pattern is a bit different from the one presented in the Gang of Four book. While the original pattern focuses on abstracting the steps of construction so that by varying the builder implementation used we can get a different result, the pattern explained in this post deals with removing the unnecessary complexity that stems from multiple constructors, multiple optional parameters and overuse of setters. Imagine you have a class with a substantial amount of attributes, like the User class below. Let’s assume that you want to make the class immutable (which, by the way, you should always strive to do unless there’s a really good reason not to. But we’ll get to that in a different post).

public class User {
    private final String firstName;  // required
    private final String lastName;   // required
    private final int age;           // optional
    private final String phone;      // optional
    private final String address;    // optional
    ...
}

Now, imagine that some of the attributes in your class are required while others are optional. How would you go about building an object of this class? All attributes are declared final, so you have to set them all in the constructor, but you also want to give the clients of this class the chance of ignoring the optional attributes. A first and valid option would be to have a constructor that only takes the required attributes as parameters, one that takes all the required attributes plus the first optional one, another one that takes two optional attributes and so on. What does that look like? 
Something like this:

public User(String firstName, String lastName) {
    this(firstName, lastName, 0);
}

public User(String firstName, String lastName, int age) {
    this(firstName, lastName, age, "");
}

public User(String firstName, String lastName, int age, String phone) {
    this(firstName, lastName, age, phone, "");
}

public User(String firstName, String lastName, int age, String phone, String address) {
    this.firstName = firstName;
    this.lastName = lastName;
    this.age = age;
    this.phone = phone;
    this.address = address;
}

The good thing about this way of building objects of the class is that it works. However, the problem with this approach should be pretty obvious. When you only have a couple of attributes it’s not such a big deal, but as that number increases the code becomes harder to read and maintain. More importantly, the code becomes increasingly harder for clients. Which constructor should I invoke as a client? The one with 2 parameters? The one with 3? What is the default value for those parameters where I don’t pass an explicit value? What if I want to set a value for address but not for age and phone? In that case I would have to call the constructor that takes all the parameters and pass default values for those that I don’t care about. Additionally, several parameters of the same type can be confusing. Was the first String the phone number or the address? So what other choice do we have for these cases? We can always follow the JavaBeans convention, where we have a default no-arg constructor and setters and getters for every attribute. 
Something like:

public class User {
    private String firstName; // required
    private String lastName;  // required
    private int age;          // optional
    private String phone;     // optional
    private String address;   // optional

    public String getFirstName() {
        return firstName;
    }
    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }
    public String getLastName() {
        return lastName;
    }
    public void setLastName(String lastName) {
        this.lastName = lastName;
    }
    public int getAge() {
        return age;
    }
    public void setAge(int age) {
        this.age = age;
    }
    public String getPhone() {
        return phone;
    }
    public void setPhone(String phone) {
        this.phone = phone;
    }
    public String getAddress() {
        return address;
    }
    public void setAddress(String address) {
        this.address = address;
    }
}

This approach seems easier to read and maintain. As a client I can just create an empty object and then set only the attributes that I’m interested in. So what’s wrong with it? There are two main problems with this solution. The first issue has to do with having an instance of this class in an inconsistent state. If you want to create a User object with values for all its 5 attributes, then the object will not have a complete state until all the setX methods have been invoked. This means that some part of the client application might see this object and assume that it is already constructed while that’s actually not the case. The second disadvantage of this approach is that now the User class is mutable. You’re losing all the benefits of immutable objects. Fortunately, there is a third choice for these cases: the builder pattern. The solution will look something like the following.
public class User {
    private final String firstName; // required
    private final String lastName;  // required
    private final int age;          // optional
    private final String phone;     // optional
    private final String address;   // optional

    private User(UserBuilder builder) {
        this.firstName = builder.firstName;
        this.lastName = builder.lastName;
        this.age = builder.age;
        this.phone = builder.phone;
        this.address = builder.address;
    }

    public String getFirstName() {
        return firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public int getAge() {
        return age;
    }

    public String getPhone() {
        return phone;
    }

    public String getAddress() {
        return address;
    }

    public static class UserBuilder {
        private final String firstName;
        private final String lastName;
        private int age;
        private String phone;
        private String address;

        public UserBuilder(String firstName, String lastName) {
            this.firstName = firstName;
            this.lastName = lastName;
        }

        public UserBuilder age(int age) {
            this.age = age;
            return this;
        }

        public UserBuilder phone(String phone) {
            this.phone = phone;
            return this;
        }

        public UserBuilder address(String address) {
            this.address = address;
            return this;
        }

        public User build() {
            return new User(this);
        }
    }
}

A couple of important points worth noting:

- The User constructor is private, which means that this class cannot be directly instantiated from the client code.
- The class is once again immutable. All attributes are final and they’re set in the constructor. Additionally, we only provide getters for them.
- The builder uses the Fluent Interface idiom to make the client code more readable (we’ll see an example of this in a moment).
- The builder constructor only receives the required attributes, and these attributes are the only ones declared final on the builder, to ensure that their values are set in its constructor.

The use of the builder pattern has all the advantages of the first two approaches I mentioned at the beginning and none of their shortcomings.
The client code is easier to write and, more importantly, to read. The only critique that I’ve heard about the pattern is the fact that you have to duplicate the class’ attributes on the builder. However, given that the builder class is usually a static member class of the class it builds, the two can evolve together fairly easily. Now, what does the client code that creates a new User object look like? Let’s see:

public User getUser() {
    return new User.UserBuilder("John", "Doe")
        .age(30)
        .phone("1234567")
        .address("Fake address 1234")
        .build();
}

Pretty neat, isn’t it? You can build a User object in one line of code and, most importantly, it is very easy to read. Moreover, you’re making sure that whenever you get an object of this class it is not going to be in an incomplete state. This pattern is really flexible. A single builder can be used to create multiple objects by varying the builder attributes between calls to the build method. The builder could even auto-complete some generated field between each invocation, such as an id or serial number. An important point is that, like a constructor, a builder can impose invariants on its parameters. The build method can check these invariants and throw an IllegalStateException if they are not valid. It is critical that they be checked after copying the parameters from the builder to the object, and that they be checked on the object fields rather than the builder fields. The reason is that, since the builder is not thread-safe, if we check the parameters before actually creating the object, their values can be changed by another thread between the time the parameters are checked and the time they are copied. This period of time is known as the “window of vulnerability”.
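For readers who want to run the fluent call end to end, here is a self-contained condensation of the User/UserBuilder pair (trimmed to three fields and with a toString added for inspection; both are departures from the article’s full listing):

```java
// Minimal runnable version of the builder from the article.
public class FluentBuilderDemo {

    static class User {
        private final String firstName; // required
        private final String lastName;  // required
        private final int age;          // optional

        private User(UserBuilder builder) {
            this.firstName = builder.firstName;
            this.lastName = builder.lastName;
            this.age = builder.age;
        }

        @Override
        public String toString() {
            return firstName + " " + lastName + " " + age;
        }

        static class UserBuilder {
            private final String firstName;
            private final String lastName;
            private int age;

            UserBuilder(String firstName, String lastName) {
                this.firstName = firstName;
                this.lastName = lastName;
            }

            // Fluent setter: returns the builder so calls can be chained.
            UserBuilder age(int age) {
                this.age = age;
                return this;
            }

            User build() {
                return new User(this);
            }
        }
    }

    public static void main(String[] args) {
        User user = new User.UserBuilder("John", "Doe").age(30).build();
        System.out.println(user); // John Doe 30
    }
}
```

The required fields go through the builder constructor, the optional one through a chained call, and the User itself is only ever observed fully constructed.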
In our User example this could look like the following:

public User build() {
    User user = new User(this);
    if (user.getAge() < 0 || user.getAge() > 120) {
        throw new IllegalStateException("Age out of range"); // thread-safe
    }
    return user;
}

The previous version is thread-safe because we first create the user and then check the invariants on the immutable object. The following code looks functionally identical, but it is not thread-safe and you should avoid doing things like this:

public User build() {
    if (age < 0 || age > 120) {
        throw new IllegalStateException("Age out of range"); // bad, not thread-safe
    }
    // This is the window of opportunity for a second thread to modify the value of age
    return new User(this);
}

A final advantage of this pattern is that a builder can be passed to a method to enable that method to create one or more objects for the client, without the method needing to know any details about how the objects are created. In order to do this you would usually have a simple interface like:

public interface Builder<T> {
    T build();
}

In the previous User example, the UserBuilder class could implement Builder<User>. Then, we could have something like:

UserCollection buildUserCollection(Builder<? extends User> userBuilder) {...}

Well, that was a pretty long first post. To sum it up, the Builder pattern is an excellent choice for classes with more than a few parameters (it is not an exact science, but I usually take 4 attributes to be a good indicator for using the pattern), especially if most of those parameters are optional. You get client code that is easier to read, write and maintain. Additionally, your classes can remain immutable, which makes your code safer. UPDATE: if you use Eclipse as your IDE, it turns out that there are quite a few plugins to avoid most of the boilerplate code that comes with the pattern.
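The Builder interface idea can be sketched end to end as follows. Note that buildUsers and the trimmed-down User are hypothetical stand-ins for the article’s UserCollection example; the point is only that the method depends on Builder, never on how construction actually happens.

```java
import java.util.ArrayList;
import java.util.List;

// A method that creates objects through the generic Builder interface,
// without knowing any construction details.
public class BuilderInterfaceDemo {

    interface Builder<T> {
        T build();
    }

    static class User {
        final String firstName, lastName;

        User(String firstName, String lastName) {
            this.firstName = firstName;
            this.lastName = lastName;
        }
    }

    static class UserBuilder implements Builder<User> {
        private final String firstName;
        private final String lastName;

        UserBuilder(String firstName, String lastName) {
            this.firstName = firstName;
            this.lastName = lastName;
        }

        public User build() {
            return new User(firstName, lastName);
        }
    }

    // Only depends on Builder<? extends User>; each call to build()
    // yields a fresh object, so one builder can populate a collection.
    static List<User> buildUsers(Builder<? extends User> builder, int count) {
        List<User> users = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            users.add(builder.build());
        }
        return users;
    }

    public static void main(String[] args) {
        List<User> users = buildUsers(new UserBuilder("John", "Doe"), 3);
        System.out.println(users.size()); // 3
    }
}
```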
The three I’ve seen are:

http://code.google.com/p/bpep/
http://code.google.com/a/eclipselabs.org/p/bob-the-builder/
http://code.google.com/p/fluent-builders-generator-eclipse-plugin/

I haven’t tried any of them personally, so I can’t really give an informed opinion on which one is better. I reckon that similar plugins should exist for other IDEs.

Reference: The builder pattern in practice from our JCG partner Jose Luis at the Development the way it should be blog.

Spring Data JPA and pagination

Let us start with the classic JPA way to support pagination. Consider a simple domain class: a Member with attributes first name and last name. To support pagination on a list of members, the JPA way is to provide a finder which takes in the offset of the first result (firstResult) and the size of the result (maxResults) to retrieve, this way:

import java.util.List;

import javax.persistence.TypedQuery;

import org.springframework.stereotype.Repository;

import mvcsample.domain.Member;

@Repository
public class JpaMemberDao extends JpaDao<Long, Member> implements MemberDao {

    public JpaMemberDao() {
        super(Member.class);
    }

    @Override
    public List<Member> findAll(int firstResult, int maxResults) {
        TypedQuery<Member> query = this.entityManager.createQuery("select m from Member m", Member.class);
        return query.setFirstResult(firstResult).setMaxResults(maxResults).getResultList();
    }

    @Override
    public Long countMembers() {
        TypedQuery<Long> query = this.entityManager.createQuery("select count(m) from Member m", Long.class);
        return query.getSingleResult();
    }
}

An additional API which returns the count of the records is needed to determine the number of pages for the list of entities, as shown above.
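The firstResult/maxResults contract is just offset-based slicing. The sketch below mimics it on an in-memory list purely for illustration (class and method names are made up for the demo; a real query pushes this work down to the database):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// In-memory equivalent of query.setFirstResult(first).setMaxResults(max).
public class OffsetPagingDemo {

    static <T> List<T> page(List<T> all, int firstResult, int maxResults) {
        int from = Math.min(firstResult, all.size());   // skip the first 'firstResult' records
        int to = Math.min(from + maxResults, all.size()); // take at most 'maxResults' records
        return all.subList(from, to);
    }

    public static void main(String[] args) {
        // 25 "members", pages of 10
        List<Integer> members = IntStream.rangeClosed(1, 25).boxed().collect(Collectors.toList());
        System.out.println(page(members, 0, 10));         // records 1..10
        System.out.println(page(members, 10, 10));        // records 11..20
        System.out.println(page(members, 20, 10).size()); // last page holds the remaining 5
    }
}
```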
Given this API, two parameters are typically required from the UI:

- the current page being displayed (say ‘page.page’)
- the size of the list per page (say ‘page.size’)

The controller is responsible for transforming these inputs into the ones required by the JPA finder, firstResult and maxResults, this way:

@RequestMapping(produces="text/html")
public String list(@RequestParam(defaultValue="1", value="page.page", required=false) Integer page,
        @RequestParam(defaultValue="10", value="page.size", required=false) Integer size,
        Model model) {
    int firstResult = (page == null) ? 0 : (page - 1) * size;
    model.addAttribute("members", this.memberDao.findAll(firstResult, size));
    float nrOfPages = (float) this.memberDao.countMembers() / size;
    int maxPages = (int) (((nrOfPages > (int) nrOfPages) || nrOfPages == 0.0) ? nrOfPages + 1 : nrOfPages);
    model.addAttribute("maxPages", maxPages);
    return "members/list";
}

Given a list as a model attribute and the count of all pages (maxPages above), the list can be transformed into a simple table in a JSP. There is a nice tag library packaged with Spring Roo which can be used to present the pagination element in a JSP page; I have included it with the reference. So this is the approach to pagination using JPA and Spring MVC.
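The offset and page-count arithmetic buried in the controller can be isolated and verified on its own. This sketch (method names are illustrative) reproduces the controller’s behavior, including its quirk of treating an empty list as a single page:

```java
// The paging arithmetic from the controller, extracted into pure functions.
public class PagingMath {

    // 1-indexed UI page number -> JPA firstResult offset
    static int firstResult(int page, int size) {
        return (page - 1) * size;
    }

    // total record count -> number of pages: ceiling division,
    // with an empty list still reported as one page (as in the controller)
    static int maxPages(long total, int size) {
        int pages = (int) ((total + size - 1) / size);
        return pages == 0 ? 1 : pages;
    }

    public static void main(String[] args) {
        System.out.println(firstResult(1, 10)); // 0  -> page 1 starts at offset 0
        System.out.println(firstResult(3, 10)); // 20 -> page 3 skips 20 records
        System.out.println(maxPages(25, 10));   // 3  -> 25 records fill 3 pages of 10
        System.out.println(maxPages(20, 10));   // 2
        System.out.println(maxPages(0, 10));    // 1  -> empty list still renders one page
    }
}
```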
Spring-Data-JPA makes this even simpler. First is the repository interface to support retrieving a paginated list; in its simplest form, the repository merely extends the Spring-Data-JPA interfaces, and at runtime Spring generates the proxies which implement the real JPA calls:

import mvcsample.domain.Member;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Repository;

public interface MemberRepository extends JpaRepository<Member, Long> {
    //
}

Given this, the controller method which accesses the repository interface is also very simple:

@RequestMapping(produces="text/html")
public String list(Pageable pageable, Model model) {
    Page<Member> members = this.memberRepository.findAll(pageable);
    model.addAttribute("members", members.getContent());
    float nrOfPages = members.getTotalPages();
    model.addAttribute("maxPages", nrOfPages);
    return "members/list";
}

The controller method accepts a parameter of type Pageable. This parameter is populated by a Spring MVC HandlerMethodArgumentResolver that looks for request parameters named ‘page.page’ and ‘page.size’ and converts them into the Pageable argument. This custom HandlerMethodArgumentResolver is registered with Spring MVC this way:

<mvc:annotation-driven>
    <mvc:argument-resolvers>
        <bean class="org.springframework.data.web.PageableArgumentResolver"></bean>
    </mvc:argument-resolvers>
</mvc:annotation-driven>

The JpaRepository API takes in the pageable argument and returns a Page, internally populating the count of pages automatically, which can be retrieved from the Page methods.
If the queries need to be explicitly specified, this can be done in a number of ways, one of which is the following:

@Query(value="select m from Member m", countQuery="select count(m) from Member m")
Page<Member> findMembers(Pageable pageable);

One catch I could see is that the pageable’s page number is 0-indexed, whereas the one passed from the UI is 1-indexed; however, the PageableArgumentResolver internally handles and converts the 1-indexed UI page parameter to the required 0-indexed value. Spring Data JPA thus makes it really simple to implement a paginated list page. I am including a sample project which ties all this together, along with the pagination tag library which makes it simple to show the paginated list.

Resources:

A sample project which implements a paginated list is available here: https://github.com/bijukunjummen/spring-mvc-test-sample.git
Spring-Data-JPA reference: http://static.springsource.org/spring-data/data-jpa/docs/current/reference/html/

Reference: Spring Data JPA and pagination from our JCG partner Biju Kunjummen at the all and sundry blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact