

Going REST: embedding Jetty with Spring and JAX-RS (Apache CXF)

For a hardcore server-side Java developer, the only way to ‘speak’ out to the world is by using APIs. Today’s post is all about JAX-RS: writing and exposing RESTful services using Java. But we won’t do that using a traditional, heavyweight approach involving an application server, WAR packaging and whatnot. Instead, we will use the awesome Apache CXF framework and, as always, rely on Spring to wire all the pieces together. And we won’t stop there either, as we need a web server to run our services on. Using the fat (or one) jar concept, we will embed the Jetty server into our application and make our final JAR redistributable (all dependencies included) and runnable. It’s a lot of work, so let’s get started. As stated above, we will use Apache CXF, Spring and Jetty as building blocks, so let’s have them described in a POM file. The one additional dependency worth mentioning is the excellent Jackson library for JSON processing.

<project xmlns=""
         xmlns:xsi=""
         xsi:schemaLocation="">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>spring-one-jar</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <org.apache.cxf.version>2.7.2</org.apache.cxf.version>
        <org.springframework.version>3.2.0.RELEASE</org.springframework.version>
        <org.eclipse.jetty.version>8.1.8.v20121106</org.eclipse.jetty.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.cxf</groupId>
            <artifactId>cxf-rt-frontend-jaxrs</artifactId>
            <version>${org.apache.cxf.version}</version>
        </dependency>
        <dependency>
            <groupId>javax.inject</groupId>
            <artifactId>javax.inject</artifactId>
            <version>1</version>
        </dependency>
        <dependency>
            <groupId>org.codehaus.jackson</groupId>
            <artifactId>jackson-jaxrs</artifactId>
            <version>1.9.11</version>
        </dependency>
        <dependency>
            <groupId>org.codehaus.jackson</groupId>
            <artifactId>jackson-mapper-asl</artifactId>
            <version>1.9.11</version>
        </dependency>
        <dependency>
            <groupId>cglib</groupId>
            <artifactId>cglib-nodep</artifactId>
            <version>2.2</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>${org.springframework.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>${org.springframework.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-web</artifactId>
            <version>${org.springframework.version}</version>
        </dependency>
        <dependency>
            <groupId>org.eclipse.jetty</groupId>
            <artifactId>jetty-server</artifactId>
            <version>${org.eclipse.jetty.version}</version>
        </dependency>
        <dependency>
            <groupId>org.eclipse.jetty</groupId>
            <artifactId>jetty-webapp</artifactId>
            <version>${org.eclipse.jetty.version}</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.0</version>
                <configuration>
                    <source>1.6</source>
                    <target>1.6</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-jar-plugin</artifactId>
                <configuration>
                    <archive>
                        <manifest>
                            <mainClass>com.example.Starter</mainClass>
                        </manifest>
                    </archive>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.dstovall</groupId>
                <artifactId>onejar-maven-plugin</artifactId>
                <version>1.4.4</version>
                <executions>
                    <execution>
                        <configuration>
                            <onejarVersion>0.97</onejarVersion>
                            <classifier>onejar</classifier>
                        </configuration>
                        <goals>
                            <goal>one-jar</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

    <pluginRepositories>
        <pluginRepository>
            <id></id>
            <url></url>
        </pluginRepository>
    </pluginRepositories>
    <repositories>
        <repository>
            <id></id>
            <url></url>
        </repository>
    </repositories>
</project>

It’s a lot of stuff but it should be pretty clear. Now we are ready to develop our first JAX-RS services, starting with a simple JAX-RS application.
package;

import;
import;

@ApplicationPath( "api" )
public class JaxRsApiApplication extends Application {
}

(The package and import names were missing from the original listing; the ones shown here are reconstructed to be consistent with the configuration class below.) As simple as it looks, our application defines /api as the entry path for the JAX-RS services. The sample service will manage people represented by the Person class.

package com.example.model;

public class Person {
    private String email;
    private String firstName;
    private String lastName;

    public Person() {
    }

    public Person( final String email ) { = email;
    }

    public String getEmail() {
        return email;
    }

    public void setEmail( final String email ) { = email;
    }

    public String getFirstName() {
        return firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setFirstName( final String firstName ) {
        this.firstName = firstName;
    }

    public void setLastName( final String lastName ) {
        this.lastName = lastName;
    }
}

And the following bare-bones business service (for simplicity, no database or any other storage is included).

package;

import java.util.ArrayList;
import java.util.Collection;

import org.springframework.stereotype.Service;

import com.example.model.Person;

@Service
public class PeopleService {
    public Collection< Person > getPeople( int page, int pageSize ) {
        Collection< Person > persons = new ArrayList< Person >( pageSize );

        for( int index = 0; index < pageSize; ++index ) {
            persons.add( new Person( String.format( "", ( pageSize * ( page - 1 ) + index + 1 ) ) ) );
        }

        return persons;
    }

    public Person addPerson( String email ) {
        return new Person( email );
    }
}

As you can see, we generate a list of persons on the fly, depending on the page requested. The standard Spring annotation @Service marks this class as a service bean. Our JAX-RS service PeopleRestService will use it for retrieving persons, as the following code demonstrates.
package;

import java.util.Collection;

import javax.inject.Inject;
import javax.ws.rs.DefaultValue;
import javax.ws.rs.FormParam;
import javax.ws.rs.GET;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;

import com.example.model.Person;
import;

@Path( "/people" )
public class PeopleRestService {
    @Inject private PeopleService peopleService;

    @Produces( { "application/json" } )
    @GET
    public Collection< Person > getPeople( @QueryParam( "page" ) @DefaultValue( "1" ) final int page ) {
        return peopleService.getPeople( page, 5 );
    }

    @Produces( { "application/json" } )
    @PUT
    public Person addPerson( @FormParam( "email" ) final String email ) {
        return peopleService.addPerson( email );
    }
}

Though simple, this class needs more explanation. First of all, we want to expose our RESTful service at the /people endpoint. Combining it with /api (where our JAX-RS application resides) gives us /api/people as the fully qualified path. Now, whenever someone issues an HTTP GET to this path, the method getPeople should be invoked. This method accepts an optional parameter page (with default value 1) and returns the list of persons as JSON. In turn, if someone issues an HTTP PUT to the same path, the method addPerson should be invoked (with the required parameter email) and the new person returned as JSON. Now let’s take a look at the Spring configuration, the core of our application.
package com.example.config;

import java.util.Arrays;

import;

import org.apache.cxf.bus.spring.SpringBus;
import org.apache.cxf.endpoint.Server;
import org.apache.cxf.jaxrs.JAXRSServerFactoryBean;
import org.codehaus.jackson.jaxrs.JacksonJsonProvider;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import;
import;
import;

@Configuration
public class AppConfig {
    @Bean( destroyMethod = "shutdown" )
    public SpringBus cxf() {
        return new SpringBus();
    }

    @Bean
    public Server jaxRsServer() {
        JAXRSServerFactoryBean factory = RuntimeDelegate.getInstance().createEndpoint( jaxRsApiApplication(), JAXRSServerFactoryBean.class );
        factory.setServiceBeans( Arrays.< Object >asList( peopleRestService() ) );
        factory.setAddress( "/" + factory.getAddress() );
        factory.setProviders( Arrays.< Object >asList( jsonProvider() ) );
        return factory.create();
    }

    @Bean
    public JaxRsApiApplication jaxRsApiApplication() {
        return new JaxRsApiApplication();
    }

    @Bean
    public PeopleRestService peopleRestService() {
        return new PeopleRestService();
    }

    @Bean
    public PeopleService peopleService() {
        return new PeopleService();
    }

    @Bean
    public JacksonJsonProvider jsonProvider() {
        return new JacksonJsonProvider();
    }
}

It doesn’t look complicated, but a lot happens under the hood. Let’s dissect it into pieces. The two key components here are the JAXRSServerFactoryBean factory, which does all the heavy lifting of configuring our JAX-RS server instance, and the SpringBus instance, which seamlessly glues Spring and Apache CXF together. All other components are regular Spring beans. What’s not in the picture yet is the embedded Jetty web server instance. Our main application class Starter does exactly that.
package com.example;

import org.apache.cxf.transport.servlet.CXFServlet;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.springframework.web.context.ContextLoaderListener;
import;

import com.example.config.AppConfig;

public class Starter {
    public static void main( final String[] args ) throws Exception {
        Server server = new Server( 8080 );

        // Register and map the dispatcher servlet
        final ServletHolder servletHolder = new ServletHolder( new CXFServlet() );
        final ServletContextHandler context = new ServletContextHandler();
        context.setContextPath( "/" );
        context.addServlet( servletHolder, "/rest/*" );
        context.addEventListener( new ContextLoaderListener() );

        context.setInitParameter( "contextClass", AnnotationConfigWebApplicationContext.class.getName() );
        context.setInitParameter( "contextConfigLocation", AppConfig.class.getName() );

        server.setHandler( context );
        server.start();
        server.join();
    }
}

Looking through this code uncovers that we run a Jetty server instance on port 8080, configure the Apache CXF servlet to handle all requests at the /rest/* path (which, together with our JAX-RS application and service, gives us /rest/api/people), add a Spring context listener parametrized with the configuration we defined above, and finally start the server. What we have at this point is a full-blown web server hosting our JAX-RS services. Let’s see it in action.
Firstly, let’s package it as a single, runnable and redistributable fat (one) jar:

mvn clean package

Let’s pick up the bits from the target folder and run them:

java -jar target/

And we should see output like this:

2013-01-19 11:43:08.636:INFO:oejs.Server:jetty-8.1.8.v20121106
2013-01-19 11:43:08.698:INFO:/:Initializing Spring root WebApplicationContext
Jan 19, 2013 11:43:08 AM org.springframework.web.context.ContextLoader initWebApplicationContext
INFO: Root WebApplicationContext: initialization started
Jan 19, 2013 11:43:08 AM prepareRefresh
INFO: Refreshing Root WebApplicationContext: startup date [Sat Jan 19 11:43:08 EST 2013]; root of context hierarchy
Jan 19, 2013 11:43:08 AM org.springframework.context.annotation.ClassPathScanningCandidateComponentProvider registerDefaultFilters
INFO: JSR-330 'javax.inject.Named' annotation found and supported for component scanning
Jan 19, 2013 11:43:08 AM loadBeanDefinitions
INFO: Successfully resolved class for [com.example.config.AppConfig]
Jan 19, 2013 11:43:09 AM org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor
INFO: JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
Jan 19, 2013 11:43:09 AM preInstantiateSingletons
INFO: Pre-instantiating singletons in defining beans [org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,appConfig,org.springframework.context.annotation.ConfigurationClassPostProcessor.importAwareProcessor,cxf,jaxRsServer,jaxRsApiApplication,peopleRestService,peopleService,jsonProvider]; root of factory hierarchy
Jan 19, 2013 11:43:10 AM org.apache.cxf.endpoint.ServerImpl initDestination
INFO: Setting the server's publish address to be /api
Jan 19, 2013 11:43:10 AM org.springframework.web.context.ContextLoader initWebApplicationContext
INFO: Root WebApplicationContext: initialization completed in 2227 ms
2013-01-19 11:43:10.957:INFO:oejsh.ContextHandler:started o.e.j.s.ServletContextHandler{/,null}
2013-01-19 11:43:11.019:INFO:oejs.AbstractConnector:Started SelectChannelConnector@

Having our server up and running, let’s issue some HTTP requests to it, so as to be sure everything works just as we expected:

> curl http://localhost:8080/rest/api/people?page=2
[
  {"email":"","firstName":null,"lastName":null},
  {"email":"","firstName":null,"lastName":null},
  {"email":"","firstName":null,"lastName":null},
  {"email":"","firstName":null,"lastName":null},
  {"email":"","firstName":null,"lastName":null}
]

> curl http://localhost:8080/rest/api/people -X PUT -d ''
{"email":"","firstName":null,"lastName":null}

Awesome! And please notice, we are completely XML-free! Source code: Before ending the post, I would like to mention one great project, Dropwizard, which uses quite similar concepts but pushes them to the level of an excellent, well-designed framework. Thanks to the Yammer guys for that.   Reference: Going REST: embedding Jetty with Spring and JAX-RS (Apache CXF) from our JCG partner Andrey Redko at the Andriy Redko {devmind} blog. ...

Mixin in Java with Aspects – for a Scala traits sample

Scala traits allow new behaviors to be mixed into a class. Consider two traits that add auditing-related and version-related fields to JPA entities:

package mvcsample.domain

import javax.persistence.Version
import scala.reflect.BeanProperty
import java.util.Date

trait Versionable {
  @Version
  @BeanProperty
  var version: Int = _
}

trait Auditable {
  @BeanProperty
  var createdAt: Date = _

  @BeanProperty
  var updatedAt: Date = _
}

Now to mix in ‘Versionable’ and ‘Auditable’, with their fields and behavior, in a Member entity:

@Entity
@Table(name = "members")
class Member(f: String, l: String) extends BaseDomain with Auditable with Versionable {

  def this() = this(null, null)

  @BeanProperty
  var first: String = f

  @BeanProperty
  var last: String = l

  @OneToMany(fetch = FetchType.EAGER, mappedBy = "member")
  @BeanProperty
  var addresses: java.util.List[Address] = _
}

trait BaseDomain {
  @BeanProperty
  @GeneratedValue(strategy = GenerationType.AUTO)
  @Column(name = "id")
  @Id
  var id: Long = 0
}

The Member class above now has the behavior of the BaseDomain trait, the Versionable trait and the Auditable trait. This kind of mixin is not possible in plain Java, as the equivalent of a trait with fields and behavior would be an abstract (or concrete) class, and Java allows deriving from only one base class. However, with AspectJ it is possible to achieve an equivalent of a mixin.
Consider the following aspects defined using the AspectJ language:

package mvcsample.aspect;

import javax.persistence.Column;
import javax.persistence.Version;

import mvcsample.annot.Versioned;

public interface Versionable {

    static aspect VersionableAspect {
        declare parents: @Versioned mvcsample.domain.* implements Versionable;

        @Version
        @Column(name = "version")
        private Integer Versionable.version;

        public Integer Versionable.getVersion() {
            return this.version;
        }

        public void Versionable.setVersion(Integer version) {
            this.version = version;
        }
    }
}

package mvcsample.aspect;

import java.util.Date;
import javax.persistence.Column;

import mvcsample.annot.Audited;

public interface Auditable {

    static aspect AuditableAspect {
        declare parents: @Audited mvcsample.domain.* implements Auditable;

        @Column(name = "created_at")
        private Date Auditable.createdAt;

        @Column(name = "updated_at")
        private Date Auditable.updatedAt;

        public Date Auditable.getCreatedAt() {
            return this.createdAt;
        }

        public void Auditable.setCreatedAt(Date createdAt) {
            this.createdAt = createdAt;
        }

        public Date Auditable.getUpdatedAt() {
            return this.updatedAt;
        }

        public void Auditable.setUpdatedAt(Date updatedAt) {
            this.updatedAt = updatedAt;
        }
    }
}

The AspectJ construct ‘declare parents: @Versioned mvcsample.domain.* implements Versionable;’ adds the ‘Versionable’ interface as a parent to any class in the package ‘mvcsample.domain’ annotated with @Versioned; similarly for ‘Auditable’. The aspect then goes about adding fields to the Versionable interface, which in turn ends up adding (mixing in) the fields to the targeted entity classes. This way the audit-related and version-related fields and methods get mixed into the entity classes.
With these two aspects defined, a target entity class would look like this:

@Entity
@Table(name = "members")
@Access(AccessType.FIELD)
@Versioned
@Audited
public class Member extends BaseDomain {

    public Member() {}

    public Member(String first, String last) {
        this.first = first;
        this.last = last;
    }

    private String first;

    @Size(min = 1)
    private String last;

    @OneToMany(fetch = FetchType.EAGER, mappedBy = "member")
    private List<Address> addresses = new ArrayList<>();

    .....
}

The fields and behavior defined in the Versionable and Auditable aspects are mixed into this entity (more generally, into any entity with the @Versioned and @Audited annotations). Probably not as clean as Scala traits, but it works nicely.   Reference: Mixin in Java with Aspects – for a Scala traits sample from our JCG partner Biju Kunjummen at the all and sundry blog. ...

MySQL connections auto-drop after a certain number of hours

MySQL is configured, by default, to drop any connection that has been idle for more than 8 hours. What is the implication of this? After you return to your deployed app following a gap of 8 hours (if the default MySQL parameters have not been changed), you will be greeted with an exception. How do you solve this issue?

1. Increase the wait_timeout parameter – not a good idea: it might unnecessarily hold on to resources, and it is not a sure-shot fix. Apart from that, being dependent on an “external” configuration for failover is not a very good idea – what if the server itself crashes, or what if this configuration is missed in one of the instances? Many such issues will pop up against this approach.

2. Use the parameter autoReconnect=true in the JDBC URL – MySQL itself does not recommend this (have a look at the link), and people have reported that it does not work reliably either (refer to the link).

3. Custom handling – have your code identify that the connection has been lost, recover it and try to reconnect; but then there would be a lot of failover mechanism in the code.

4. The best way I found was to configure a pooling mechanism such as c3p0. See this post on how to configure c3p0 in JPA for Hibernate; it’s simple, easy and reliable.

So how do you test that the issue is solved?

1. Change wait_timeout in MySQL to just 2 minutes; this can be done from the MySQL Workbench admin console.

2. Keep the value of idleTestPeriod less than wait_timeout. A quick recap of what idleTestPeriod signifies: default value = 0; if this is a number greater than 0, c3p0 will test all idle, pooled but unchecked-out connections every that number of seconds.

3. Log in after wait_timeout has passed – it should not throw an exception.

Reference: MySql connections autodrop after a certain hours from our JCG partner Chandan Pandey at the Thoughts on Software design and development blog. ...
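To make the c3p0 approach above concrete, here is a minimal sketch of a persistence.xml fragment wiring Hibernate (3.x era) to c3p0. The property keys are the documented Hibernate/c3p0 names; the values are illustrative only, and the idle-test period is deliberately set well below MySQL's wait_timeout:

```xml
<!-- Sketch only: values are illustrative, tune for your deployment. -->
<properties>
  <property name="hibernate.connection.provider_class"
            value="org.hibernate.connection.C3P0ConnectionProvider"/>
  <property name="hibernate.c3p0.min_size" value="5"/>
  <property name="hibernate.c3p0.max_size" value="20"/>
  <!-- seconds a connection may sit idle in the pool before being culled -->
  <property name="hibernate.c3p0.timeout" value="300"/>
  <!-- test idle connections every 60 seconds: keep this below wait_timeout -->
  <property name="hibernate.c3p0.idle_test_period" value="60"/>
</properties>
```

With this in place, c3p0 tests idle connections before MySQL has a chance to drop them, which is exactly the behaviour the testing steps above verify.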

The Lazy Developer’s Way to an Up-To-Date Libraries List

Last time I shared some tips on how to use libraries well. I now want to delve deeper into one of those: Know What Libraries You Use. Last week I set out to create such a list of embedded components for our product. This is a requirement for our Security Development Lifecycle (SDL). However, it’s not a fun task. As a developer, I want to write code, not update documents! So I turned to my friends Gradle and Groovy, with a little help from Jenkins and Confluence.

Gradle Dependencies

We use Gradle to build our product, and Gradle maintains the dependencies we have on third-party components. Our build defines a list of names of configurations for embedded components, copyBundleConfigurations, for copying those to the distribution directory. From there, I get to the external dependencies using Groovy’s collection methods:

def externalDependencies() {
  copyBundleConfigurations.collectMany {
    configurations[it].allDependencies
  }.findAll {
    !(it instanceof ProjectDependency) && &&
        !'com.emc')
  }
}

Adding Required Information

However, Gradle dependencies don’t contain all the required information. For instance, we need the license under which the library is distributed, so that we can ask the Legal department for permission to use it. So I added a simple XML file to hold the additional info.
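The post doesn't show the XML file itself, but its shape can be inferred from the fields the build script reads later (id, friendlyName, latestVersion, license, licenseUrl, approved, comment). A plausible sketch, with a made-up sample entry:

```xml
<!-- Hypothetical embeddedComponents.xml: element names match the Groovy
     code that reads it; the sample component is illustrative only. -->
<components>
  <component>
    <id>org.codehaus.jackson:jackson-mapper-asl</id>
    <friendlyName>Jackson JSON processor</friendlyName>
    <latestVersion>1.9.11</latestVersion>
    <license>Apache License 2.0</license>
    <licenseUrl></licenseUrl>
    <approved>yes</approved>
    <comment>Used for JSON serialization</comment>
  </component>
</components>
```

Each dependency gets one component entry, keyed by the group:name pair that Gradle reports.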
Combining that information with the dependencies that Gradle maintains is easy using Groovy’s XML support:

ext.embeddedComponentsInfo = 'embeddedComponents.xml'

def externalDependencyInfos() {
  def result = new TreeMap()
  def componentInfo = new XmlSlurper()
      .parse(embeddedComponentsInfo)
  externalDependencies().each { dependency ->
    def info = componentInfo.component.find { == "$$" &&
          it.friendlyName?.text()
    }
    if (!info.isEmpty()) {
      def component = [
        'id':,
        'friendlyName': info.friendlyName.text(),
        'version': dependency.version,
        'latestVersion': info.latestVersion.text(),
        'license': info.license.text(),
        'licenseUrl': info.licenseUrl.text(),
        'comment': info.comment.text()
      ]
      result.put component.friendlyName, component
    }
  }
  result.values()
}

I then created a Gradle task to write the information to an HTML file. Our Jenkins build executes this task, so we always have an up-to-date list. I used Confluence’s html-include macro to include the HTML file in our Wiki. Now our Wiki is always up-to-date.

Automatically Looking Up Missing Information

The next problem was to populate the XML file with the additional information. Had we had this file from the start, adding that information manually would not have been a big deal. In our case, we already had over a hundred dependencies, so automation was in order. First I identified the components that miss the required information:

def missingExternalDependencies() {
  def componentInfo = new XmlSlurper()
      .parse(embeddedComponentsInfo)
  externalDependencies().findAll { dependency ->
    componentInfo.component.find { == "$$" &&
          it.friendlyName?.text()
    }.isEmpty()
  }.collect {
    "$$"
  }.sort()
}

Next, I wanted to automatically look up the missing information and add it to the XML file (using Groovy’s MarkupBuilder).
In case the required information can’t be found, the build should fail:

project.afterEvaluate {
  def missingComponents = missingExternalDependencies()
  if (!missingComponents.isEmpty()) {
    def manualComponents = []
    def writer = new StringWriter()
    def xml = new MarkupBuilder(writer)
    xml.expandEmptyElements = true
    println 'Looking up information on new dependencies:'
    xml.components {
      externalDependencyInfos().each { existingComponent ->
        component {
          id(
          friendlyName(existingComponent.friendlyName)
          latestVersion(existingComponent.latestVersion)
          license(existingComponent.license)
          licenseUrl(existingComponent.licenseUrl)
          approved(existingComponent.approved)
          comment(existingComponent.comment)
        }
      }
      missingComponents.each { missingComponent ->
        def lookedUpComponent = collectInfo(missingComponent)
        component {
          id(missingComponent)
          friendlyName(lookedUpComponent.friendlyName)
          latestVersion(lookedUpComponent.latestVersion)
          license(lookedUpComponent.license)
          licenseUrl(lookedUpComponent.licenseUrl)
          approved('?')
          comment(lookedUpComponent.comment)
        }
        if (!lookedUpComponent.friendlyName ||
            !lookedUpComponent.latestVersion ||
            !lookedUpComponent.license) {
          manualComponents.add(missingComponent)
          println ' => Please enter information manually'
        }
      }
    }
    writer.close()
    def embeddedComponentsFile = project.file(embeddedComponentsInfo)
    embeddedComponentsFile.text = writer.toString()
    if (!manualComponents.isEmpty()) {
      throw new GradleException('Missing library information')
    }
  }
}

Anyone who adds a dependency in the future is now forced to add the required information. So all that is left to implement is the collectInfo() method. There are two primary sources that I used to look up the required information: the SpringSource Enterprise Bundle Repository holds OSGi bundle versions of common libraries, while Maven Central holds regular jars. Extracting information from those sources is a matter of downloading and parsing XML and HTML files.
This is easy enough with Groovy’s String.toURL() and URL.eachLine() methods and its support for regular expressions.

Conclusion

All of this took me a couple of days to build, but I feel the investment is well worth it, since I no longer have to worry about the list of used libraries being out of date.   Reference: The Lazy Developer’s Way to an Up-To-Date Libraries List from our JCG partner Remon Sinnema at the Secure Software Development blog. ...

Brace yourselves – Spring Framework 4.0 is coming!

A few days ago SpringSource announced that the 4.0 version of the popular Spring framework is on the road. The next iteration will be Spring Framework 4.0! As SpringSource states, the focus of the upcoming version is on “emerging enterprise themes in 2013 and beyond”:

1. Support for Java SE 8-based Spring applications

2. Configuring and implementing Spring-style applications using Groovy 2

3. Support for key Java EE 7 technologies (like JMS 2.0, JPA 2.1, Servlet 3.1, JCache, etc.)

4. Support for WebSocket-style application architectures (support for JSR-356 compliant runtimes)

5. Fine-grained eventing and messaging within the application (using listener mechanisms)

6. Removing the fat by pruning deprecated features

According to the roadmap, there will be another one-year iteration, so that the framework will reach 4.0 GA by the end of 2013. A first Spring Framework 4.0 milestone will be released as early as April. Until then, you may check some of our Spring tutorials. So brace yourselves everyone, Spring 4.0 is coming! ...

Scala: call me by my name please?

In Java, when frameworks such as log4j became popular in Java architectures, it was a common occurrence to see code such as:

if (logger.isEnabledFor(Logger.INFO)) {
    // Ok to log now."ok" + "to" + "concatenate" + "string" + "to" + "log" + "message");
}

It was considered best practice to always check whether your logging was enabled for the appropriate level before performing any String concatenation. I even remember working on a project ten years ago (a 3G radio network configuration tool for Ericsson) where String concatenation for logging actually resulted in noticeable performance degradation. Since then, JVMs have been optimised and Moore’s Law has continued, so that String concatenation isn’t as much of a worry as it used to be. In many frameworks (for example Hibernate), if you check the source code you’ll see logging code where there is no check to see if logging is enabled and the String concatenation happens regardless. However, let’s pretend concatenation is a performance issue. What we’d really like to do is remove the need for the if statements in order to stop code bloat. The nub of the issue here is that in Java, when you call a method with parameters, the values of the parameters are all calculated before the method is called. This is why the if statement is needed.

simpleComputation(expensiveComputation()); // In Java, the expensive computation is called first.
logger.log(Level.INFO, "Log this " + message); // In Java, the String concatenation happens first

Scala provides a mechanism whereby you can defer parameter evaluation. This is called call-by-name.

def log(level: Level, message: => String) =
  if (logger.level.intValue >= level.intValue) logger.log(level, message)

The => before the String type means that the String parameter is not evaluated before invocation of the log function. Instead, there is a check to confirm that the logger level is at the appropriate value, and only then is the String evaluated. This check happens within the log function, so there is no need to put the check before every invocation of it. What about that for code re-use? Anything else? Yes: when pass-by-name is used, the parameter that is passed by name isn’t just evaluated once but every time it is referenced in the function it is passed to. Let’s look at another example:

scala> def nanoTime() = {
     |   println(">>nanoTime()")
     |   System.nanoTime // returns nanoTime
     | }
nanoTime: ()Long

scala> def printTime(time: => Long) = { // => indicates a by-name parameter
     |   println(">> printTime()")
     |   println("time= " + time)
     |   println("second time=" + time)
     |   println("third time=" + time)
     | }
printTime: (time: => Long)Unit

scala> printTime(nanoTime())
>> printTime()
>>nanoTime()
time= 518263321668117
>>nanoTime()
second time=518263324003767
>>nanoTime()
third time=518263324624587

In this example, we can see that nanoTime() isn’t executed just once but every time it is referenced in the function printTime that it is passed to. This means it is executed three times in this function, and hence we get three different times. ‘Til the next time, take care of yourselves.   Reference: Scala: call me by my name please? from our JCG partner Alex Staveley at the Dublin’s Tech Blog blog. ...
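As a side note to the article above: Java has no call-by-name, but a similar deferral can be sketched by wrapping the message computation in a callback object (Java 8 lambdas would make this terser). All names in this sketch are ours, not from the article:

```java
// A hypothetical Java analogue of Scala's call-by-name: the message is
// wrapped in a callback, so the expensive String concatenation only runs
// if logging is actually enabled.
interface LazyMessage {
    String get();
}

public class LazyLogDemo {
    static int evaluations = 0;        // counts how often the message is built
    static boolean infoEnabled = false;

    // The level check lives inside log(), like the Scala by-name version:
    // message.get() is only called when the level is enabled.
    static void log(LazyMessage message) {
        if (infoEnabled) {
            System.out.println(message.get());
        }
    }

    public static void main(String[] args) {
        LazyMessage expensive = new LazyMessage() {
            public String get() {
                evaluations++;
                return "ok " + "to " + "concatenate";
            }
        };

        log(expensive);                  // logging disabled: message never built
        System.out.println(evaluations); // prints 0

        infoEnabled = true;
        log(expensive);                  // now the message is built and printed
        System.out.println(evaluations); // prints 1
    }
}
```

Like the Scala by-name parameter, the callback is also re-evaluated on every reference, so a message referenced three times inside log() would be built three times.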

Frankensystems, Half-Strangled Zombies and other Monsters

There are lots of ugly things that can happen to a system over time. This is what the arguments over technical debt are all about – how to keep code from getting ugly and fragile, hard to understand and more expensive to maintain over time because of sloppiness and short-sighted decision making. But some of the ugliest things that happen to code don’t have anything to do with technical debt. They’re the result of conscious and well-intentioned design changes.

Well-Intentioned Changes can create Ugly Code

Bad things can happen when you decide to re-architect or rewrite a system, or start some large-scale refactoring, but you don’t get the job done. Other more important work comes up before you can finish transitioning all of the code over to the new design or the new platform – or maybe that was never going to happen anyway, because you didn’t have the budget and the mandate to do the whole job in the first place. Or the person who started the work leaves, and nobody else understands their vision well enough to carry it through – or nobody that’s left cares about it enough to finish it. Or you get just far enough to solve whatever problems you or the customer really cared about, and there’s no good business case to keep going. Now you’re left with what a colleague of mine calls a “Frankensystem”: different designs and different platforms spliced together in a way that works but that is horribly difficult to understand and maintain. Why does this happen? How do you stop your system from turning into a monster like this?

Branching by Abstraction

One way that code can get messed up, in the short term at least, is through Branching by Abstraction, an idea that has become popular in shops that Dark Launch changes through Continuous Deployment or Continuous Delivery.
In Branching by Abstraction (also known as “branching in code”), instead of creating a feature branch to isolate code changes, and then merging the changes back when you’re done, everyone works in trunk. If you need to make bigger code changes, you start by writing temporary scaffolding (abstraction layers, conditional logic, configuration code like feature switches) to isolate the changes that you’ll need to make, and then you can make your changes directly in the code mainline in small, incremental steps. The scaffolding serves to protect the rest of the system from the impact of your changes until all of the work is complete. Branching by Abstraction tries to address problems with managing the misuse of feature branches (especially long-lived branches) – if you don’t let developers branch, then you don’t have to figure out how to keep all of the branches in sync and manage merge conflicts. But with Branching by Abstraction, until the work is complete and the temporary scaffolding code removed, the code will be harder to maintain and understand, and more brittle and error-prone, as James McKay points out:“…visible or not, you are still deploying code into production that you know for a fact to be buggy, untested, incomplete and quite possibly incompatible with your live data. Your if statements and configuration settings are themselves code which is subject to bugs – and furthermore can only be tested in production. They are also a lot of effort to maintain, making it all too easy to fat-finger something. Accidental exposure is a massive risk that could all too easily result in security vulnerabilities, data corruption or loss of trade secrets. 
Your features may not be as isolated from each other as you thought you were, and you may end up deploying bugs to your production environment”. If you decide to branch in code like this (we do branching in code in some cases, and feature branching in others – branching in code is good for rolling out behind-the-scenes plumbing changes, not so good for big functional changes), be careful. Review your scaffolding to ensure that your code changes are completely isolated, and test with old and new configurations (switches off and on) to check for regressions. Minimize the number of changes that the team rolls out at one time, so that there’s no chance of changes overlapping or colliding. And to keep Branching by Abstraction from becoming a maintenance nightmare, make sure that you remove temporary scaffolding as soon as you are done with it.

Half-Strangled Zombies

Branching by Abstraction can lead to ugly code, at least for the few weeks or months that it will take to roll out each change. But things can get much worse in the code if you try to do a major rewrite or re-architecture of a system incrementally, for example “strangling” the existing system with new code and a new design (another approach coined by ThoughtWorks), and slowly suffocating the old system. Strangling a system lets you introduce a new design or change over to a new and modern platform without having to finish a long and expensive rewrite first. The strangling work is done in parallel, usually by a separate team, letting the rest of the team maintain the old code – which of course means that both teams need to keep in sync as changes and fixes are made.
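The mechanics just described, where new code intercepts requests in front of the old system until the old code can be retired, usually take the shape of a routing facade. Here is a minimal sketch; the class name and the string-keyed routing are illustrative assumptions, not something from the post:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical strangler facade: operations that have already been
// migrated are routed to the new module; everything else still goes
// to the legacy system.
class StranglerFacade {
    private final Map<String, Function<String, String>> migrated = new HashMap<>();
    private final Function<String, String> legacy;

    StranglerFacade(Function<String, String> legacy) {
        this.legacy = legacy;
    }

    // Called each time another piece of functionality is strangled.
    void migrate(String operation, Function<String, String> handler) {
        migrated.put(operation, handler);
    }

    String handle(String operation, String input) {
        // Route to the new implementation where it exists, otherwise
        // fall back to the old system.
        return migrated.getOrDefault(operation, legacy).apply(input);
    }
}
```

The strangling is finished only when every operation has been migrated and the legacy fallback (and the facade itself) can be deleted – which is exactly the step that, as the article goes on to warn, often never happens.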
But if you don’t finish the job, you’ll be left with a kind of zombie, a scary half-dead and half-alive thing with ugly seams showing, as Nat Pryce warns in this Stack Overflow post: ‘The biggest problem to overcome is lack of will to actually finish the strangling (usually political will from non-technical stakeholders, manifested as lack of budget). If you don’t completely kill off the old system, you’ll end up in a worse mess because your system now has two ways of doing everything with an awkward interface between the two. Later, another wave of developers will probably decide to strangle what’s there, writing yet another strangler application, and again a lack of will might leave the system in an even worse state, with three ways of doing things…. I’ve seen critical systems that have suffered both of these fates, and ended up with about four or five ‘strategic architectural directions’ and ‘future state architectures’. One large multi-site project ended up with eight different new persistence mechanisms in its new architecture. Another ended up with two different database schemas, one for the old way of doing things and another for the new way, neither schema was ever removed from the system and there were also multiple class hierarchies that mapped to one or even both of these schemas.’

Strangling, and other incremental strategies for re-architecting a system, will let you start showing benefits to the customer early, before all of the work of writing the new system is done. This is both an advantage and a problem. Because once the customer starts to get what they really care about (some nice new screens or mobile access channels or better performance or faster turnaround on rules changes or…) you may not be able to make the business case to finish up the work that’s left. Everyone understands (or should) that this means you’re stuck with some inconsistencies – on the inside certainly, and maybe on the outside too.
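The “two ways of doing everything” problem shows up in miniature wherever temporary scaffolding is never removed: every switch that survives its migration leaves both code paths alive. A hypothetical sketch of such leftover scaffolding (all names are invented for illustration):

```java
// Illustrative leftover scaffolding: a switch that selects between the
// old and new implementation behind one abstraction. If it is never
// deleted, both paths stay alive indefinitely.
interface PricingStrategy {
    long priceCents(long baseCents);
}

class LegacyPricing implements PricingStrategy {
    // old behaviour: 10% markup
    public long priceCents(long baseCents) { return baseCents + baseCents / 10; }
}

class RevisedPricing implements PricingStrategy {
    // new behaviour: 5% markup
    public long priceCents(long baseCents) { return baseCents + baseCents / 20; }
}

class PricingSwitch {
    // Temporary by intent; a permanent maintenance burden if nobody
    // ever comes back to remove it and LegacyPricing.
    static PricingStrategy select(boolean useRevised) {
        return useRevised ? new RevisedPricing() : new LegacyPricing();
    }
}
```

Multiply this pattern across a codebase and across several abandoned “strategic directions” and you get exactly the Frankensystem described above.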
But whatever is there does the job, and keeping this mess running may cost a lot less than finishing the rewrite, at least in the short term.

Frankensystems and Zombies are Everywhere

Monster-making happens more often than it should to big systems, especially big, mission-critical systems that a lot of different people have worked on over a long time. As Pryce warns, it can even happen multiple times over the life of a big system, so that you end up with several half-realized architectures grafted together, creating all kinds of nasty maintenance and understanding problems. When making changes or adding features, developers will have to decide whether to do it the old way or the new way (or the other new way) – or sometimes they will need to do both, which means working across different architectures, using different tools and different languages, and often having to worry about keeping different data models in sync. This complexity means it’s easy to make mistakes or miss or misunderstand something, and testing can be even uglier than the coding. You need to recognize these risks when you start down the path of incrementally changing a system’s direction and design – even if you believe you have the commitment and time to finish the job properly. Because there’s a good chance that you’ll end up creating a monster that you will have to live with for years.   Reference: Frankensystems, Half-Strangled Zombies and other Monsters from our JCG partner Jim Bird at the Building Real Software blog. ...

Testing REST with multiple MIME types

1. Overview

This article will focus on testing a RESTful Service with multiple Media Types/representations. This is the tenth of a series of articles about setting up a secure RESTful Web Service using Spring and Spring Security with Java based configuration. The REST with Spring series:

Part 1 – Bootstrapping a web application with Spring 3.1 and Java based Configuration
Part 2 – Building a RESTful Web Service with Spring 3.1 and Java based Configuration
Part 3 – Securing a RESTful Web Service with Spring Security 3.1
Part 4 – RESTful Web Service Discoverability
Part 5 – REST Service Discoverability with Spring
Part 6 – Basic and Digest authentication for a RESTful Service with Spring Security 3.1
Part 7 – REST Pagination in Spring
Part 8 – Authentication against a RESTful Service with Spring Security
Part 9 – ETags for REST with Spring

2. Goals

Any RESTful service needs to expose its Resources as representations using some sort of Media Type, and in many cases more than a single one. The client will set the Accept header to choose the type of representation it asks for from the service. Since the Resource can have multiple representations, the server will have to implement a mechanism responsible for choosing the right representation – also known as Content Negotiation. Thus, if the client asks for application/xml, then it should get an XML representation of the Resource, and if it asks for application/json, then it should get JSON. This article will explain how to write integration tests capable of switching between the multiple types of representations that the RESTful Service supports. The goal is to be able to run the exact same test consuming the exact same URIs of the service, just asking for a different Media Type.

3. Testing Infrastructure

We’ll begin by defining a simple interface for a marshaller – this will be the main abstraction that will allow the test to switch between different Media Types:

public interface IMarshaller { ...
String getMime(); }

Then we need a way to initialize the right marshaller based on some form of external configuration. For this mechanism, we will use a Spring FactoryBean to initialize the marshaller and a simple property to determine which marshaller to use:

@Component
@Profile("test")
public class TestMarshallerFactory implements FactoryBean<IMarshaller> {
    @Autowired
    private Environment env;

    public IMarshaller getObject() {
        String testMime = env.getProperty("test.mime");
        if (testMime != null) {
            switch (testMime) {
                case "json": return new JacksonMarshaller();
                case "xml": return new XStreamMarshaller();
                default: throw new IllegalStateException();
            }
        }
        return new JacksonMarshaller();
    }

    public Class<IMarshaller> getObjectType() {
        return IMarshaller.class;
    }

    public boolean isSingleton() {
        return true;
    }
}

Let’s look over this:

– first, the new Environment abstraction introduced in Spring 3.1 is used here – for more on this check out the Properties with Spring article
– the test.mime property is retrieved from the environment and used to determine which marshaller to create – some Java 7 switch on String syntax at work here
– next, the default marshaller, in case the property is not defined at all, is going to be the Jackson marshaller for JSON support
– finally, this FactoryBean is only active in a test scenario, as the new @Profile support, also introduced in Spring 3.1, is used

That’s it – the mechanism is able to switch between marshallers based on whatever the value of the test.mime property is.

4. The JSON and XML Marshallers

Moving on, we’ll need the actual marshaller implementation – one for each supported Media Type. For JSON we’ll use Jackson as the underlying library:

public class JacksonMarshaller implements IMarshaller {
    private ObjectMapper objectMapper;

    public JacksonMarshaller() {
        super();
        objectMapper = new ObjectMapper();
    }
    ...
    @Override
    public String getMime() {
        return MediaType.APPLICATION_JSON.toString();
    }
}

For the XML support, the marshaller uses XStream:

public class XStreamMarshaller implements IMarshaller {
    private XStream xstream;

    public XStreamMarshaller() {
        super();
        xstream = new XStream();
    }
    ...
    public String getMime() {
        return MediaType.APPLICATION_XML.toString();
    }
}

Note that these marshallers are not defined as Spring components themselves. The reason is that they will be bootstrapped into the Spring context by the TestMarshallerFactory, so there is no need to make them components directly.

5. Consuming the Service with both JSON and XML

At this point we should be able to run a full integration test against the deployed RESTful service. Using the marshaller is straightforward – an IMarshaller is simply injected directly into the test:

@ActiveProfiles({ "test" })
public abstract class SomeRestLiveTest {
    @Autowired
    private IMarshaller marshaller;

    // tests
    ...
}

The exact marshaller that will be injected by Spring will of course be decided by the value of the test.mime property; this could be picked up from a properties file or simply set on the test environment manually. If however a value is not provided for this property, the TestMarshallerFactory will simply fall back on the default marshaller – the JSON marshaller.

6. Maven and Jenkins

If Maven is set up to run integration tests against an already deployed RESTful Service, then it can be run like this:

mvn test -Dtest.mime=xml

Or, if the build uses the integration-test phase of the Maven lifecycle:

mvn integration-test -Dtest.mime=xml

For more details about how to use these phases and how to set up a Maven build so that it will bind the deployment of the application to the pre-integration-test goal, run the integration tests in the integration-test goal and then shut down the deployed service in post-integration-test, see the Integration Testing with Maven article.
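The selection driven by -Dtest.mime can be reduced to plain Java with no Spring involved; this sketch mirrors the logic of TestMarshallerFactory above (the class name MimeSelector is invented here for illustration):

```java
// Plain-Java sketch of the selection that the test.mime system property
// drives; mirrors TestMarshallerFactory's switch, including the JSON default.
class MimeSelector {
    static String selectMime(String testMime) {
        if (testMime == null) {
            return "application/json"; // default: the Jackson marshaller
        }
        switch (testMime) {
            case "json": return "application/json";
            case "xml":  return "application/xml";
            default:     throw new IllegalStateException("unsupported: " + testMime);
        }
    }

    // In a real run the value comes from the command line, e.g. -Dtest.mime=xml
    static String selectFromSystemProperty() {
        return selectMime(System.getProperty("test.mime"));
    }
}
```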
With Jenkins, the job must be configured with “This build is parameterized” and the String parameter test.mime=xml added. A common Jenkins configuration would be having two jobs running the same set of integration tests against the deployed service – one with XML and the other with JSON representations.

7. Conclusion

This article showed how to properly test a REST API. Most APIs do publish their resources under multiple representations, so testing all of these representations is vital, and using the exact same tests for that is just cool. For a full implementation of this mechanism in actual integration tests verifying both the XML and JSON representations of all Resources, check out the GitHub project.   Reference: Testing REST with multiple MIME types from our JCG partner Eugen Paraschiv at the baeldung blog. ...

What If We … Like We Hire Programmers – What Questions Are Appropriate?

Programmers often experience a high degree of frustration during the interview process, and one primary source of annoyance is how the programmer perceives the line of questioning or exercises. In a buyer’s market where supply exceeds demand, hiring managers will often be a bit more selective in evaluating candidates, and talent evaluators may request or require more specific skill-sets than they would if the talent pool were deeper. These tactics are short-sighted but deemed necessary in a crunch. I recently stumbled on two articles with an identical theme. “If Carpenters Were Hired Like Programmers” was written in 2004, and “What If Cars Were Rented Like We Hire Programmers” was posted very recently. The tl;dr of these posts is essentially that programmers being interviewed are asked incredibly esoteric questions or are grilled about experience with irrelevant topics (wood color for carpenters, car wiring for car renters). The comments sections on Reddit and Hacker News are a mix of agreement, criticism, and various anecdotes about interviews that reflected the articles’ theme. No analogy is perfect. There are surely companies that are ‘doing it wrong’ and asking questions that will reveal little about a candidate’s potential as an employee, but I’m getting the sense that many candidates are starting to claim that even appropriate lines of questioning and requests are now somehow inappropriate. More importantly, it appears that candidates may not understand or appreciate the true value of certain questions or tasks. 
To continue the carpenter analogy, let’s look at the types of questions or tasks that would be both useful and appropriate in evaluating either a carpenter or a programmer (or anyone that builds things) for potential employment.

Overall experience and training – No one should argue these.

Experience specifically relevant to the project at hand – This is where we may first see some candidates crying foul, particularly if the relevancy of the experience is judged predominantly by the level of experience with very specific tools. Learning a new programming language is probably not equivalent to learning how to use a different brand of saw, but engineers can sometimes be overconfident about the amount of time required to become productive with a new technology. The relevancy of experience factors into a hiring decision most when project delivery is valued over long-term employee development.

Answer some questions about your craft – When hiring managers ask questions, candidates should keep in mind that there can be a few reasons why a question is asked. Obviously, one objective may be to truly find out if you know the answer. However, sometimes the interviewer asks a difficult question simply to see how you may react to pressure. Another possibility is that the interviewer wants to see whether you are the type of person who will confidently give a wrong answer to try and fool the interviewer or one who is more likely to admit what you do not know, and to evaluate your resourcefulness by how you would research a problem with an unknown answer. A genuinely, laugh-out-loud stupid question may be asked to see how well you may deal with frustration with a co-worker or an unruly customer. Lastly, the interviewer may simply want to see your method of approaching a tough question and breaking it down.
Candidates that are quick to complain about being asked seemingly minute or irrelevant details often overlook the true purpose behind these exercises.

Design something – I’m always amazed when candidates call me in a state of shock after being asked to do a whiteboard exercise in an interview, as if these types of requests were either unfair, insulting, or a ‘gotcha’ technique. Anyone who builds things should be somewhat comfortable (or at least willing) to either visually depict a past design or attempt to design a quick solution to a problem on the spot.

Show us how you work alone – Assigning a short task for someone to complete either in an interview setting or at home before/after an interview is absolutely an appropriate request, which candidates can choose to accept or decline. It is an opportunity both to demonstrate skills and to further express your interest in the position by being willing to invest time. Providing a bit more than the minimum requested solution is a valuable method to differentiate yourself from other candidates.

Let’s see how you work with a team – As candidates are often hired to build things collaboratively, a short pairing exercise or a group problem-solving activity could be the best way to efficiently evaluate how well one plays with others.

Show us some samples – Professionals who build things have the unique interviewing advantage of actually showing something physical that they have built. A carpenter bringing a piece of furniture to an interview should be no different than an engineer offering a past code sample. Companies are increasingly using past code as an evaluation tool.

References – At some point in the process of evaluating talent, asking for references is a given.
Being unwilling or unable to provide references can make someone unemployable, even if every other requirement is met.

If you go back and reread the articles about the carpenter and rental car interviews, you may have a new perspective on the reasons some questions may be asked. Think back on some interviews that you have had, and consider whether it’s possible that the interviewer had ulterior motives. It’s not always about simply knowing an answer.   Reference: What If We … Like We Hire Programmers – What Questions Are Appropriate? from our JCG partner Dave Fecak at the Job Tips For Geeks blog. ...

Spring Property Placeholder Configurer – A few not so obvious options

Spring’s PropertySourcesPlaceholderConfigurer is used for externalizing properties from the Spring bean definitions defined in XML or using Java Config. There are a few options that the PlaceholderConfigurer supports that are not obvious from the documentation but are interesting and could be useful. To start with, an example from Spring’s documentation: consider a properties file with information to configure a datasource:

jdbc.driverClassName=org.hsqldb.jdbcDriver
jdbc.url=jdbc:hsqldb:hsql://production:9002
jdbc.username=sa
jdbc.password=root

The PropertySourcesPlaceholderConfigurer is configured using a custom namespace:

<context:property-placeholder location=''/>

A datasource bean making use of these properties can be defined using an XML based bean definition this way:

<bean id='dataSource' destroy-method='close' class='org.apache.commons.dbcp.BasicDataSource'>
    <property name='driverClassName' value='${jdbc.driverClassName}'/>
    <property name='url' value='${jdbc.url}'/>
    <property name='username' value='${jdbc.username}'/>
    <property name='password' value='${jdbc.password}'/>
</bean>

and using Java based configuration this way:

@Value("${jdbc.driverClassName}")
private String driverClassName;
@Value("${jdbc.url}")
private String dbUrl;
@Value("${jdbc.username}")
private String dbUserName;
@Value("${jdbc.password}")
private String dbPassword;

@Bean
public BasicDataSource dataSource() {
    BasicDataSource dataSource = new BasicDataSource();
    dataSource.setDriverClassName(driverClassName);
    dataSource.setUrl(dbUrl);
    dataSource.setUsername(dbUserName);
    dataSource.setPassword(dbPassword);
    return dataSource;
}

The not so obvious options are:

First is the support for default values. Say, for example, that ‘sa’ is to be provided as the default for the jdbc user name; the way to do it is this (using the ${propertyName:default} syntax):

<property name='username' value='${jdbc.username:sa}'/>

or with Java Config:

.. ..
@Value("${jdbc.username:sa}")
private String dbUserName;

@Bean
public BasicDataSource dataSource() {
    ..
}

Second is the support for nested property resolution. For example, consider the following properties file: phase=qa, and the ‘phase’ property being used as part of another property in an XML bean definition in this nested way:

<property name='username' value='${jdbc.username.${phase}}'/>

These options could be very useful for placeholder based configuration.   Reference: Spring Property Placeholder Configurer – A few not so obvious options from our JCG partner Biju Kunjummen at the all and sundry blog. ...
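To make the ${name:default} semantics concrete, here is a small plain-Java resolver reproducing the observable behaviour; this is an illustrative sketch, not Spring’s actual implementation, and the class name PlaceholderResolver is invented:

```java
// Illustrative resolver for the ${name:default} syntax; not Spring's code.
class PlaceholderResolver {
    static String resolve(String placeholder, java.util.Map<String, String> props) {
        if (!placeholder.startsWith("${") || !placeholder.endsWith("}")) {
            return placeholder; // not a placeholder, return as-is
        }
        String body = placeholder.substring(2, placeholder.length() - 1);
        int colon = body.indexOf(':');
        String name = colon >= 0 ? body.substring(0, colon) : body;
        String fallback = colon >= 0 ? body.substring(colon + 1) : null;

        String value = props.get(name);
        if (value != null) {
            return value;    // a configured value always wins over the default
        }
        if (fallback != null) {
            return fallback; // e.g. ${jdbc.username:sa} resolves to "sa"
        }
        throw new IllegalArgumentException("Could not resolve placeholder " + name);
    }
}
```

With an empty property source, "${jdbc.username:sa}" resolves to the default "sa"; once jdbc.username is defined, the configured value is returned instead – the same behaviour the configuration above relies on.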
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.