What's New Here?


Automating the Deployment and Upload of Snapshot Java Artifacts Using Jenkins on Windows

This post will show how to automate the deployment process of a Java web application (a Student Enrollment application built with a MySQL database, Hibernate ORM and a REST-based Jersey 2/Spring stack) using Jenkins Continuous Integration – to build the project, run the unit tests, upload the built artifacts to a Sonatype snapshot repository, run the Cobertura code coverage reports and deploy the application to Amazon EC2. The application itself is described in the earlier post Building Java Web Application Using Jersey REST With Spring.

1. Install Jenkins as a Windows Service

Navigate to the jenkins-ci.org website in a browser and download the Windows native package from the right-hand pane of the Download Jenkins tab. Once the download is complete, uncompress the zip file and run the jenkins-1.xxx.msi file. Proceed through the configuration steps to install Jenkins as a Windows service.

2. Modify the Default Jenkins Port

By default Jenkins runs on port 8080. To avoid conflicts with other applications, the default port can be modified by editing jenkins.xml, found under C:\Program Files (x86)\Jenkins. As shown below, change httpPort to 8082.

    <service>
      <id>jenkins</id>
      <name>Jenkins</name>
      <description>This service runs Jenkins continuous integration system.</description>
      <env name="JENKINS_HOME" value="%BASE%"/>
      <!-- if you'd like to run Jenkins with a specific version of Java, specify a full path to java.exe.
           The following value assumes that you have java in your PATH. -->
      <executable>%BASE%\jre\bin\java</executable>
      <arguments>-Xrs -Xmx256m -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle -jar "%BASE%\jenkins.war" --httpPort=8082</arguments>
      <!-- interactive flag causes the empty black Java window to be displayed. I'm still debugging this. <interactive /> -->
      <logmode>rotate</logmode>
      <onfailure action="restart" />
    </service>

Once the modification is saved in jenkins.xml, restart the Jenkins service: open Windows Task Manager -> Services, right-click the Jenkins service and choose Stop Service. Once its status changes to stopped, right-click the service again and choose Start Service. Navigate to localhost:8082 to verify that the restart was successful – the Jenkins dashboard will be displayed. Note that it takes a while before the Jenkins service becomes available.

3. Install Plugins

On the Jenkins dashboard, navigate to Manage Jenkins -> Manage Plugins. Install the following plugins and restart Jenkins for the changes to take effect:

- GitHub Plugin (for integrating GitHub with Jenkins)
- Jenkins Cobertura Plugin (for Cobertura support)
- Deploy to Container Plugin (for deploying the WAR to the Tomcat container on the EC2 instance)
- Jenkins Artifactory Plugin (for deploying the built Maven artifacts to the snapshot repository)

4. Configure System

On the Jenkins dashboard, navigate to Manage Jenkins -> Configure System. Go to the JDK section and click "Add JDK" to add a JDK installation. Specify a JDK name, choose the JDK version to install and follow the on-screen instructions to save the Oracle login credentials.
Save the changes. Next, proceed to the Git section and click "Add Git" to add the Git installation: specify a Git name and the path to the Git executable, and save the changes. Then proceed to the Maven section and click "Add Maven" to add the Maven installation: specify a Maven name, choose the Maven version to install and save the changes. In the Git plugin section, enter the GitHub username and email address to use as credentials, and save the changes.

Proceed to the Artifactory section and click "Add" to add the information about the Artifactory servers. Specify the URL of the snapshot repository and provide the deployer credentials created on the Artifactory server website. Click "Test Connection" to verify that the connection parameters are good, then save the changes. Next, in the Email Notification section, enter the SMTP server details; click the Advanced button to add the further details required and save the changes. Click "Test configuration by sending test e-mail", enter a test recipient and click "Test configuration" to check that the email is sent successfully.

5. Create a New Jenkins Job

From the Jenkins dashboard, click "New Job" to create a new job. Enter a name for the job, choose "Build a maven2/3 project" and click OK. On the job configuration screen, go to the Source Code Management section and specify the Git repository URL for the project, then save the changes. In the Build Triggers section, select the desired options and save the changes. In the Build section, enter the Maven goals for building a snapshot and save the changes. In the Build Settings section, select the Email Notification option, enter the email recipients and save the changes.

Under Post-build Actions, click the "Add post-build action" button and select "Deploy war/ear to a container". On the Amazon EC2 instance, a Tomcat manager user (manager as username) has to be configured with the roles manager-gui and manager-script to allow remote deployment of the WAR/EAR to the Tomcat container. The configuration steps can be found at https://help.ubuntu.com/13.04/serverguide/tomcat.html under the section "Tomcat administration webapps".
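The Tomcat side of that setup is not shown in the post; as a rough sketch only (the password is a placeholder and the file location depends on your installation – typically /etc/tomcat7/tomcat-users.xml for the packaged Tomcat 7 on Ubuntu), granting those roles usually amounts to an entry like the following in tomcat-users.xml, followed by a Tomcat restart:

    <tomcat-users>
      <!-- user for Jenkins' "Deploy war/ear to a container" step; choose your own password -->
      <role rolename="manager-gui"/>
      <role rolename="manager-script"/>
      <user username="manager" password="secret" roles="manager-gui,manager-script"/>
    </tomcat-users>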
Once the Tomcat manager webapp configuration is complete on the Amazon EC2 instance, enter the details necessary for the deployment and save the changes. Similarly, under Post-build Actions, click the "Add post-build action" button and select "Publish Cobertura Coverage Report". Enter the Cobertura XML report pattern and save the changes.

6. Configure settings.xml

In order to upload the built Maven artifacts to the Artifactory server, configure the Jenkins settings.xml found in the C:\Program Files (x86)\Jenkins\tools\hudson.tasks.Maven_MavenInstallation\Maven_3.1\conf folder with the same parameters as the default settings.xml of the Maven installation on the system (usually found under C:\Program Files\Apache Software Foundation\apache-maven-3.1.0\conf on a Windows machine). Typically, the servers section of the Jenkins settings.xml needs to be configured to match the details of the Artifactory server:

    <servers>
      <server>
        <id>sonatype-nexus-snapshots</id>
        <username>username</username>
        <password>password</password>
      </server>
      <server>
        <id>sonatype-nexus-staging</id>
        <username>username</username>
        <password>password</password>
      </server>
    </servers>

7. Update pom.xml

The pom.xml of the project needs to be configured with the following plugins under the build section for the deployment to the snapshot repository and for running the Cobertura coverage report:

- maven-compiler-plugin
- maven-deploy-plugin
- cobertura-maven-plugin

Also add the parent, scm and developers sections to comply with the requirements of the Artifactory server management, as shown below.

    <parent>
      <groupId>org.sonatype.oss</groupId>
      <artifactId>oss-parent</artifactId>
      <version>7</version>
    </parent>

    <scm>
      <connection>scm:git:git@github.com:elizabetht/StudentEnrollmentWithREST.git</connection>
      <developerConnection>scm:git:git@github.com:elizabetht/StudentEnrollmentWithREST.git</developerConnection>
      <url>git@github.com:elizabetht/StudentEnrollmentWithREST.git</url>
      <tag>StudentEnrollmentWithREST-1.3</tag>
    </scm>

    <developers>
      <developer>
        <id>elizabetht</id>
        <name>Elizabeth Thomas</name>
        <email>email2eliza@gmail.com</email>
      </developer>
    </developers>

    <build>
      <finalName>StudentEnrollmentWithREST</finalName>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-compiler-plugin</artifactId>
          <version>2.5.1</version>
          <inherited>true</inherited>
          <configuration>
            <source>1.6</source>
            <target>1.6</target>
          </configuration>
        </plugin>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-deploy-plugin</artifactId>
          <version>2.8.1</version>
          <executions>
            <execution>
              <id>default-deploy</id>
              <phase>deploy</phase>
              <goals>
                <goal>deploy</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>cobertura-maven-plugin</artifactId>
          <version>2.6</version>
          <configuration>
            <formats>
              <format>html</format>
              <format>xml</format>
            </formats>
          </configuration>
          <executions>
            <execution>
              <phase>package</phase>
              <goals>
                <goal>cobertura</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>

8. Build Now

Once the above configuration steps are complete, click "Build Now" under Jenkins -> Upload REST Snapshot Artifacts (or the respective job name) to build the project based on the configuration. The console output contains detailed logs of the steps initiated by the configuration and the outcome of the entire build. The timestamp of the WAR deployed to the Amazon EC2 instance can be checked to verify that the deployment succeeded, and the snapshot repository can be checked to verify that the upload of the artifacts succeeded.

Thus the entire process – building the project along with its unit tests whenever an SCM change (or another configured condition) triggers it, running the code coverage reports, uploading the built artifacts to the snapshot Artifactory repository, deploying the WAR to the remote server container and emailing the recipients – can be automated with the click of a button through Jenkins.

Reference: Automating the Deployment and Upload of Snapshot Java Artifacts Using Jenkins on Windows from our JCG partner Elizabeth Thomas at the My Experiments with Technology blog.

Apache Spark is now a top-level project

The Apache Software Foundation (ASF) has happily announced that Apache Spark has graduated from the Apache Incubator to become a Top-Level Project (TLP), signifying the project's stability. Apache Spark is an open source cluster computing framework for fast and flexible large-scale data analysis. Spark has been the talk of the Big Data town for a while, and 2014 was predicted to be the year of Spark. According to the Spark web site home page, the engine runs programs up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk. This is why Cloudera has integrated it into its Hadoop distribution, CDH (Cloudera Distribution Including Apache Hadoop).

Spark's big success is due not only to the fact that it is a fast engine, but also to its rapid evolution since last June, when it entered the Apache Incubator, with contributions from more than 120 developers across 25 organizations. Spark's creators from the University of California, Berkeley, have created a company called Databricks to commercialize the technology. According to Ion Stoica, CEO at Databricks and Professor at UC Berkeley, the Spark project has made it much easier for organizations to get insights from big data. Now an open source community has formed around it, which can help accelerate the development and adoption of Apache Spark.

One of Spark's features, according to the "Apache Spark becomes top-level project" article, is that it can run on Hadoop 2.0 YARN. Also, Shark, its companion project, implements a SQL-on-Hadoop engine that is syntax-compatible with Apache Hive, but claims the same 10x/100x performance increases over it that Spark claims over raw MapReduce. Another feature of Spark is that it allows developers to write applications in Java, Python, or Scala. Integrated with Apache Hadoop, Spark is well suited for machine learning, interactive queries, and stream processing, and can read from HDFS, HBase, Cassandra, as well as any Hadoop data source.

Yahoo has congratulated Spark on becoming an Apache top-level project, via Andrew Feng, Distinguished Architect at Yahoo. Feng explained how Yahoo has helped evolve Hadoop and related big-data technologies, including Spark. Yahoo has made significant contributions to the development of Spark, since Apache Hadoop is the foundation of Yahoo's big-data platform.

Apache Spark software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the project's day-to-day operations, including community development and product releases. Documentation and ways to become involved with Apache Spark are offered here.

As far as MapReduce is concerned, it seems that Spark is set to take the reins as the primary processing framework for new Hadoop workloads while MapReduce fades. Spark seems well suited for next-generation big data applications that might require lower-latency queries, real-time processing or iterative computations on the same data. Spark is technically a standalone project, but it was always designed to work with the Hadoop Distributed File System. However, there is still a lot of tooling for MapReduce that Spark doesn't have yet (e.g., Pig and Cascading), and MapReduce is still quite good for certain batch jobs. Cloudera co-founder and Chief Strategy Officer Mike Olson explained that there are a lot of legacy MapReduce workloads that aren't going anywhere anytime soon even as Spark takes off.
In fact, there is a Structure Data conference on March 19-20 in New York, where Ion Stoica will be speaking as part of the Structure Data Awards presentation, and the CEOs of Cloudera, Hortonworks, and Pivotal will talk about the future of big data platforms and how they plan to capitalize on them. ...
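To give a concrete flavor of the Java API mentioned above – this is an illustrative sketch only, with a made-up HDFS path and application name, written in the anonymous-class style of the pre-Java-8 Spark Java API – counting the error lines of a log file stored on HDFS looks roughly like this:

    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.api.java.function.Function;

    public class ErrorLineCount {
        public static void main(String[] args) {
            // "local[2]" runs Spark locally with two threads; on a cluster this would be the master URL
            JavaSparkContext sc = new JavaSparkContext("local[2]", "ErrorLineCount");
            JavaRDD<String> lines = sc.textFile("hdfs://namenode:9000/logs/app.log");
            long errors = lines.filter(new Function<String, Boolean>() {
                public Boolean call(String line) {
                    return line.contains("ERROR");
                }
            }).count();
            System.out.println("Error lines: " + errors);
            sc.stop();
        }
    }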

Custom Spring namespaces made easier with JAXB

First of all, let me say this out loud: Spring is no longer XML-heavy. As a matter of fact you can write Spring applications these days with minimal or no XML at all, using plenty of annotations, Java configuration and Spring Boot. So seriously, stop ranting about Spring and XML – it's a thing of the past. That being said, you might still be using XML for a couple of reasons: you are stuck with a legacy code base, you chose XML for other reasons, or you use Spring as a foundation for some framework/platform. The last case is actually quite common; for example Mule ESB and ActiveMQ use Spring underneath to wire up their dependencies. Moreover, Spring XML is their way of configuring the framework.

However, configuring a message broker or an enterprise service bus using plain Spring <bean/>s would be cumbersome and verbose. Luckily Spring supports writing custom namespaces that can be embedded within standard Spring configuration files. These custom snippets of XML are preprocessed at runtime and can register many bean definitions at once in a concise and pleasant-looking (as far as XML allows) format. In a way, custom namespaces are like macros that expand at runtime into multiple bean definitions.

To give you a feeling of what we are aiming at, imagine a standard "enterprise" application that has several business entities. For each entity we define three, almost identical, beans: repository, service and controller. They are always wired in a similar way and only differ in small details. To begin with, our Spring XML is huge and bloated (the original post shows only a thumbnail screenshot of it, to spare your eyes). This is a "layered" architecture, thus we will call our custom namespace onion – because onions have layers – and also because systems designed this way make me cry. By the end of this article you will learn how to collapse this pile of XML into:

    <?xml version="1.0" encoding="UTF-8"?>
    <b:beans xmlns:b="http://www.springframework.org/schema/beans"
             xmlns="http://nurkiewicz.blogspot.com/spring/onion/spring-onion.xsd"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                                 http://nurkiewicz.blogspot.com/spring/onion/spring-onion.xsd http://nurkiewicz.blogspot.com/spring/onion/spring-onion.xsd">

        <b:bean id="convertersFactory" class="com.blogspot.nurkiewicz.onion.ConvertersFactory"/>

        <converter format="html"/>
        <converter format="json"/>
        <converter format="error" lenient="false"/>

        <entity class="Foo" converters="json, error">
            <page response="404" dest="not-found"/>
            <page response="503" dest="error"/>
        </entity>

        <entity class="Bar" converters="json, html, error">
            <page response="400" dest="bad-request"/>
            <page response="500" dest="internal"/>
        </entity>

        <entity class="Buzz" converters="json, html">
            <page response="502" dest="bad-gateway"/>
        </entity>

    </b:beans>

Look closely: it is still a Spring XML file that is perfectly understandable by the framework – and you will learn how to achieve this. You can run arbitrary code for each top-level custom XML tag, e.g. a single occurrence of <entity/> registers repository, service and controller bean definitions all at once. The first thing to implement is a custom XML schema for our namespace.
This is not that hard and will allow IntelliJ IDEA to show code completion in the XML:

    <?xml version="1.0" encoding="UTF-8"?>
    <schema xmlns:tns="http://nurkiewicz.blogspot.com/spring/onion/spring-onion.xsd"
            xmlns="http://www.w3.org/2001/XMLSchema"
            targetNamespace="http://nurkiewicz.blogspot.com/spring/onion/spring-onion.xsd"
            elementFormDefault="qualified"
            attributeFormDefault="unqualified">

        <element name="entity">
            <complexType>
                <sequence>
                    <element name="page" type="tns:Page" minOccurs="0" maxOccurs="unbounded"/>
                </sequence>
                <attribute name="class" type="string" use="required"/>
                <attribute name="converters" type="string"/>
            </complexType>
        </element>

        <complexType name="Page">
            <attribute name="response" type="int" use="required"/>
            <attribute name="dest" type="string" use="required"/>
        </complexType>

        <element name="converter">
            <complexType>
                <attribute name="format" type="string" use="required"/>
                <attribute name="lenient" type="boolean" default="true"/>
            </complexType>
        </element>

    </schema>

Once the schema is complete we must register it in Spring using two files.

/META-INF/spring.schemas:

    http\://nurkiewicz.blogspot.com/spring/onion/spring-onion.xsd=/com/blogspot/nurkiewicz/onion/ns/spring-onion.xsd

/META-INF/spring.handlers:

    http\://nurkiewicz.blogspot.com/spring/onion/spring-onion.xsd=com.blogspot.nurkiewicz.onion.ns.OnionNamespaceHandler

The first maps the schema URL to a local schema location on the classpath, the other points to a so-called namespace handler. This class is fairly straightforward – it tells Spring what to do with every top-level custom XML tag from this namespace encountered in a configuration file:

    import org.springframework.beans.factory.xml.NamespaceHandlerSupport;

    public class OnionNamespaceHandler extends NamespaceHandlerSupport {
        public void init() {
            registerBeanDefinitionParser("entity", new EntityBeanDefinitionParser());
            registerBeanDefinitionParser("converter", new ConverterBeanDefinitionParser());
        }
    }

So, when a <converter format="html"/> piece of XML is found by Spring, it knows that our ConverterBeanDefinitionParser needs to be used. Remember that if our custom tag has children (as in the case of <entity/>), the bean definition parser is called only for the top-level tag; it is up to us how we parse and handle the children. OK, so a single <converter/> tag is supposed to create the following two beans:

    <bean id="htmlConverter" class="com.blogspot.nurkiewicz.onion.Converter" factory-bean="convertersFactory" factory-method="build">
        <constructor-arg value="html.xml"/>
        <constructor-arg value="true"/>
        <property name="reader" ref="htmlReader"/>
    </bean>

    <bean id="htmlReader" class="com.blogspot.nurkiewicz.onion.ReaderFactoryBean">
        <property name="format" value="html"/>
    </bean>

The responsibility of a bean definition parser is to programmatically register the bean definitions otherwise defined in XML. I won't go into the details of the API, but compare the code below with the XML snippet above – they match each other closely:
    import org.springframework.beans.factory.support.AbstractBeanDefinition;
    import org.springframework.beans.factory.support.BeanDefinitionBuilder;
    import org.springframework.beans.factory.support.BeanDefinitionRegistry;
    import org.springframework.beans.factory.xml.AbstractBeanDefinitionParser;
    import org.springframework.beans.factory.xml.ParserContext;
    import org.w3c.dom.Element;

    public class ConverterBeanDefinitionParser extends AbstractBeanDefinitionParser {

        @Override
        protected AbstractBeanDefinition parseInternal(Element converterElement, ParserContext parserContext) {
            final String format = converterElement.getAttribute("format");
            final String lenientStr = converterElement.getAttribute("lenient");
            final boolean lenient = lenientStr != null ? Boolean.valueOf(lenientStr) : true;
            final BeanDefinitionRegistry registry = parserContext.getRegistry();
            final AbstractBeanDefinition converterBeanDef = converterBeanDef(format, lenient);
            registry.registerBeanDefinition(format + "Converter", converterBeanDef);
            final AbstractBeanDefinition readerBeanDef = readerBeanDef(format);
            registry.registerBeanDefinition(format + "Reader", readerBeanDef);
            return null;
        }

        private AbstractBeanDefinition readerBeanDef(String format) {
            return BeanDefinitionBuilder.
                    rootBeanDefinition(ReaderFactoryBean.class).
                    addPropertyValue("format", format).
                    getBeanDefinition();
        }

        private AbstractBeanDefinition converterBeanDef(String format, boolean lenient) {
            AbstractBeanDefinition converterBeanDef = BeanDefinitionBuilder.
                    rootBeanDefinition(Converter.class.getName()).
                    addConstructorArgValue(format + ".xml").
                    addConstructorArgValue(lenient).
                    addPropertyReference("reader", format + "Reader").
                    getBeanDefinition();
            converterBeanDef.setFactoryBeanName("convertersFactory");
            converterBeanDef.setFactoryMethodName("build");
            return converterBeanDef;
        }
    }

Do you see how parseInternal() receives the XML Element representing the <converter/> tag, extracts its attributes and registers bean definitions? It's up to you how many beans you define in an AbstractBeanDefinitionParser implementation. Just remember that we are only constructing the configuration here – no instantiation has taken place yet. Once the XML file is fully parsed and all bean definition parsers have been triggered, Spring will start bootstrapping our application. One thing to keep in mind is returning null at the end. The API sort of expects you to return a single bean definition, but there is no need to restrict ourselves – null is fine.

The second custom tag that we support is <entity/>, which registers three beans at once. It's similar and thus not that interesting; see the full source of EntityBeanDefinitionParser. One important implementation detail that can be found there is the usage of ManagedList. The documentation mentions it only vaguely, but it's quite valuable. If you want to define a list of beans to be injected knowing their IDs, a simple List<String> is not sufficient – you must explicitly tell Spring you mean a list of bean references:

    List<BeanMetadataElement> converterRefs = new ManagedList<>();
    for (String converterName : converters) {
        converterRefs.add(new RuntimeBeanReference(converterName));
    }
    return BeanDefinitionBuilder.
            rootBeanDefinition("com.blogspot.nurkiewicz.FooService").
            addPropertyValue("converters", converterRefs).
            getBeanDefinition();

Using JAXB to simplify bean definition parsers

OK, so by now you should be familiar with custom Spring namespaces and how they can help you. However they are quite low level, requiring you to parse custom tags using the raw XML DOM API. My teammate discovered that since we already have an XSD schema file, why not use JAXB to handle the XML parsing? First we ask Maven to generate Java beans representing the XML types and elements during the build:

    <build>
        <plugins>
            <plugin>
                <groupId>org.jvnet.jaxb2.maven2</groupId>
                <artifactId>maven-jaxb22-plugin</artifactId>
                <version>0.8.3</version>
                <executions>
                    <execution>
                        <id>xjc</id>
                        <goals>
                            <goal>generate</goal>
                        </goals>
                    </execution>
                </executions>
                <configuration>
                    <schemaDirectory>src/main/resources/com/blogspot/nurkiewicz/onion/ns</schemaDirectory>
                    <generatePackage>com.blogspot.nurkiewicz.onion.ns.xml</generatePackage>
                </configuration>
            </plugin>
        </plugins>
    </build>

Under /target/generated-sources/xjc you will discover a couple of Java files.
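As a rough idea of what xjc derives from the schema above – heavily simplified for illustration, since the real generated sources also carry JAXB annotations, setters and an ObjectFactory, and the reserved attribute name class surfaces as a clazz property (used later in the post via getClazz()):

    import java.util.ArrayList;
    import java.util.List;

    // Simplified sketch of the classes generated from the "entity" element and "Page" type.
    public class Entity {
        protected final List<Page> page = new ArrayList<Page>();
        protected String clazz;        // the "class" attribute, renamed to avoid clashing with getClass()
        protected String converters;

        public List<Page> getPage() { return page; }
        public String getClazz() { return clazz; }
        public String getConverters() { return converters; }
    }

    class Page {
        protected int response;
        protected String dest;

        public int getResponse() { return response; }
        public String getDest() { return dest; }
    }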
I like generated JAXB models to have a common prefix like Xml, which can easily be achieved with a custom bindings.xjb file placed next to spring-onion.xsd:

    <bindings version="1.0"
              xmlns="http://java.sun.com/xml/ns/jaxb"
              xmlns:xs="http://www.w3.org/2001/XMLSchema"
              extensionBindingPrefixes="xjc">
        <bindings schemaLocation="spring-onion.xsd" node="/xs:schema">
            <schemaBindings>
                <nameXmlTransform>
                    <typeName prefix="Xml"/>
                    <anonymousTypeName prefix="Xml"/>
                    <elementName prefix="Xml"/>
                </nameXmlTransform>
            </schemaBindings>
        </bindings>
    </bindings>

How does it change our custom bean definition parser? Previously we had this:

    final String clazz = entityElement.getAttribute("class");
    //...
    final NodeList pageNodes = entityElement.getElementsByTagNameNS(NS, "page");
    for (int i = 0; i < pageNodes.getLength(); ++i) {
        //...

Now we simply traverse Java beans:

    final XmlEntity entity = JaxbHelper.unmarshal(entityElement);
    final String clazz = entity.getClazz();
    //...
    for (XmlPage page : entity.getPage()) {
        //...

JaxbHelper is just a simple tool that hides the checked exceptions and JAXB mechanics from the outside:

    public class JaxbHelper {

        private static final Unmarshaller unmarshaller = create();

        private static Unmarshaller create() {
            try {
                return JAXBContext.newInstance("com.blogspot.nurkiewicz.onion.ns.xml").createUnmarshaller();
            } catch (JAXBException e) {
                throw Throwables.propagate(e);
            }
        }

        public static <T> T unmarshal(Element elem) {
            try {
                return (T) unmarshaller.unmarshal(elem);
            } catch (JAXBException e) {
                throw Throwables.propagate(e);
            }
        }
    }

A couple of words as a summary. First of all, I don't encourage you to auto-generate repository/service/controller bean definitions for every entity. It's actually a poor practice, but the domain is familiar to all of us, so I thought it would make a good example. Secondly, and more importantly, custom XML namespaces are a powerful tool that should be used as a last resort when everything else fails, namely abstract beans, factory beans and Java configuration. Typically you'll want this kind of feature in frameworks or tools built on top of Spring. In that case check out the full source code on GitHub.

Reference: Custom Spring namespaces made easier with JAXB from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog.

Fast Remote Service Tests

Testing code that interacts with remote services is often pretty hard. There are a lot of tradeoffs that influence what tests you can write and how many tests to write. Most of the time you have zero control over the data you get from the service, which makes assertions tough, to say the least. A while ago I used the VCR library to write some Ruby tests against a remote service. VCR addresses the above problems: it records your test suite's HTTP interactions and replays them during future runs. The obvious benefits are fast and repeatable tests.

This week I was wondering whether there is such a thing for Java as well. As it turns out, there is Betamax. Betamax is actually a Groovy port of VCR that can be used with any JVM language. Betamax installs a proxy between you and the target host, records each request and response on a tape and replays the tape for known requests. It works for any HTTP client that respects Java's proxy settings, and for a bunch that don't, such as Apache HttpClient and WSLite.

Example

In a JUnit test you can use Betamax as a method-level TestRule. On each test method that should record and replay you put a @Betamax annotation and set a tape. Consider the following example where I use the Spotify Metadata API to get the popularity of an artist. In this example I use the Apache HttpClient library and configure it for Betamax.

    public class SpotifyTest {

        @Rule
        public final Recorder recorder = new Recorder();

        private final DefaultHttpClient http = new DefaultHttpClient();

        @Betamax(tape = "fixtures/popularity")
        @Test
        public void get_popularity() throws Exception {
            Spotify spotify = new Spotify(http);
            assertThat(spotify.popularity("The Beatles"), is(.55f));
        }

        @Before
        public void setUp() throws Exception {
            BetamaxRoutePlanner.configure(http);
        }
    }

At the moment of writing this code the popularity of The Beatles is .55, but as this number is based on user opinion it is highly likely to change. Using a Betamax tape yields the same response (as long as the request does not change) and allows us to assert .55 for popularity.

HTTPS

As shown, Betamax properly records and replays any HTTP communication using either a proxy or a wrapper class (as in the example). HTTPS is also supported, but may be a bit more interesting when you use Betamax in a proxy-based setup; using a wrapper will work just fine. The problem with HTTPS and a proxy-based setup obviously is that the proxy cannot intercept data on standard HTTPS communication – this is why we trust HTTPS. Betamax has its way around this: you can enable sslSupport on the Betamax Recorder. When your client code is okay with a broken SSL certificate chain, you can make this work. Again, this is only really a problem in a proxy-based setup; using a client wrapper enables Betamax directly on API calls, easing HTTPS communication.

Try it yourself

Betamax can help you to write fast and repeatable unit tests for clients of remote services. The biggest benefit to me is that the tests are really fast because remote communication is eliminated. Asserting on specific values can be helpful, although personally I like a property-based style for these tests (e.g. popularity must be a number >= 0 and <= 5). Give Betamax a try the next time you interact with a remote service.

Reference: Fast Remote Service Tests from our JCG partner Bart Bakker at the Software Craft blog.

Using Scala traits as modules, or the “Thin Cake” Pattern

I would like to describe a pure-Scala approach to modularity that we are successfully using in a couple of our Scala projects. But let's start with how we do Dependency Injection (see also my other blogs). Each class can have dependencies in the form of constructor parameters, e.g.:

    class WheatField
    class Mill(wheatField: WheatField)
    class CowPasture
    class DairyFarm(cowPasture: CowPasture)
    class Bakery(mill: Mill, dairyFarm: DairyFarm)

At the "end of the world" there is a main class which runs the application and where the whole object graph is created:

    object BakeMeCake extends App {
      // creating the object graph
      lazy val wheatField = new WheatField()
      lazy val mill = new Mill(wheatField)
      lazy val cowPasture = new CowPasture()
      lazy val dairyFarm = new DairyFarm(cowPasture)
      lazy val bakery = new Bakery(mill, dairyFarm)

      // using the object graph
      val cake = bakery.bakeCake()
      me.eat(cake)
    }

The wiring can be done manually, or e.g. using MacWire. Note that we can do scoping using Scala constructs: a lazy val corresponds to a singleton object (in the constructed object graph), a def to a dependent-scoped object (a new instance will be created for each usage).

Thin Cake pattern

What if the object graph, and at the same time the main class, becomes large? The answer is simple: we have to break it into pieces, which will be the "modules". Each module is a Scala trait and contains some part of the object graph. For example:

    trait CropModule {
      lazy val wheatField = new WheatField()
      lazy val mill = new Mill(wheatField)
    }

    trait LivestockModule {
      lazy val cowPasture = new CowPasture()
      lazy val dairyFarm = new DairyFarm(cowPasture)
    }

The main object then becomes a composition of traits. This is exactly what also happens in the Cake Pattern; however, here we are using only one element of it, hence the "Thin Cake" pattern name.

    object BakeMeCake extends CropModule with LivestockModule {
      lazy val bakery = new Bakery(mill, dairyFarm)

      val cake = bakery.bakeCake()
      me.eat(cake)
    }

If you have ever used Google Guice, you may see a similarity: trait-modules directly correspond to Guice modules. However, here we gain additional type-safety and a compile-time check that the dependency requirements of all classes are met. Of course, the module trait can contain more than just new object instantiations, but you have to be careful not to put too much logic in there – at some point you probably need to extract a class. Typical code that also goes into modules is e.g. actor creation code and setting up caches.

Dependencies

What if our trait-modules have inter-module dependencies? There are two ways we can deal with that problem. The first is abstract members. If an instance of a class is needed in our module, we can simply define it as an abstract member of the trait-module. This abstract member has to be implemented in some other module with which our module gets composed in the end. Using a consistent naming convention helps here. The fact that all abstract dependencies are defined at some point is checked by the compiler.

The second way is composition via inheritance. If we e.g. want to create a bigger module out of three smaller modules, we can simply extend the other module-traits, and due to the way inheritance works we can use all of the objects defined there.
Putting the two methods together we get, for example:

    // composition via inheritance: bakery depends on crop and livestock modules
    trait BakeryModule extends CropModule with LivestockModule {
      lazy val bakery = new Bakery(mill, dairyFarm)
    }

    // abstract member: we need a bakery
    trait CafeModule {
      lazy val espressoMachine = new EspressoMachine()
      lazy val cafe = new Cafe(bakery, espressoMachine)

      def bakery: Bakery
    }

    // the abstract bakery member is implemented in another module
    object CafeApp extends CafeModule with BakeryModule {
      cafe.orderCoffeeAndCroissant()
    }

Multiple implementations

Taking this idea a bit further, in some situations we might have trait-module-interfaces and several trait-module-implementations. The interface would contain only abstract members, and the implementations would wire the appropriate classes. If other modules depend only on the trait-module-interface, we can use any implementation when we do the final composition.

This isn't perfect, however. The implementation must be known statically, when writing the code – we cannot dynamically decide which implementations we want to use. If we want to dynamically choose an implementation for only one trait-interface, that's not a problem – we can use a simple "if". But every additional combination causes an exponential increase in the number of cases we have to cover. For example:

    trait MillModule {
      def mill: Mill
    }

    trait CornMillModule extends MillModule {
      lazy val cornField = new CornField()
      lazy val mill = new CornMill(cornField)
    }

    trait WheatMillModule extends MillModule {
      lazy val wheatField = new WheatField()
      lazy val mill = new WheatMill(wheatField)
    }

    val modules = if (config.cornPreferred) {
      new BakeryModule with CornMillModule
    } else {
      new BakeryModule with WheatMillModule
    }

Can it be any better? Sure, there's always something to improve! One of the problems was already mentioned: you cannot choose which trait-module to use dynamically (run-time configuration). Another area that could be improved is the relation between trait-modules and packages. A good approach is to have a single trait-module per package (or per package tree). That way you logically group code that implements some functionality in a single package and specify in the trait-module how the classes that form the implementation should be used. But why then do you have to define both the package and the trait-module? Maybe they can be merged together somehow? Increasing the role of packages is also an idea I've been exploring in the Veripacks project.

It may also be good to restrict the visibility of some of the defined objects. Following the "one public class per package" rule, here we might have "one public object per trait-module". However, if we are creating bigger trait-modules out of smaller ones, the bigger module has no way to restrict the visibility of the objects in the modules it is composed of. In fact, the smaller modules would have to know the maximum scope of their visibility and use an appropriate private[package name] modifier (assuming the bigger module is in a parent package).

Summing up

Overall, we found this solution to be a simple, clear way to structure our code and create the object graph. It uses only native Scala constructs, does not depend on any frameworks or libraries, and provides compile-time checking that everything is defined properly. Bon Appetit!

Reference: Using Scala traits as modules, or the "Thin Cake" Pattern from our JCG partner Adam Warski at the Blog of Adam Warski blog.

Cost of Delay Due to Technical Debt, Part 4

Cost of delay part 1 was about not shipping on time. Cost of delay part 2 was due to multitasking. Cost of delay part 3 was due to indecision. This part is about the cost of delay due to technical debt.

One of the big problems in backlog management is ranking technical debt stories. It's even more of a problem when it's time to rank technical debt projects. You think product owners have feature-itis problems? Try having them rank technical debt projects. Almost impossible. But if you really want the value from your project portfolio, you will look at your impediments. And, if you are like many of my clients, you have technical debt: a build system that isn't sufficiently automated, insufficient automated system tests, too many system-level defects, who knows what else. If you addressed the build system, and maybe some of the system tests – if you created a timeboxed technical debt project – you could save time on all of the other projects in this code base. All of them.

Imagine this scenario: you have a 2000-person Engineering organization. It takes you 3 weeks (yes, 21 calendar days) to create a real build that you know works. You currently release every 12-18 months. You want to release every 3-6 months, because you have to respond to market competitors. In order to do that, you have to fix the build system. But you have a list of possible features, an arm and a leg long. What do you do?

This client first tried to do more features. They tried to do features in iterations. Oh, they tried. By the time they called me, they were desperate. I did an assessment. I asked them if they knew how much the build system cost them. They had a group of 12 people who "supported" the build system. It took at least 10 days, but closer and closer to 20-25 days, to get a working build. They tried to estimate the cost of the build in just this group of people: 12 people times 21 days. They did not account for the cost of delay in their projects. I showed them the back-of-the-napkin calculation in part 1 and asked, "How many releases have you postponed for at least a month due to the build system?" They had an answer, which was in the double digits. They had sales in the millions for the maximum revenue.

But they still had a sticking point. If they funded this project, they would have no builds for four weeks. None. Nada. Zilch. And their best people (whatever that means) would be on the build project for four weeks. So no architecture development, no design, and the best people working on nothing other than the build system. This company was convinced that stopping Engineering for a month was a radical step. Does it matter how long your iterations are, if you can't build during the iterations and get feedback?

They finally did fund this project, after about six months of hobbling along. After four weeks of intense work by 16 of their smartest people, they had an automated build system that anyone in Engineering could use. It still took 2 days to build. But that was heaven for everyone. They continued the build system work for another month, in parallel with regular Engineering work, to reduce the build time further. After all the build system work, Engineering was able to change. They were able to transition to agile. Now Engineering could make progress on their feature list and release when it made sense for their business. What was the payback for the build system work? Almost immediate, Engineering staff said.
When I asked one of the VPs, he estimated, off the record, that they had lost more than the "millions" of dollars of revenue because they did not have the features needed at the time the market demanded them. All because of the build system. People didn't plan for things to get this way. They got that way a little at a time, and because no one wanted to fund work on the build system.

This is a dramatic story of cost of delay due to technical debt. I bet you have a story just like this one. The cost of delay due to technical debt is real. If you never look at your technical debt and see where it impedes you, you are not looking at the value of your entire project portfolio. If you eliminated a technical debt impediment, would that change one of your costs of delay?

Reference: Cost of Delay Due to Technical Debt, Part 4 from our JCG partner Johanna Rothman at the Managing Product Development blog.

The regex that broke a server

I never thought I would see an unresponsive server due to a bad regex matcher, but that's just what happened to one of our services, rendering it unresponsive.

Let's assume we parse some external dealer car info. We are trying to find all those cars with "no air conditioning" among various available input patterns (but without matching patterns such as "mono air conditioning"). The regex that broke our service looks like this:

    String TEST_VALUE = "ABS, traction control, front and side airbags, Isofix child seat anchor points, no air conditioning, electric windows, \r\nelectrically operated door mirrors";
    double start = System.nanoTime();
    Pattern pattern = Pattern.compile("^(?:.*?(?:\\s|,)+)*no\\s+air\\s+conditioning.*$");
    assertTrue(pattern.matcher(TEST_VALUE).matches());
    double end = System.nanoTime();
    LOGGER.info("Took {} micros", (end - start) / 1000);

After 2 minutes this test was still running and one CPU core was fully overloaded.

First, the matches method consumes the entire input, so we don't need the start (^) or the end ($) delimiters, and because of the new-line characters in the input string we must instruct our regex Pattern to operate in MULTILINE mode:

    Pattern pattern = Pattern.compile("(?:.*?(?:\\s|,)+)*no\\s+air\\s+conditioning.*?", Pattern.MULTILINE);

Let's see how multiple versions of this regex behave (durations in microseconds):

- "(?:.*?(?:\\s|,)+)*no\\s+air\\s+conditioning.*?" – 35699.334. This is way too slow.
- "(?:.*?(?:\\s|,)+)?no\\s+air\\s+conditioning.*?" – 108.686. The non-capturing group doesn't need the one-or-many (+) multiplier, so we can replace it with zero-or-one (?).
- "(?:.*?\\b)?no\\s+air\\s+conditioning.*?" – 153.636. This works for more input data than the previous one, which only uses the space (\s) and the comma (,) to separate the matched pattern.
- "\\bno\\s+air\\s+conditioning" – 78.831. find() is much faster than matches(), and we are only interested in the first occurrence of this pattern.

Why not use String.indexOf() instead? While this would be much faster than using a regex, we would still have to consider the start of the string, patterns such as "mono air conditioning", tabs or multiple space characters between our pattern tokens. Custom implementations like that may be faster, but are less flexible and take more time to implement.

Conclusion

Regex is a fine tool for pattern matching, but you must not take it for granted, since small changes may yield big differences. The reason why the first regex was counterproductive is catastrophic backtracking, a phenomenon that every developer should be aware of before starting to write regular expressions.

Reference: The regex that broke a server from our JCG partner Vlad Mihalcea at the Vlad Mihalcea's Blog blog.

Agile Mindset During Programming

I'm Stuck

Recently I found myself in several situations where I just couldn't write code, or at least not "good code". First, I had writer's block: I just could not see what my next test should be, and I could not find the name for the class / interface I needed. Second, I just couldn't simplify my code. Each time I tried to change something (a class / method) into a simpler construction, things got worse, sometimes to the point of breaking. I was stuck.

The Tasks

Refactor to Patterns

One of the situations we had was to refactor a certain piece of the code: the manual wiring part. We use the DI pattern in ALL of our system, but due to some technical constraints we must do the injection by hand. We can live with that. Refactoring the wiring part would have given us a nice option to change some of the implementations during boot – some of the concrete classes should be different than others based on some flags. The design patterns we understood we would need were Factory Method and Abstract Factory. This last remark is important to understand why I had those difficulties; I will get to it later.

New Module

Another task was to create a new module that gets some input items, extracts data from them, sends it to a service, parses the response, modifies the data accordingly and returns items with modified data. While talking about it with a peer, we understood we needed several classes. As always, we wanted to have high-quality code by using the known OOD principles wherever we could apply them.

So What Went Wrong?

In the case of refactoring the wiring part, I constantly tried to immediately create the end result of the abstract factory and the factory method that would call it. There are a lot of details in that wiring code – some are common and some needed to be separated by the factory. I just couldn't find the correct places to extract to methods and then to another class. Each time I had to move code from one location and dependency to another. I couldn't tell what exactly the factory's signature and methods would be.

In the case of the new module, I knew that I wanted several classes, each with one responsibility. I knew I wanted some level of abstraction and good encapsulation. So I kept trying to create this great encapsulated abstract data structure, and the code kept being extremely complicated. Important note: I always take a test-first approach. Each time I tried to create a test for a certain behavior, it was really, really complicated.

I stopped. I went to have a cup of coffee. I went to read some unrelated stuff. And I talked to one of my peers. We both understood what we needed to do. I went home… and then it hit me. The problem I had was that I knew where I needed to go, but instead of taking small steps, I kept trying to take one big leap at once. Which brings me to the analogy between Agile and good programming habits (and TDD would be one of them).

Agile and Programming Analogy

One of the advantages of Agile development that I really like is the small steps (iterations) we take in order to reach our goal. The original post illustrates this with two pictures: one shows how we aim towards a far-away goal and probably miss; the other shows how we divide the distance into iterations and aim incrementally.

Develop in Small Incremental Iterations

This is the moral of the story. Even if you know exactly how the structure of the classes should look. Even if you know exactly which design pattern to use. Even if you know what to do. Even if you know exactly how the end result should look.
Keep on using the methods and practices that bring you to the goal in the safest and fastest way. Take small steps. Test each step. Increment the functionality of the code in small chunks. TDD. Pair. Keep calm.

Reference: Agile Mindset During Programming from our JCG partner Eyal Golan at the Learning and Improving as a Craftsman Developer blog.

Design Pattern: Immutable Embedded Builder

Last week I wrote about what makes a pattern an anti-pattern. This week I present a design pattern… or wait… perhaps this is an anti-pattern. Or is it? Let's see!

The builder pattern is a programming style in which one class builds instances of another. The original aim of the builder pattern is to separate the building process of an object, which can be fairly complex in some cases, from the class of the object itself; thus the builder can deliver different types of objects based on how the building process progresses. This is a clear example of the separation of concerns.

Immutable objects are objects that can not be altered after the creation process. Builders and immutable objects just come together very naturally. The builder and the built objects are very closely related, and therefore they are usually put into the same package. But why are they implemented in separate classes? On one hand: they have to be separate classes, of course – that is what the whole thing is about. But on the other hand: why can the builder not be an inner class of the built class?

Builders usually collect the building information in their own state, and when the caller requests the object to be built, this information is used to build the built object. This "use" is a copy operation most of the time. If the builder is an inner class, all this information can be stored in the built object instead. Note that the inner class can access all private parts of the class embedding it. The builder can create a built object that is just not ready yet and store the build information in it. When requested to build, all it does are the final touches.

This pattern is followed by Guava for the immutable collections. The builders are static inner classes. If you look at the code of ImmutableList you can see that there is an internal Builder class inside the abstract class. But this is not the only way to embed the builder and the implementation. What if we embed the implementation inside the builder? The builder is the only code that needs mutable access to the class; an interface defining the query methods the class implements should be enough for anybody else. And if we get to this point, why not create a matrjoschka? Let's have an interface. Let's have a builder inside the interface as an inner class (static and public by default, and it can not be any other way). Let's have the implementation inside the builder as a private static class implementing the outer interface.

    public interface Knight {
        boolean saysNi();

        public class Builder {
            private Implementation implementation = new Implementation();

            public Builder setState(String say) {
                implementation.say = say;
                return this;
            }

            public Implementation build() {
                Implementation knight = implementation;
                implementation = null;
                return knight;
            }

            private static class Implementation implements Knight {
                private String say;

                public boolean saysNi() {
                    return say.indexOf("ni") != -1;
                }
            }
        }
    }

The builder can access any fields of the Knight implementation since they are in the same top-level class (JLS 1.7, section 6.6.1, Determining Accessibility). There is no other way (except nasty reflection tricks or byte code abuse, which are out of scope for now) to get access to the implementation except using the builder. The builder can be used to build the implementation, and once it has returned it, the builder has no access to it anymore; there is no way to modify the implementation via the builder. If the implementation is immutable, it is guaranteed to keep its state. Is this a pattern or an antipattern?
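Whatever the verdict, here is a quick, hypothetical sketch (class and method names beyond the Knight example above are made up) of how the caller's side reads – client code only ever sees the Knight interface and its Builder, never the Implementation:

    public class KnightDemo {
        public static void main(String[] args) {
            // Builder is implicitly public and static because it is nested in an interface
            Knight knight = new Knight.Builder()
                    .setState("We are the knights who say ni!")
                    .build();
            System.out.println(knight.saysNi()); // prints true
        }
    }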
Reference: Design Pattern: Immutable Embedded Builder from our JCG partner Peter Verhas at the Java Deep blog. ...

Injecting configuration values using CDI’s InjectionPoint

Dependency injection is a great technology for organizing class dependencies. All the class instances you need in your current class are provided at runtime by the DI container. But what about your configuration? Of course, you can create a "Configuration" class, inject it everywhere you need it and get the necessary value(s) from it. But CDI lets you do this in an even more fine-grained way using the InjectionPoint concept.

If you write a @Produces method, you can let your CDI container also inject some information about the code into which the newly created/produced value is injected. A complete list of the available methods can be found here. The interesting point is that you can query this class for all the annotations the current injection point has:

    Annotated annotated = injectionPoint.getAnnotated();
    ConfigurationValue annotation = annotated.getAnnotation(ConfigurationValue.class);

As the example code above shows, we can introduce a simple @Qualifier annotation that marks all the injection points where we need a specific configuration value. In this blog post we just want to use strings as configuration values, but the whole concept can of course be extended to other data types as well. The already mentioned @Qualifier annotation looks like the following one:

    @Target({ElementType.FIELD, ElementType.METHOD})
    @Retention(RetentionPolicy.RUNTIME)
    @Qualifier
    public @interface ConfigurationValue {
        @Nonbinding ConfigurationKey key();
    }

    public enum ConfigurationKey {
        DefaultDirectory, Version, BuildTimestamp, Producer
    }

The annotation has the retention policy RUNTIME because the CDI container has to evaluate it while the application is running. It can be used on fields and methods. Beyond that, we also create a key attribute backed by the enum ConfigurationKey, where we can introduce all the configuration values we need – in our example, a configuration value for a default directory, for the version of the program, and so on. We mark this attribute as @Nonbinding to prevent the CDI container from using the value of this attribute to choose the correct producer method. If we did not use @Nonbinding, we would have to write a @Produces method for each value of the enum; but here we want to handle all of them within one method. The @Produces method for strings that are annotated with @ConfigurationValue is shown in the following code example:

    @Produces
    @ConfigurationValue(key = ConfigurationKey.Producer)
    public String produceConfigurationValue(InjectionPoint injectionPoint) {
        Annotated annotated = injectionPoint.getAnnotated();
        ConfigurationValue annotation = annotated.getAnnotation(ConfigurationValue.class);
        if (annotation != null) {
            ConfigurationKey key = annotation.key();
            if (key != null) {
                switch (key) {
                    case DefaultDirectory:
                        return System.getProperty("user.dir");
                    case Version:
                        return JB5n.createInstance(Configuration.class).version();
                    case BuildTimestamp:
                        return JB5n.createInstance(Configuration.class).timestamp();
                }
            }
        }
        throw new IllegalStateException("No key for injection point: " + injectionPoint);
    }

The @Produces method gets the InjectionPoint injected as a parameter, so that we can inspect its values. As we are interested in the annotations of the injection point, we check whether the current injection point is annotated with @ConfigurationValue. If this is the case, we look at the @ConfigurationValue's key attribute and decide which value to return. That's it.
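As a hedged sketch of one possible variation (the file name, class name and key naming here are assumptions for illustration, not part of the original example), the switch above could just as well delegate to a properties file loaded once from the classpath:

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Properties;

    public final class ConfigurationProperties {

        private static final Properties PROPS = load();

        private static Properties load() {
            Properties props = new Properties();
            // assumption: a config.properties file on the classpath whose keys match the enum names
            try (InputStream in = ConfigurationProperties.class.getResourceAsStream("/config.properties")) {
                if (in != null) {
                    props.load(in);
                }
            } catch (IOException e) {
                throw new IllegalStateException("Could not read config.properties", e);
            }
            return props;
        }

        // called from the producer instead of the switch, e.g. ConfigurationProperties.value(key)
        public static String value(ConfigurationKey key) {
            return PROPS.getProperty(key.name());
        }

        private ConfigurationProperties() {
        }
    }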
In a more complex application we can of course load the configuration from some files (as sketched above) or from some other kind of data store, but the concept remains the same. Now we can easily let the CDI container inject the configuration values we need with just these two lines of code:

    @Inject
    @ConfigurationValue(key = ConfigurationKey.DefaultDirectory)
    private String defaultDirectory;

Conclusion: Making a set of configuration values accessible throughout the whole application has never been easier.

Reference: Injecting configuration values using CDI's InjectionPoint from our JCG partner Martin Mois at the Martin's Developer World blog.
