Why do we mock?

I do Java interviews. During the interviews I ask technical questions that I know the answers to. You may think this is boring. To be honest: sometimes it is. But sometimes it is interesting to learn what misconceptions there are. During one interview I happened to ask what you can read in the title: “Why do we mock?”. The answer was interesting. I cannot quote it word for word, and I would not want to for editorial, ethical and many other reasons. I would also like to stress that a misconception does not qualify the person as clever or stupid or anything. It is just a misconception that comes from someone’s personal experience. Here is what she/he was saying:

We use mocks so that we can write tests for units that use heavy services that may not be available when we run the test. It is also important to mock so that the tests can run fast, even when the services would make the testing slow. It may also happen in an enterprise environment that some of the services are not available when we develop, and therefore testing would be impossible if we did not use mocks.

Strictly speaking, the above statements are true. I would not argue about why you or anybody else uses mocks. But as a Java professional I would argue about what the major and first goal we use mocks for is:

We use mocks to separate the modules from each other during testing, so that we can tell from the test results which module is faulty and which passed the tests.

This is what unit tests were invented for in the first place. There are side effects, like those mentioned above, and there are other side effects. For example, unit tests are great documentation. If formulated well, they explain better than any javadoc how the unit works and what interfaces it needs and provides. Especially since javadocs tend to become outdated, while JUnit tests fail during the build if they get outdated. Another side effect is that you write testable code if you craft the unit tests first, and this generally improves your coding style.
Saying it simply: unit testing is testing units. Units bare, stripped of their dependencies. And this can be done with mocks. The other things are side effects. We like them, but they are not the main reason to mock when we unit test.

Reference: Why do we mock? from our JCG partner Peter Verhas at the Java Deep blog....
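To make the isolation argument concrete, here is a minimal sketch in plain Java with no mocking framework (all names here are hypothetical, invented for illustration): a `PriceService` dependency is replaced by a hand-rolled fake so the unit under test, `InvoiceCalculator`, can be verified on its own. A library such as Mockito would achieve the same with less ceremony.

```java
// Hypothetical collaborator: in production this could be a slow remote service.
interface PriceService {
    double priceOf(String item);
}

// The unit under test depends only on the interface, not on a live service.
class InvoiceCalculator {
    private final PriceService prices;

    InvoiceCalculator(PriceService prices) {
        this.prices = prices;
    }

    double total(String item, int quantity) {
        return prices.priceOf(item) * quantity;
    }
}

public class InvoiceCalculatorTest {
    public static void main(String[] args) {
        // Hand-rolled mock: a canned answer, no network, no heavy service.
        PriceService fake = item -> 2.5;

        double total = new InvoiceCalculator(fake).total("widget", 4);

        // If this fails, the fault is in InvoiceCalculator, not in a service.
        if (total != 10.0) {
            throw new AssertionError("expected 10.0 but was " + total);
        }
        System.out.println("total = " + total);
    }
}
```

A failing assertion here can only implicate `InvoiceCalculator` itself; that localization, not speed, is the point.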

Pushing Docker images to Registry

Tech Tip #57 explained how to create your own Docker images. That particular blog specifically showed how to build your own WildFly Docker images on CentOS and Ubuntu. Now you are ready to share your images with the rest of the world. That’s where Docker Hub comes in handy. Docker Hub is the “distribution component” of Docker, or a place to store and search images. From the Getting Started with Docker Hub docs …

The Docker Hub is a centralized resource for working with Docker and its components. Docker Hub helps you collaborate with colleagues and get the most out of Docker.

Getting started with, and pushing images to, Docker Hub is pretty straightforward. Pushing images to Docker Hub requires an account. It can be created as explained here, or rather easily by using the docker login command:

wildfly-centos> docker login
Username: arungupta
Password:
Email: arun.gupta@gmail.com
Login Succeeded

Searching on WildFly shows there are 72 images:

wildfly-centos> docker search wildfly
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
jboss/wildfly WildFly application server image 42 [OK]
sewatech/wildfly Debian + WildFly 8.1.0.Final with OpenJDK ... 1 [OK]
kamcord/wildfly 1
openshift/wildfly-8-centos 1 [OK]
abstractj/wildfly AeroGear Wildfly Docker image 1
jsightler/wildfly_nightly Nightly build from wildfly's github master... 1
centos/wildfly CentOS based WildFly Docker image 1
aerogear/unifiedpush-wildfly 1 [OK]
t0nyhays/wildfly 1 [OK]
tsuckow/wildfly-propeller Dockerization of my application *Propeller... 0 [OK]
n3ziniuka5/wildfly 0 [OK]
snasello/wildfly 0 [OK]
jboss/keycloak-adapter-wildfly 0 [OK]
emsouza/wildfly 0 [OK]
sillenttroll/wildfly-java-8 WildFly container with java 8 0 [OK]
jboss/switchyard-wildfly 0 [OK]
n3ziniuka5/wildfly-jrebel 0 [OK]
dfranssen/docker-wildfly 0 [OK]
wildflyext/wildfly-camel WildFly with Camel Subsystem 0
ianblenke/wildfly 0 [OK]
arcamael/docker-wildfly 0 [OK]
dmartin/wildfly 0 [OK]
pires/wildfly-cluster-backend 0 [OK]
aerogear/push-quickstarts-wildfly-dev 0 [OK]
faga/wildfly Wildfly application server with ubuntu. 0
abstractj/unifiedpush-wildfly AeroGear Wildfly Docker image 0
murad/wildfly - oficial centos image - java JDK "1.8.0_0... 0
aerogear/unifiedpush-wildfly-dev 0 [OK]
ianblenke/wildfly-cluster 0 [OK]
blackhm/wildfly 0
khipu/wildfly8 0 [OK]
rowanto/docker-wheezy-wildfly-java8 0 [OK]
ordercloud/wildfly 0
lavaliere/je-wildfly A Jenkins Enterprise demo master with a Wi... 0
adorsys/wildfly Ubuntu - Wildfly - Base Image 0
akalliya/wildfly 0
lavaliere/joc-wildfly Jenkins Operations Center master with an a... 0
tdiesler/wildfly 0
apiman/on-wildfly8 0 [OK]
rowanto/docker-wheezy-wildfly-java8-ex 0 [OK]
arcamael/blog-wildfly 0
lavaliere/wildfly 0
jfaerman/wildfly 0
yntelectual/wildfly 0
svenvintges/wildfly 0
dbrotsky/wildfly 0
luksa/wildfly 0
tdiesler/wildfly-camel 0
blackhm/wildfly-junixsocket 0
abstractj/unifiedpush-wildfly-dev AeroGear UnifiedPush server developer envi... 0
abstractj/push-quickstarts-wildfly-dev AeroGear UnifiedPush Quickstarts developer... 0
bn3t/wildfly-wicket-examples An image to run the wicket-examples on wil... 0
lavaliere/wildfly-1 0
munchee13/wildfly-node 0
munchee13/wildfly-manager 0
munchee13/wildfly-dandd 0
munchee13/wildfly-admin 0
bparees/wildfly-8-centos 0
lecoz/wildflysiolapie fedora latest, jdk1.8.0_25, wildfly-8.1.0.... 0
lecoz/wildflysshsiolapie wildfly 8.1.0.Final, jdk1.8.0_25, sshd, fe... 0
wildflyext/example-camel-rest 0
pepedigital/wildfly 0 [OK]
tsuckow/wildfly JBoss Wildfly 8.1.0.Final standalone mode ... 0 [OK]
mihahribar/wildfly Dockerfile for Wildfly running on Ubuntu 1... 0 [OK]
hpehl/wildfly-domain Dockerfiles based on "jboss/wildfly" to se... 0 [OK]
raynera/wildfly 0 [OK]
hpehl/wildfly-standalone Dockerfile based on jboss/wildfly to setup... 0 [OK]
aerogear/wildfly 0 [OK]
piegsaj/wildfly 0 [OK]
wildflyext/wildfly Tagged versions JBoss WildFly 0

Official images are tagged jboss/wildfly. In order to push your own image, it needs to be built as a named image, otherwise you’ll get an error as shown:

2014/11/26 09:59:37 You cannot push a "root" repository. Please rename your repository in <user>/<repo> (ex: arungupta/wildfly-centos)

This can be easily done as shown:

wildfly-centos> docker build -t="arungupta/wildfly-centos" .
Sending build context to Docker daemon 4.096 kB
Sending build context to Docker daemon
Step 0 : FROM centos
 ---> ae0c2d0bdc10
Step 1 : MAINTAINER Arun Gupta
 ---> Using cache
 ---> e490dfcb3685
Step 2 : RUN yum -y update && yum clean all
 ---> Using cache
 ---> f212cb9dbcf5
Step 3 : RUN yum -y install xmlstarlet saxon augeas bsdtar unzip && yum clean all
 ---> Using cache
 ---> 28b11e6151f0
Step 4 : RUN groupadd -r jboss -g 1000 && useradd -u 1000 -r -g jboss -m -d /opt/jboss -s /sbin/nologin -c "JBoss user" jboss
 ---> Using cache
 ---> 73603eab89b7
Step 5 : WORKDIR /opt/jboss
 ---> Using cache
 ---> 9a661ae4341b
Step 6 : USER jboss
 ---> Using cache
 ---> 6265153611c7
Step 7 : USER root
 ---> Using cache
 ---> 12ed28a7acb7
Step 8 : RUN yum -y install java-1.7.0-openjdk-devel && yum clean all
 ---> Using cache
 ---> 44c4bb92fa11
Step 9 : USER jboss
 ---> Using cache
 ---> 930cb2a860f7
Step 10 : ENV JAVA_HOME /usr/lib/jvm/java
 ---> Using cache
 ---> fff2c21b0a71
Step 11 : ENV WILDFLY_VERSION 8.2.0.Final
 ---> Using cache
 ---> b7b7ca7a9172
Step 12 : RUN cd $HOME && curl -O http://download.jboss.org/wildfly/$WILDFLY_VERSION/wildfly-$WILDFLY_VERSION.zip && unzip wildfly-$WILDFLY_VERSION.zip && mv $HOME/wildfly-$WILDFLY_VERSION $HOME/wildfly && rm wildfly-$WILDFLY_VERSION.zip
 ---> Using cache
 ---> a1bc79a43c77
Step 13 : ENV JBOSS_HOME /opt/jboss/wildfly
 ---> Using cache
 ---> d46fdd618d55
Step 14 : EXPOSE 8080 9990
 ---> Running in 9c2c2a5ef41c
 ---> 8988c8cbc051
Removing intermediate container 9c2c2a5ef41c
Step 15 : CMD /opt/jboss/wildfly/bin/standalone.sh -b
 ---> Running in 9e28c3449ec1
 ---> d989008d1f84
Removing intermediate container 9e28c3449ec1
Successfully built d989008d1f84

The docker build command builds the image; -t specifies the repository name to be applied to the resulting image. Once the image is built, it can be verified as:

wildfly-centos> docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
arungupta/wildfly-centos latest d989008d1f84 14 hours ago 619.6 MB
wildfly-ubuntu latest a2e96e76eb10 43 hours ago 749.5 MB
0281986b0ed8 44 hours ago 749.5 MB
1a5e1aeadc85 44 hours ago 607.7 MB
wildfly-centos latest 97c8780a7d6a 45 hours ago 619.6 MB
registry latest 7e2db37c6564 13 days ago 411.6 MB
centos latest ae0c2d0bdc10 3 weeks ago 224 MB
jboss/wildfly latest 365390553f92 4 weeks ago 948.7 MB
ubuntu latest 5506de2b643b 4 weeks ago 199.3 MB

Notice the first line shows the named image arungupta/wildfly-centos.

This image can then be pushed to Docker Hub as:

wildfly-centos> docker push arungupta/wildfly-centos
The push refers to a repository [arungupta/wildfly-centos] (len: 1)
Sending image list
Pushing repository arungupta/wildfly-centos (1 tags)
511136ea3c5a: Image already pushed, skipping
5b12ef8fd570: Image already pushed, skipping
ae0c2d0bdc10: Image already pushed, skipping
e490dfcb3685: Image successfully pushed
f212cb9dbcf5: Image successfully pushed
28b11e6151f0: Image successfully pushed
73603eab89b7: Image successfully pushed
9a661ae4341b: Image successfully pushed
6265153611c7: Image successfully pushed
12ed28a7acb7: Image successfully pushed
44c4bb92fa11: Image successfully pushed
930cb2a860f7: Image successfully pushed
fff2c21b0a71: Image successfully pushed
b7b7ca7a9172: Image successfully pushed
a1bc79a43c77: Image successfully pushed
d46fdd618d55: Image successfully pushed
8988c8cbc051: Image successfully pushed
d989008d1f84: Image successfully pushed
Pushing tag for rev [d989008d1f84] on {https://cdn-registry-1.docker.io/v1/repositories/arungupta/wildfly-centos/tags/latest}

And you can verify this by pulling the image:

wildfly-centos> docker pull arungupta/wildfly-centos
Pulling repository arungupta/wildfly-centos
d989008d1f84: Download complete
511136ea3c5a: Download complete
5b12ef8fd570: Download complete
ae0c2d0bdc10: Download complete
e490dfcb3685: Download complete
f212cb9dbcf5: Download complete
28b11e6151f0: Download complete
73603eab89b7: Download complete
9a661ae4341b: Download complete
6265153611c7: Download complete
12ed28a7acb7: Download complete
44c4bb92fa11: Download complete
930cb2a860f7: Download complete
fff2c21b0a71: Download complete
b7b7ca7a9172: Download complete
a1bc79a43c77: Download complete
d46fdd618d55: Download complete
8988c8cbc051: Download complete
Status: Image is up to date for arungupta/wildfly-centos:latest

Enjoy!

Reference: Pushing Docker images to Registry from our JCG partner Arun Gupta at the Miles to go 2.0 … blog....
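The “root repository” error above comes down to a naming rule: only images named <user>/<repo> can be pushed to Docker Hub. A toy check of that rule in pure shell (the helper function is hypothetical, and no Docker daemon is needed to run it):

```shell
# Hypothetical helper: succeeds only when the image name contains a namespace,
# mirroring the <user>/<repo> rule Docker Hub enforces on push.
is_pushable() {
    case "$1" in
        */*) echo "$1: ok to push" ;;
        *)   echo "$1: cannot push a \"root\" repository"; return 1 ;;
    esac
}

is_pushable "wildfly-centos" || true     # unnamed image: rejected
is_pushable "arungupta/wildfly-centos"   # namespaced image: accepted
```

This is why the article rebuilds the image with `docker build -t="arungupta/wildfly-centos" .` before pushing.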

Remove Docker image and container with a criteria

You have installed multiple Docker images and would like to clean them up using the rmi command. So, you list all the images as:

~> docker images --no-trunc
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
mysql latest 98840bbb442c7dc3640ffe3a8ec45d3fee934c2f6d85daaaa2edf65b380485a0 25 hours ago 236 MB
wildfly-centos latest fc378232f03d04bff96987f4c23969461582f73c3a7b473a7cb823ac67939f48 5 days ago 619.6 MB
arungupta/wildfly-centos latest e4f1dbdff18956621aa48a83e5b05df309ee002c3668fa452f1235465d881020 6 days ago 619.6 MB
wildfly-ubuntu latest a2e96e76eb10f4df87d01965ce4df5310de6f9f3927aceb7f5642393050e8752 7 days ago 749.5 MB
registry latest 7e2db37c6564bf030e6c5af9725bf9f9a8196846e3a77a51e201fc97871e2e60 2 weeks ago 411.6 MB
centos latest ae0c2d0bdc100993f7093400f96e9abab6ddd9a7c56b0ceba47685df5a8fe906 4 weeks ago 224 MB
jboss/wildfly latest 365390553f925f96f8c00f79525ad101847de7781bb4fec23b1188f25fe99a6a 5 weeks ago 948.7 MB
centos/wildfly latest 1de9304f58bbc2d401b4dcbba6fc686bdd6f6bff473fe486e7cb905c02163b1a 6 weeks ago 606.6 MB

Then you try to remove the “arungupta/wildfly-centos” image as shown below, but get an error:

~> docker rmi e4f1dbdff18956621aa48a83e5b05df309ee002c3668fa452f1235465d881020
Error response from daemon: Conflict, cannot delete e4f1dbdff189 because the container bafc2b3327a4 is using it, use -f to force
2014/12/02 12:56:53 Error: failed to remove one or more images

So you follow the recommendation of using the -f switch, but get another error:

~> docker rmi -f e4f1dbdff18956621aa48a83e5b05df309ee002c3668fa452f1235465d881020
Error response from daemon: No such id: c345720579e024df4f6d28d2062fda64b7743f7dbb214136d4d2285bc3afc95b
2014/12/02 12:56:55 Error: failed to remove one or more images

What do you do? The first message indicates that the image is used by one of the containers, and that is why it could not be removed. The error message is quite ambiguous, and issue #9458 has been filed about it.
In the meanwhile, an easy way to solve this is to list all the containers as shown:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bafc2b3327a4 arungupta/wildfly-centos:latest "/opt/jboss/wildfly/ 4 days ago boring_ptolemy
bfe71d92a612 arungupta/wildfly-centos:latest "/opt/jboss/wildfly/ 4 days ago agitated_einstein
e1c0965d202c arungupta/wildfly-centos:latest "/opt/jboss/wildfly/ 4 days ago thirsty_blackwell
ddc400c26f1a mysql:latest "/entrypoint.sh mysq 5 days ago Exited (0) 27 minutes ago 3306/tcp sample-mysql
05c741b5e22f wildfly-centos:latest "/opt/jboss/wildfly/ 5 days ago Exited (130) 5 days ago agitated_lalande
ff10b83d6c17 arungupta/wildfly-centos:latest "/opt/jboss/wildfly/ 5 days ago insane_wilson
b2774b17460c arungupta/wildfly-centos:latest "/opt/jboss/wildfly/ 5 days ago goofy_pasteur
2d64f4eb8fb9 arungupta/wildfly-centos:latest "/opt/jboss/wildfly/ 5 days ago focused_lalande
c3f61947671a arungupta/wildfly-centos:latest "/opt/jboss/wildfly/ 5 days ago silly_ardinghelli
ac6f29b92c7a arungupta/wildfly-centos:latest "/opt/jboss/wildfly/ 5 days ago stoic_leakey
fc16f3f8c139 wildfly-centos:latest "/opt/jboss/wildfly/ 5 days ago desperate_babbage
4555628a5d0a wildfly-centos:latest "/opt/jboss/wildfly/ 5 days ago Exited (-1) 4 days ago sharp_bardeen
3bdae1d2527a wildfly-centos:latest "/opt/jboss/wildfly/ 5 days ago Exited (130) 5 days ago sick_lovelace
2697c769c2ee wildfly-centos:latest "/opt/jboss/wildfly/ 5 days ago thirsty_fermat
f8c686d1d6be wildfly-centos:latest "/opt/jboss/wildfly/ 5 days ago Exited (130) 5 days ago cranky_fermat
a1945f2ca473 wildfly-centos:latest "/opt/jboss/wildfly/ 5 days ago Exited (-1) 4 days ago suspicious_turing
31b9c4df0633 arungupta/wildfly-centos:latest "/opt/jboss/wildfly/ 5 days ago distracted_franklin
cd8dad2b1e22 c345720579e0 "/bin/sh -c '#(nop) 5 days ago cocky_blackwell

There are lots of containers that are using the “arungupta/wildfly-centos” image, but none of them seems to be running.
If there are any containers that are running, then you need to stop them first:

docker rm $(docker stop $(docker ps -q))

Remove the containers that are using this image as:

docker ps -a | grep arungupta/wildfly-centos | awk '{print $1}' | xargs docker rm
bafc2b3327a4
bfe71d92a612
e1c0965d202c
ff10b83d6c17
b2774b17460c
2d64f4eb8fb9
ac6f29b92c7a
31b9c4df0633

The criterion here is specified as a grep pattern. The docker ps command has other options to specify criteria as well, such as only the latest created containers or containers in a particular status. For example, containers that exited with status -1 can be seen as:

~> docker ps -a -f "exited=-1"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
68aca76aa690 wildfly-centos:latest "/opt/jboss/wildfly/ 39 minutes ago Exited (-1) 37 minutes ago insane_yonath

All containers, as opposed to only those meeting a specific criterion, can be removed as:

docker rm $(docker ps -aq)

And now the image can be easily removed as:

~> docker rmi e4f1dbdff189
Untagged: arungupta/wildfly-centos:latest
Deleted: e4f1dbdff18956621aa48a83e5b05df309ee002c3668fa452f1235465d881020
Deleted: ad2899e176a2e73acbcf61909426786eaa195fcea7fb0aa27061431a3aae6633

Just like removing all containers, all images can be removed as:

docker rmi $(docker images -q)

Enjoy!

Reference: Remove Docker image and container with a criteria from our JCG partner Arun Gupta at the Miles to go 2.0 … blog....
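The grep/awk/xargs pipeline above is worth dissecting. The sketch below replays it against a canned docker ps -a output (the IDs and names are made up), substituting echo for docker rm so it runs without a Docker daemon:

```shell
# Canned output standing in for `docker ps -a` (IDs and images are made up).
ps_output='bafc2b3327a4 arungupta/wildfly-centos:latest "/opt/jboss/wildfly/ 4 days ago
ddc400c26f1a mysql:latest "/entrypoint.sh mysq 5 days ago
ff10b83d6c17 arungupta/wildfly-centos:latest "/opt/jboss/wildfly/ 5 days ago'

# Keep only rows whose image matches the criterion, cut out column 1 (the
# container ID), and hand the IDs to the removal command -- `echo` here,
# `xargs docker rm` in real life.
echo "$ps_output" | grep 'arungupta/wildfly-centos' | awk '{print $1}' | xargs echo
```

Swapping `echo` for `docker rm` at the end gives exactly the command from the article.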

Beyond the JAX-RS spec: Apache CXF search extension

In today’s post we are going to look beyond the JAX-RS 2.0 specification and explore the useful extensions which Apache CXF, one of the popular JAX-RS 2.0 implementations, offers to developers of REST services and APIs. In particular, we are going to talk about the search extension, using a subset of the OData 2.0 query filters. In a nutshell, the search extension just maps some kind of filter expression to a set of matching typed entities (instances of Java classes). The OData 2.0 query filters may be very complex; however, at the moment Apache CXF supports only a subset of them:

Operator | Description | Example
eq | Equal | city eq 'Redmond'
ne | Not equal | city ne 'London'
gt | Greater than | price gt 20
ge | Greater than or equal | price ge 10
lt | Less than | price lt 20
le | Less than or equal | price le 100
and | Logical and | price le 200 and price gt 3.5
or | Logical or | price le 3.5 or price gt 200

Basically, to configure and activate the search extension for your JAX-RS services it is enough to define two properties, search.query.parameter.name and search.parser, plus one additional provider, SearchContextProvider:

@Configuration
public class AppConfig {
    @Bean( destroyMethod = "shutdown" )
    public SpringBus cxf() {
        return new SpringBus();
    }

    @Bean @DependsOn( "cxf" )
    public Server jaxRsServer() {
        final Map< String, Object > properties = new HashMap< String, Object >();
        properties.put( "search.query.parameter.name", "$filter" );
        properties.put( "search.parser", new ODataParser< Person >( Person.class ) );

        final JAXRSServerFactoryBean factory = RuntimeDelegate.getInstance()
            .createEndpoint( jaxRsApiApplication(), JAXRSServerFactoryBean.class );
        factory.setProvider( new SearchContextProvider() );
        factory.setProvider( new JacksonJsonProvider() );
        factory.setServiceBeans( Arrays.< Object >asList( peopleRestService() ) );
        factory.setAddress( factory.getAddress() );
        factory.setProperties( properties );

        return factory.create();
    }

    @Bean
    public JaxRsApiApplication jaxRsApiApplication() {
        return new JaxRsApiApplication();
    }

    @Bean
    public PeopleRestService peopleRestService() {
        return new PeopleRestService();
    }
}

The search.query.parameter.name property defines the name of the query string parameter used as a filter (we set it to $filter), while search.parser defines the parser to be used to parse the filter expression (we set it to ODataParser parametrized with the Person class). The ODataParser is built on top of the excellent Apache Olingo project, which currently implements the OData 2.0 protocol (support for OData 4.0 is on the way). Once the configuration is done, any JAX-RS 2.0 service is able to benefit from search capabilities by injecting the contextual parameter SearchContext. Let us take a look at that in action by defining a REST service to manage people, represented by the following class Person:

public class Person {
    private String firstName;
    private String lastName;
    private int age;

    // Setters and getters here
}

The PeopleRestService allows creating new persons using HTTP POST and performing searches using HTTP GET, listed under the /search endpoint:

package com.example.rs;

import java.util.ArrayList;
import java.util.Collection;

import javax.ws.rs.FormParam;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

import org.apache.cxf.jaxrs.ext.search.SearchCondition;
import org.apache.cxf.jaxrs.ext.search.SearchContext;

import com.example.model.Person;

@Path( "/people" )
public class PeopleRestService {
    private final Collection< Person > people = new ArrayList<>();

    @Produces( { MediaType.APPLICATION_JSON } )
    @POST
    public Response addPerson( @Context final UriInfo uriInfo,
            @FormParam( "firstName" ) final String firstName,
            @FormParam( "lastName" ) final String lastName,
            @FormParam( "age" ) final int age ) {
        final Person person = new Person( firstName, lastName, age );
        people.add( person );
        return Response
            .created( uriInfo.getRequestUriBuilder().path( "/search" )
                .queryParam( "$filter=firstName eq '{firstName}' and lastName eq '{lastName}' and age eq {age}" )
                .build( firstName, lastName, age ) )
            .entity( person ).build();
    }

    @GET
    @Path("/search")
    @Produces( { MediaType.APPLICATION_JSON } )
    public Collection< Person > findPeople( @Context SearchContext searchContext ) {
        final SearchCondition< Person > filter = searchContext.getCondition( Person.class );
        return filter.findAll( people );
    }
}

The findPeople method is the one we are looking for. Thanks to all the heavy lifting which Apache CXF does, the method looks very simple: the SearchContext is injected and the filter expression is automatically picked up from the $filter query string parameter. The last part is to apply the filter to the data, which in our case is just a collection named people. Very clean and straightforward. Let us build the project and run it:

mvn clean package
java -jar target/cxf-search-extension-0.0.1-SNAPSHOT.jar

Using the awesome curl tool, let us issue a couple of HTTP POST requests to generate some data to run the filter queries against:

> curl http://localhost:8080/rest/api/people -X POST -d "firstName=Tom&lastName=Knocker&age=16"
{ "firstName": "Tom", "lastName": "Knocker", "age": 16 }

> curl http://localhost:8080/rest/api/people -X POST -d "firstName=Bob&lastName=Bobber&age=23"
{ "firstName": "Bob", "lastName": "Bobber", "age": 23 }

> curl http://localhost:8080/rest/api/people -X POST -d "firstName=Tim&lastName=Smith&age=50"
{ "firstName": "Tim", "lastName": "Smith", "age": 50 }

With sample data in place, let us go ahead and come up with a couple of different search criteria, complicated enough to show off the power of the OData 2.0 query filters:

find all persons whose first name is Bob ($filter=“firstName eq ‘Bob'”)

> curl -G -X GET http://localhost:8080/rest/api/people/search --data-urlencode $filter="firstName eq 'Bob'"
[ { "firstName": "Bob", "lastName": "Bobber", "age": 23 } ]

find all persons whose last name is Bobber, or whose last name is Smith and first name is not Bob ($filter=“lastName eq ‘Bobber’ or (lastName eq ‘Smith’ and firstName ne ‘Bob’)”)

> curl -G -X GET http://localhost:8080/rest/api/people/search --data-urlencode $filter="lastName eq 'Bobber' or (lastName eq 'Smith' and firstName ne 'Bob')"
[ { "firstName": "Bob", "lastName": "Bobber", "age": 23 }, { "firstName": "Tim", "lastName": "Smith", "age": 50 } ]

find all persons whose first name starts with the letter T and who are 16 or older ($filter=“firstName eq ‘T*’ and age ge 16″)

> curl -G -X GET http://localhost:8080/rest/api/people/search --data-urlencode $filter="firstName eq 'T*' and age ge 16"
[ { "firstName": "Tom", "lastName": "Knocker", "age": 16 }, { "firstName": "Tim", "lastName": "Smith", "age": 50 } ]

Note: if you run these commands in a Linux-like environment, you may need to escape the $ sign using \$ instead, for example:

curl -X GET -G http://localhost:8080/rest/api/people/search --data-urlencode \$filter="firstName eq 'Bob'"

At the moment, Apache CXF offers just basic support for OData 2.0 query filters, with many powerful expressions left aside. However, there is a commitment to push it forward once the community expresses enough interest in using this feature. It is worth mentioning that OData 2.0 query filters are not the only option available. The search extension also supports FIQL (the Feed Item Query Language), and this great article from one of the core Apache CXF developers is a great introduction to it. I think this quite useful feature of Apache CXF can save a lot of your time and effort by providing simple (and not so simple) search capabilities to your JAX-RS 2.0 services. Please give it a try if it fits your application needs.

The complete project source code is available on Github.

Reference: Beyond the JAX-RS spec: Apache CXF search extension from our JCG partner Andrey Redko at the Andriy Redko {devmind} blog....
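What filter.findAll does over the people collection can be pictured with plain java.util.function.Predicate composition. This is only an illustration of the filter semantics on the article's sample data (the class and method names are made up, this is not CXF's actual implementation, and it assumes a recent JDK for records):

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class FilterSketch {
    record Person(String firstName, String lastName, int age) {}

    // Roughly the condition "lastName eq 'Bobber' or age ge 50", expressed as
    // predicate composition -- the same shape a parsed filter tree ends up in.
    static List<String> filteredNames() {
        List<Person> people = List.of(
                new Person("Tom", "Knocker", 16),
                new Person("Bob", "Bobber", 23),
                new Person("Tim", "Smith", 50));

        Predicate<Person> filter =
                ((Predicate<Person>) p -> p.lastName().equals("Bobber"))
                        .or(p -> p.age() >= 50);

        // Apply the composed condition to the in-memory collection.
        return people.stream()
                .filter(filter)
                .map(p -> p.firstName() + " " + p.lastName())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        filteredNames().forEach(System.out::println);
    }
}
```

The search extension's value is that it builds such a condition for you from the `$filter` string, instead of you hand-coding predicates per query.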

Continuous Deployment: Implementation

This article is part of the Continuous Integration, Delivery and Deployment series. The previous post described several Continuous Deployment strategies. In this one we will attempt to provide one possible solution for reliable, fast and automatic continuous deployment, with the ability to test new releases before they become available to general users. If something goes wrong, we should be able to roll back easily. On top of that, we’ll try to accomplish zero-downtime. No matter how many times we deploy our applications, there should never be a single moment when they are not operational. To summarize, our goals are:

to deploy on every commit or as often as needed
to be fast
to be automated
to be able to roll back
to have zero-downtime

Setting up the stage

Let’s set up the technological part of the story. The application will be deployed as a Docker container. Docker is an open source platform that can be used to build, ship and run distributed applications. While Docker can be deployed on any operating system, my preference is to use CoreOS. It is a Linux distribution that provides features needed to run modern architecture stacks. An advantage CoreOS has over others is that it is very lightweight. It has only a few tools, and they are just those that we need for continuous deployment. We’ll use Vagrant to create a virtual machine with CoreOS. Two specifically useful tools that come pre-installed on CoreOS are etcd (a key-value store for shared configuration and service discovery) and systemd (a suite of system management daemons, libraries and utilities). We’ll use nginx as our reverse proxy server. Its templates will be maintained by confd, which is designed to manage application configuration files using templates and data from etcd. Finally, as an example application we’ll deploy (many times) the BDD Assistant. It can be used as a helper tool for BDD development and testing.
The reason for including it is that we’ll need a full-fledged application that can be used to demonstrate the deployment strategy we’re about to explore. I’m looking for early adopters of the application. If you’re interested, please contact me and I’ll provide all the help you might need.

CoreOS

If you do not already have an instance of CoreOS up and running, the continuous-deployment repository contains a Vagrantfile that can be used to bring one up. Please clone that repo or download and unpack the ZIP file. To run the OS, please install Vagrant and run the following command from the directory with the cloned (or unpacked) repository:

vagrant up

Once creation and startup of the VM is finished, we can enter CoreOS using:

vagrant ssh

From now on you should be inside CoreOS.

Docker

We’ll use the BDD Assistant as an example simulation of Continuous Deployment. A container with the application is created on every commit made to the BDD Assistant repo. For now we’ll run it directly with Docker. Further on we’ll refine the deployment to be more resilient. Once the command below is executed, it will start downloading the container images. The first run might take a while. The good news is that images are cached, so later on it will update very fast when there is a new version and run in a matter of seconds.

# Run container technologyconversationsbdd and expose port 9000
docker run --name bdd_assistant -d -p 9000:9000 vfarcic/technologyconversationsbdd

It might take a while until all Docker images are downloaded for the first time. From there on, starting and stopping the service is very fast. To see the result, open http://localhost:9000/ in your browser. That was easy. With one command we downloaded a fully operational application with an AngularJS front-end, Play! web server, REST API, etc. The container itself is self-sufficient and immutable. A new release would be a whole new container.
There’s nothing to configure (except the port the application is running on) and nothing to update when a new release is made. It simply works.

etcd

Let’s move on to etcd:

etcd &

From now on, we can use it to store and retrieve information we need. As an example, we can store the port BDD Assistant is running on. That way, any application that needs to be integrated with it can retrieve the port and, for example, use it to invoke the application API.

# Set value for a given key
etcdctl set /bdd-assistant/port 9000
# Retrieve stored value
etcdctl get /bdd-assistant/port

That was a very simple (and fast) way to store any key/value that we might need. It will come in handy very soon.

nginx

At the moment, our application is running on port 9000. Instead of opening localhost:9000 (or whatever port it’s running on), it would be better if it simply ran on localhost. We can use an nginx reverse proxy to accomplish that. This time we won’t call Docker directly but run it as a service through systemd.

# Create directories for configuration files
sudo mkdir -p /etc/nginx/{sites-enabled,certs-enabled}
# Create directories for logs
sudo mkdir -p /var/log/nginx
# Copy nginx service
sudo cp /vagrant/nginx.service /etc/systemd/system/nginx.service
# Enable nginx service
sudo systemctl enable /etc/systemd/system/nginx.service

The nginx.service file tells systemd what to do when we want to start, stop or restart some service. In our case, the service is created using the Docker nginx container. Let’s start the nginx service (the first time it might take a while to pull the Docker image).

# Start nginx service
sudo systemctl start nginx.service
# Check whether nginx is running as a Docker container
docker ps

As you can see, nginx is running as a Docker container. Let’s stop it.

# Stop nginx service
sudo systemctl stop nginx.service
# Check whether nginx is running as a Docker container
docker ps

Now it has disappeared from the Docker processes. It’s as easy as that.
We can start and stop any Docker container in no time (assuming that images were already downloaded). We’ll need nginx up and running for the rest of the article, so let’s start it up again:

sudo systemctl start nginx.service

confd

We need something to tell our nginx what port to redirect to when BDD Assistant is requested. We’ll use confd for that. Let’s set it up.

# Download confd
wget -O confd https://github.com/kelseyhightower/confd/releases/download/v0.6.3/confd-0.6.3-linux-amd64
# Put it to the bin directory so that it is easily accessible
sudo cp confd /opt/bin/.
# Give it execution permissions
sudo chmod +x /opt/bin/confd

The next step is to configure confd to modify nginx routes and reload them every time we deploy our application.

# Create configuration and templates directories
sudo mkdir -p /etc/confd/{conf.d,templates}
# Copy configuration
sudo cp /vagrant/bdd_assistant.toml /etc/confd/conf.d/.
# Copy template
sudo cp /vagrant/bdd_assistant.conf.tmpl /etc/confd/templates/.

Both bdd_assistant.toml and bdd_assistant.conf.tmpl are in the repo you already downloaded. Let’s see how it works.

sudo confd -onetime -backend etcd -node 
cat /etc/nginx/sites-enabled/bdd_assistant.conf
wget localhost; cat index.html

We just updated the nginx template to use the port previously set in etcd. Now you can open http://localhost:8000/ in your browser (Vagrant is set to expose the default port 80 as 8000). Even though the application is running on port 9000, we set up nginx to redirect requests from the default port 80 to port 9000. Let’s stop and remove the BDD Assistant container. We’ll create it again using all the tools we have seen by now.

docker stop bdd_assistant
docker rm bdd_assistant
docker ps

BDD Assistant Deployer

Now that you are familiar with the tools, it’s time to tie them all together. We will practice Blue-Green Deployment. That means that we will have one release up and running (blue). When a new release (green) is deployed, it will run in parallel.
Once it’s up and running, nginx will redirect all requests to it instead of to the old one. Each consecutive release will follow the same process: deploy over blue, redirect requests from green to blue, deploy over green, redirect requests from blue to green, etc. Rollbacks will be easy to do; we would just need to change the reverse proxy. There will be zero downtime, since the new release will be up and running before we start redirecting requests. Everything will be fully automated and very fast. With all that in place, we’ll be able to deploy as often as we want (preferably on every commit to the repository).

sudo cp /vagrant/bdd_assistant.service /etc/systemd/system/bdd_assistant_blue@9001.service
sudo cp /vagrant/bdd_assistant.service /etc/systemd/system/bdd_assistant_green@9002.service
sudo systemctl enable /etc/systemd/system/bdd_assistant_blue@9001.service
sudo systemctl enable /etc/systemd/system/bdd_assistant_green@9002.service
# sudo systemctl daemon-reload
etcdctl set /bdd-assistant/instance none
sudo chmod 744 /vagrant/deploy_bdd_assistant.sh
sudo cp /vagrant/deploy_bdd_assistant.sh /opt/bin/.

We just created two BDD Assistant services: blue and green. Each of them will run on a different port (9001 and 9002) and store the relevant information in etcd. deploy_bdd_assistant.sh is a simple script that starts the new service, updates the nginx template using confd and, finally, stops the old service. Both the BDD Assistant service and deploy_bdd_assistant.sh are available in the repo you already downloaded. Let’s try it out.

sudo deploy_bdd_assistant.sh

A new release will be deployed each time we run the script deploy_bdd_assistant.sh. We can confirm that by checking what value is stored in etcd, looking at the Docker processes and, finally, running the application in the browser.

docker ps
etcdctl get /bdd-assistant/port

The Docker process should change from running the blue deployment on port 9001 to running the green one on port 9002 and the other way around.
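The toggling just described is the heart of deploy_bdd_assistant.sh: look up which color is currently live, deploy the other one, and only then record the switch. The sketch below simulates that decision in plain Java (the Map stands in for etcd; names and ports mirror the article, but this is an illustration, not the actual script — in the real script the numbered steps are shell commands such as systemctl and confd invocations):

```java
import java.util.HashMap;
import java.util.Map;

public class BlueGreenToggle {
    // Stands in for the etcd key/value store used in the article.
    static Map<String, String> etcd = new HashMap<>();

    // The color the next deployment should use: whichever one is NOT live.
    static String nextInstance() {
        String current = etcd.getOrDefault("/bdd-assistant/instance", "none");
        return "blue".equals(current) ? "green" : "blue";
    }

    static int portFor(String instance) {
        return "blue".equals(instance) ? 9001 : 9002;
    }

    static String deploy() {
        String target = nextInstance();
        // 1. start the target container, 2. verify it, 3. re-render nginx via confd,
        // 4. only then record the switch and stop the old container.
        etcd.put("/bdd-assistant/instance", target);
        etcd.put("/bdd-assistant/port", String.valueOf(portFor(target)));
        return target;
    }

    public static void main(String[] args) {
        etcd.put("/bdd-assistant/instance", "none");
        System.out.println(deploy()); // blue
        System.out.println(deploy()); // green
        System.out.println(deploy()); // blue again
    }
}
```

Because the switch is recorded last, a failed deployment leaves the live color untouched, which is what makes rollbacks trivial.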
The port stored in etcd should be changing from 9001 to 9002 and vice versa. Whichever version is deployed, http://localhost:8000/ will always be working in your browser, no matter whether we are in the middle of a deployment or have already finished it. Repeat the execution of the script deploy_bdd_assistant.sh as many times as you like. It should always deploy the latest version.

For the sake of brevity, I excluded deployment verification from this article. In the “real world”, after the new container is run and before the reverse proxy is set to point to it, we should run all sorts of tests (functional, integration and stress) that would validate that the changes to the code are correct.

Continuous Delivery and Deployment

The process described above should be tied to your CI/CD server (Jenkins, Bamboo, GoCD, etc). One possible Continuous Delivery procedure would be:

1. Commit the code to VCS (Git, SVN, etc)
2. Run all static analysis
3. Run all unit tests
4. Build the Docker container
5. Deploy to the test environment
   - Run the container with the new version
   - Run automated functional, integration (i.e. BDD) and stress tests
   - Perform manual tests
   - Change the reverse proxy to point to the new container
6. Deploy to the production environment
   - Run the container with the new version
   - Run automated functional, integration (i.e. BDD) and stress tests
   - Change the reverse proxy to point to the new container

Ideally, there should be no manual tests, and in that case point 5 is not necessary. We would have Continuous Deployment that automatically deploys every single commit that passed all tests to production. If manual verification is unavoidable, we have Continuous Delivery to test environments, and software would be deployed to production at the click of a button inside the CI/CD server we’re using.
Summary

No matter whether we choose continuous delivery or deployment, when our process is completely automated (from build through tests until the deployment itself), we can spend time working on things that bring more value while letting scripts do the work for us. Time to market should decrease drastically, since we can have features available to users as soon as the code is committed to the repository. It’s a very powerful and valuable concept.

In case of any trouble following the exercises, you can skip them and go directly to running the deploy_bdd_assistant.sh script. Just remove the comments (#) from the Vagrantfile. If the VM is already up and running, destroy it.

vagrant destroy

Create a new VM and run the deploy_bdd_assistant.sh script.

vagrant up
vagrant ssh
sudo deploy_bdd_assistant.sh

Hopefully you can see the value in Docker. It’s a game changer when compared to more traditional ways of building and deploying software. New doors have been opened for us and we should step through them. BDD Assistant and its deployment with Docker can be made even better. We could split the application into smaller microservices. It could, for example, have the front-end as a separate container. The back-end could be split into smaller services (stories management, stories runner, etc). Those microservices can be deployed to the same or different machines and orchestrated with Fleet. Microservices will be the topic of the next article.

Reference: Continuous Deployment: Implementation from our JCG partner Viktor Farcic at the Technology conversations blog....

Do You Really Understand SQL’s GROUP BY and HAVING clauses?

There are some things in SQL that we simply take for granted without thinking about them properly. One of these things is the GROUP BY clause, along with the less popular HAVING clause. Let’s look at a simple example. For this example, we’ll reiterate the example database we’ve seen in the previous article about the awesome LEAD(), LAG(), FIRST_VALUE(), LAST_VALUE() functions:

CREATE TABLE countries (
  code CHAR(2) NOT NULL,
  year INT NOT NULL,
  gdp_per_capita DECIMAL(10, 2) NOT NULL,
  govt_debt DECIMAL(10, 2) NOT NULL
);

Before there were window functions, aggregations were made only with GROUP BY. A typical question that we could ask our database using SQL is:

What are the top 3 average government debts in percent of the GDP for those countries whose GDP per capita was over 40’000 dollars in every year in the last four years?

Whew. Some (academic) business requirements. In SQL (PostgreSQL dialect), we would write:

select code, avg(govt_debt)
from countries
where year > 2010
group by code
having min(gdp_per_capita) >= 40000
order by 2 desc
limit 3

Or, with inline comments:

-- The average government debt
select code, avg(govt_debt)
-- for those countries
from countries
-- in the last four years
where year > 2010
-- yepp, for the countries
group by code
-- whose GDP p.c. was over 40'000 in every year
having min(gdp_per_capita) >= 40000
-- The top 3
order by 2 desc
limit 3

The result being:

code avg
------------
JP 193.00
US 91.95
DE 56.00

Remember the 10 easy steps to a complete understanding of SQL:

1. FROM generates the data set
2. WHERE reduces the generated data set
3. GROUP BY aggregates the reduced data set
4. HAVING reduces the aggregated data set
5. SELECT transforms the reduced aggregated data set
6. ORDER BY sorts the transformed data set
7. LIMIT .. OFFSET frames the sorted data set

… where LIMIT .. OFFSET may come in very different flavours.

The empty GROUP BY clause

A very special case of GROUP BY is the explicit or implicit empty GROUP BY clause.
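Incidentally, the logical evaluation order listed above can be mimicked step for step with Java streams. The sketch below is only an analogy (it is not SQL semantics, and the sample rows are made up), but it makes each clause of the query above explicit as a pipeline stage:

```java
import java.util.*;
import java.util.stream.*;

public class GroupByHaving {
    record Country(String code, int year, double gdpPerCapita, double govtDebt) {}

    static List<Map.Entry<String, Double>> top3AvgDebt(List<Country> countries) {
        return countries.stream()                                   // FROM countries
            .filter(c -> c.year() > 2010)                           // WHERE year > 2010
            .collect(Collectors.groupingBy(Country::code))          // GROUP BY code
            .entrySet().stream()
            .filter(e -> e.getValue().stream()                      // HAVING min(gdp_per_capita) >= 40000
                .mapToDouble(Country::gdpPerCapita).min().orElse(0) >= 40000)
            .map(e -> Map.entry(e.getKey(),                         // SELECT code, avg(govt_debt)
                e.getValue().stream().mapToDouble(Country::govtDebt).average().orElse(0)))
            .sorted(Map.Entry.<String, Double>comparingByValue().reversed()) // ORDER BY 2 DESC
            .limit(3)                                               // LIMIT 3
            .toList();
    }

    public static void main(String[] args) {
        List<Country> rows = List.of(   // made-up sample data
            new Country("JP", 2011, 46000, 210), new Country("JP", 2012, 46500, 220),
            new Country("US", 2011, 50000, 95),  new Country("US", 2012, 51700, 100),
            new Country("GB", 2011, 39000, 80),  new Country("GB", 2012, 39500, 85));
        top3AvgDebt(rows).forEach(e ->
            System.out.println(e.getKey() + " " + e.getValue()));
    }
}
```

Note how GB drops out at the HAVING stage: its minimum GDP per capita is below 40000, even though individual rows survived the WHERE stage.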
Here’s a question that we could ask our database: Are there any countries at all with a GDP per capita of more than 50’000 dollars? And in SQL, we’d write:

select true answer
from countries
having max(gdp_per_capita) >= 50000

The result being:

answer
------
t

You could of course have used the EXISTS clause instead (please don’t use COUNT(*) in these cases):

select exists(
  select 1
  from countries
  where gdp_per_capita >= 50000
);

And we would get, again:

answer
------
t

… but let’s focus on the plain HAVING clause. Not everyone knows that HAVING can be used all by itself, or what it even means to have HAVING all by itself. The SQL 1992 standard already allowed for the use of HAVING without GROUP BY, but it wasn’t until the introduction of GROUPING SETS in SQL:1999 that the semantics of this syntax were retroactively and unambiguously defined:

7.10 <having clause>

<having clause> ::= HAVING <search condition>

Syntax Rules

1) Let HC be the <having clause>. Let TE be the <table expression> that immediately
   contains HC. If TE does not immediately contain a <group by clause>, then
   GROUP BY ( ) is implicit.

That’s interesting. There is an implicit GROUP BY ( ) if we leave out the explicit GROUP BY clause.
If you’re willing to delve into the SQL standard a bit more, you’ll find:

<group by clause> ::= GROUP BY <grouping specification>

<grouping specification> ::=
    <grouping column reference>
  | <rollup list>
  | <cube list>
  | <grouping sets list>
  | <grand total>
  | <concatenated grouping>

<grouping set> ::=
    <ordinary grouping set>
  | <rollup list>
  | <cube list>
  | <grand total>

<grand total> ::= <left paren> <right paren>

So, GROUP BY ( ) is essentially grouping by a “grand total”, which is what’s intuitively happening if we just look for the highest ever GDP per capita:

select max(gdp_per_capita)
from countries;

Which yields:

max
--------
52409.00

The above query is also implicitly the same as this one (which isn’t supported by PostgreSQL):

select max(gdp_per_capita)
from countries
group by ();

The awesome GROUPING SETs

In this section of the article, we’ll be leaving PostgreSQL land and entering SQL Server land, as PostgreSQL shamefully doesn’t implement any of the following (yet). Now, we cannot understand the grand total (the empty GROUP BY ( ) clause) without having a short look at the SQL:1999 standard GROUPING SETS. Some of you may have heard of the CUBE() or ROLLUP() grouping functions, which are just syntactic sugar for commonly used GROUPING SETS. Let’s try to answer this question in a single query: What are the highest GDP per capita values per year OR per country? In SQL, we’ll write:

select code, year, max(gdp_per_capita)
from countries
group by grouping sets ((code), (year))

Which yields two concatenated sets of records:

code year max
------------------------
NULL 2009 46999.00 <- grouped by year
NULL 2010 48358.00
NULL 2011 51791.00
NULL 2012 52409.00

CA   NULL 52409.00 <- grouped by code
DE   NULL 44355.00
FR   NULL 42578.00
GB   NULL 38927.00
IT   NULL 36988.00
JP   NULL 46548.00
RU   NULL 14091.00
US   NULL 51755.00

That’s kind of nice, isn’t it?
It’s essentially just the same thing as this query with UNION ALL:

select code, null, max(gdp_per_capita)
from countries
group by code
union all
select null, year, max(gdp_per_capita)
from countries
group by year;

In fact, it’s exactly the same thing, as the latter explicitly concatenates two sets of grouped records… i.e. two GROUPING SETS. This SQL Server documentation page also explains it very nicely.

And the most powerful of them all: CUBE()

Now, imagine you’d like to add the “grand total”, and also the highest value per country AND year, producing four different concatenated sets. To limit the results, we’ll also filter out GDPs of less than 48000 for this example:

select code, year, max(gdp_per_capita),
  grouping_id(code, year) grp
from countries
where gdp_per_capita >= 48000
group by grouping sets (
  (),
  (code),
  (year),
  (code, year)
)
order by grp desc;

This nice-looking query will now produce all the possible grouping combinations that we can imagine, including the grand total:

code year max      grp
---------------------------------
NULL NULL 52409.00 3 <- grand total

NULL 2012 52409.00 2 <- group by year
NULL 2010 48358.00 2
NULL 2011 51791.00 2

CA   NULL 52409.00 1 <- group by code
US   NULL 51755.00 1

US   2010 48358.00 0 <- group by code and year
CA   2012 52409.00 0
US   2012 51755.00 0
CA   2011 51791.00 0
US   2011 49855.00 0

And because this is quite a common operation in reporting and in OLAP, we can simply write the same by using the CUBE() function:

select code, year, max(gdp_per_capita),
  grouping_id(code, year) grp
from countries
where gdp_per_capita >= 48000
group by cube(code, year)
order by grp desc;

Compatibility

While the first couple of queries also worked on PostgreSQL, the ones that are using GROUPING SETS will work only on 4 out of the 17 RDBMS currently supported by jOOQ. These are:

DB2
Oracle
SQL Server
Sybase SQL Anywhere

jOOQ also fully supports the previously mentioned syntaxes.
The GROUPING SETS variant can be written as such:

// Countries is an object generated by the jOOQ
// code generator for the COUNTRIES table.
Countries c = COUNTRIES;

ctx.select(
      c.CODE,
      c.YEAR,
      max(c.GDP_PER_CAPITA),
      groupingId(c.CODE, c.YEAR).as("grp"))
   .from(c)
   .where(c.GDP_PER_CAPITA.ge(new BigDecimal("48000")))
   .groupBy(groupingSets(new Field[][] {
       {},
       { c.CODE },
       { c.YEAR },
       { c.CODE, c.YEAR }
   }))
   .orderBy(fieldByName("grp").desc())
   .fetch();

… or the CUBE() version:

ctx.select(
      c.CODE,
      c.YEAR,
      max(c.GDP_PER_CAPITA),
      groupingId(c.CODE, c.YEAR).as("grp"))
   .from(c)
   .where(c.GDP_PER_CAPITA.ge(new BigDecimal("48000")))
   .groupBy(cube(c.CODE, c.YEAR))
   .orderBy(fieldByName("grp").desc())
   .fetch();

… and in the future, we’ll emulate GROUPING SETS with their equivalent UNION ALL queries in those databases that do not natively support GROUPING SETS.

Reference: Do You Really Understand SQL’s GROUP BY and HAVING clauses? from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....

Manipulating JARs, WARs, and EARs on the Command Line

Although Java IDEs and numerous graphical tools make it easier than ever to view and manipulate the contents of Java archive (JAR, WAR, and EAR) files, there are times when I prefer to use the command-line jar command to accomplish these tasks. This is particularly true when I have to do something repeatedly or when I am doing it as part of a script. In this post, I look at the use of the jar command to create, view, and manipulate Java archive files. JAR files will be used primarily in this post, but the commands I demonstrate on .jar files work consistently with .war and .ear files. It’s worth keeping in mind that the JAR format is based on the ZIP format, and so the numerous tools available for working with ZIP files can be applied to JAR, WAR, and EAR files. It’s also worth keeping in mind that the jar options tend to mirror tar’s options.

For my examples, I want to jar up and work with some .class files. The next screen snapshot demonstrates compiling some Java source code files (.java files) into .class files. The actual source of these files is insignificant to this discussion and is not shown here. I have shown compiling these without an IDE to be consistent with using command-line tools in this post.

Preparing the Files to Be Used in the jar Examples

The next screen snapshot shows my .class files have been compiled and are ready to be included in a JAR.

Creating a JAR File

The “c” option provided to the jar command instructs it to create an archive. I like to use the “v” (verbose) and “f” (filename) options with all jar commands that I run so that the output will be verbose (to help see that something is happening and that it’s the correct thing that’s happening) and so that the applicable JAR/WAR/EAR filename can be provided as part of the command rather than reading from or writing to standard input and standard output.
In the case of creating a JAR file, the options “cvf” will create a JAR file (c) with the specified name (f) and print out verbose output (v) regarding this creation. The next screen snapshot demonstrates the simplest use of jar cvf. I have changed my current directory to the “classes” directory so that creating the JAR is as simple as running jar cvf MyClasses.jar * (or jar cvf MyClasses.jar .), and all files in the current directory, all subdirectories, and all files in subdirectories will be included in the created JAR file. This process is demonstrated in the next screen snapshot.

If I don’t want to explicitly change my current directory to the most appropriate directory from which to build the JAR before running jar, I can use the -C option to instruct jar to implicitly do this as part of its creation process. This is demonstrated in the next screen snapshot.

Listing an Archive’s Contents

Listing (or viewing) the contents of a JAR, WAR, or EAR file is probably the function I perform most with the jar command. I typically use the options “t” (list contents of archive), “v” (verbose), and “f” (filename specified on command line) for this. The next screen snapshot demonstrates running jar tvf MyClasses.jar to view the contents of my generated JAR file.

Extracting Contents of an Archive File

It is sometimes desirable to extract one or many of the files contained in an archive file to work on or view the contents of these individual files. This is done using jar’s “x” (for extract) option. The next screen snapshot demonstrates using jar xvf MyClasses.jar to extract all the contents of that JAR file. Note that the original JAR file is left intact, but its contents are all now available directly as well.

I often only need to view or work with one or two files of the archive file. Although I could definitely extract all of them as shown in the last example and only edit those I need to edit, I prefer to extract only the files I need if the number of them is small. This is easily done with the same jar xvf command.
By specifying the fully qualified files to extract explicitly after the archive file’s name in the command, I can instruct jar to extract only those specific files. This is advantageous because I don’t fill my directory up with files I don’t care about, and I don’t need to worry about cleaning up as much when I’m done. The next screen snapshot demonstrates running jar xvf MyClasses.jar dustin/examples/jar/GrandParent.class to extract only the single class definition for GrandParent rather than extracting all the files in that JAR.

Updating an Archive File

Previous examples have demonstrated providing the jar command with “c” to create an archive, “t” to list an archive’s contents, and “x” to extract an archive’s contents. Another commonly performed function is to update an existing archive’s contents, and this is accomplished with jar’s “u” option. The next screen snapshot demonstrates creating a text file (in DOS with the copy con command) called tempfile.txt and then using jar uvf MyClasses.jar tempfile.txt to update MyClasses.jar and add tempfile.txt to that JAR.

If I want to update a file in an existing archive, I can extract that file using jar xvf, modify the file as desired, and place it back in the original JAR with the jar uvf command. The new file will overwrite the pre-existing one of the same name. This is simulated in the next screen snapshot.

Deleting an Entry from an Archive File

It is perhaps a little surprising to see no option for deleting entries from a Java archive file when reading the jar man page, the Oracle tools description of jar, or the Java Tutorials coverage of jar. One way to accomplish this is to extract the contents of a JAR, remove the files that are no longer desired, and re-create the JAR from the directories with those files removed. However, a much easier approach is to simply take advantage of Java archiving being based on ZIP and use ZIP-based tools’ deletion capabilities.
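Since a JAR is a ZIP file, the same delete-by-rewrite trick can even be done programmatically with java.util.zip and no external tool. A minimal sketch (file and entry names here are just for illustration) that creates an archive, lists it, and “deletes” an entry by copying everything else to a new file — which is essentially what ZIP tools do under the hood:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;
import java.util.zip.*;

public class JarCrud {
    // Create an archive from a map of entry name -> content.
    static void create(Path jar, Map<String, byte[]> entries) throws IOException {
        try (ZipOutputStream out = new ZipOutputStream(Files.newOutputStream(jar))) {
            for (var e : entries.entrySet()) {
                out.putNextEntry(new ZipEntry(e.getKey()));
                out.write(e.getValue());
                out.closeEntry();
            }
        }
    }

    // List entry names, like "jar tf".
    static List<String> list(Path jar) throws IOException {
        try (ZipFile zf = new ZipFile(jar.toFile())) {
            return zf.stream().map(ZipEntry::getName).sorted().toList();
        }
    }

    // ZIP has no in-place delete: copy every entry except the unwanted one.
    static void delete(Path jar, String name) throws IOException {
        Path tmp = jar.resolveSibling(jar.getFileName() + ".tmp");
        try (ZipFile zf = new ZipFile(jar.toFile());
             ZipOutputStream out = new ZipOutputStream(Files.newOutputStream(tmp))) {
            for (ZipEntry entry : Collections.list(zf.entries())) {
                if (entry.getName().equals(name)) continue;
                out.putNextEntry(new ZipEntry(entry.getName()));
                zf.getInputStream(entry).transferTo(out);
                out.closeEntry();
            }
        }
        Files.move(tmp, jar, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path jar = Files.createTempFile("MyClasses", ".jar");
        create(jar, Map.of("dustin/examples/jar/GrandParent.class", new byte[0],
                           "tempfile.txt", "temp".getBytes()));
        System.out.println(list(jar));
        delete(jar, "tempfile.txt");
        System.out.println(list(jar));
    }
}
```

The same approach works for WAR and EAR files, since they share the ZIP container format.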
The next screen snapshot demonstrates using 7-Zip (on Windows) to delete tempfile.txt from MyClasses.jar by running the command 7z d MyClasses.jar tempfile.txt. Note that the same thing can be accomplished on Linux with zip -d MyClasses.jar tempfile.txt. Other ZIP-supporting tools have their own options.

WAR and EAR Files

All of the examples in this post have been against JAR files, but they work with WAR and EAR files as well. As a very simplistic example of this, the next screen snapshot demonstrates using jar uvf to update a WAR file with a new web descriptor. The content of the actual files involved does not matter for purposes of this illustration. The important observation to make is that a WAR file can be manipulated in exactly the same manner as a JAR file. This also applies to EAR files.

Other jar Operations and Options

In this post, I focused on the “CRUD” operations (Create/Read/Update/Delete) and extraction that can be performed from the command line on Java archive files. I typically used the applicable “CRUD” operation command (“c”, “t”, “u”) or extraction command (“x”) in conjunction with the common options “v” (verbose) and “f” (Java archive file name explicitly specified on the command line). The jar command supports operations other than these, such as “M” (controlling Manifest file creation) and “0” (controlling compression). I also did not demonstrate using “i” to generate index information for a Java archive.

Additional Resources on Working with Java Archives

I referenced these previously, but summarize them here for convenience.

Java Tutorials: Packaging Programs in JAR Files
Oracle Tools Documentation on the jar Command
jar man Page

Conclusion

The jar command is relatively easy to use and can be the quickest approach for creating, viewing, and modifying the contents of Java archive files in certain cases.
Familiarity with this command-line tool can pay off from time to time for the Java developer, especially when working on a highly repetitive task or one that involves scripting. IDEs and tools (especially build tools) can help a lot with Java archive file manipulation, but sometimes the “overhead” of these is much greater than what is required when using jar from the command line.Reference: Manipulating JARs, WARs, and EARs on the Command Line from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

The downside of version-less optimistic locking

Introduction

In my previous post I demonstrated how you can scale optimistic locking through write-concern splitting. Version-less optimistic locking is one lesser-known Hibernate feature. In this post I’ll explain both the good and the bad parts of this approach.

Version-less optimistic locking

Optimistic locking is commonly associated with a logical or physical clocking sequence, for both performance and consistency reasons. The clocking sequence points to an absolute entity state version for all entity state transitions. To support legacy database schema optimistic locking, Hibernate added a version-less concurrency control mechanism. To enable this feature you have to configure your entities with the @OptimisticLocking annotation, which takes the following parameters:

Optimistic Locking Type | Description
ALL     | All entity properties are going to be used to verify the entity version
DIRTY   | Only the current dirty properties are going to be used to verify the entity version
NONE    | Disables optimistic locking
VERSION | Surrogate version column optimistic locking

For version-less optimistic locking, you need to choose ALL or DIRTY.

Use case

We are going to rerun the Product update use case I covered in my previous optimistic locking scaling article. The Product entity looks like this:

The first thing to notice is the absence of a surrogate version column. For concurrency control, we’ll use DIRTY properties optimistic locking:

@Entity(name = "product")
@Table(name = "product")
@OptimisticLocking(type = OptimisticLockType.DIRTY)
@DynamicUpdate
public class Product {
    //code omitted for brevity
}

By default, Hibernate includes all table columns in every entity update, therefore reusing cached prepared statements. For dirty properties optimistic locking, the changed columns are included in the update WHERE clause, and that’s the reason for using the @DynamicUpdate annotation. This entity is going to be changed by three concurrent users (e.g.
Alice, Bob and Vlad), each one updating a distinct subset of the entity’s properties, as you can see in the following sequence diagram:

The SQL DML statement sequence goes like this:

#create tables
Query:{[create table product (id bigint not null, description varchar(255) not null, likes integer not null, name varchar(255) not null, price numeric(19,2) not null, quantity bigint not null, primary key (id))][]}
Query:{[alter table product add constraint UK_jmivyxk9rmgysrmsqw15lqr5b unique (name)][]}

#insert product
Query:{[insert into product (description, likes, name, price, quantity, id) values (?, ?, ?, ?, ?, ?)][Plasma TV,0,TV,199.99,7,1]}

#Alice selects the product
Query:{[select optimistic0_.id as id1_0_0_, optimistic0_.description as descript2_0_0_, optimistic0_.likes as likes3_0_0_, optimistic0_.name as name4_0_0_, optimistic0_.price as price5_0_0_, optimistic0_.quantity as quantity6_0_0_ from product optimistic0_ where optimistic0_.id=?][1]}

#Bob selects the product
Query:{[select optimistic0_.id as id1_0_0_, optimistic0_.description as descript2_0_0_, optimistic0_.likes as likes3_0_0_, optimistic0_.name as name4_0_0_, optimistic0_.price as price5_0_0_, optimistic0_.quantity as quantity6_0_0_ from product optimistic0_ where optimistic0_.id=?][1]}

#Vlad selects the product
Query:{[select optimistic0_.id as id1_0_0_, optimistic0_.description as descript2_0_0_, optimistic0_.likes as likes3_0_0_, optimistic0_.name as name4_0_0_, optimistic0_.price as price5_0_0_, optimistic0_.quantity as quantity6_0_0_ from product optimistic0_ where optimistic0_.id=?][1]}

#Alice updates the product
Query:{[update product set quantity=? where id=? and quantity=?][6,1,7]}

#Bob updates the product
Query:{[update product set likes=? where id=? and likes=?][1,1,0]}

#Vlad updates the product
Query:{[update product set description=? where id=?
and description=?][Plasma HDTV,1,Plasma TV]}

Each UPDATE sets the latest changes and expects the current database snapshot to be exactly as it was at entity load time. As simple and straightforward as it may look, the version-less optimistic locking strategy suffers from a very inconvenient shortcoming.

The detached entities anomaly

Version-less optimistic locking is feasible as long as you don’t close the Persistence Context. All entity changes must happen inside an open Persistence Context, with Hibernate translating entity state transitions into database DML statements. Changes to detached entities can only be persisted if the entities become managed again in a new Hibernate Session, and for this we have two options:

- entity merging (using Session#merge(entity))
- entity reattaching (using Session#update(entity))

Both operations require a database SELECT to retrieve the latest database snapshot, so changes will be applied against the latest entity version. Unfortunately, this can also lead to lost updates, as we can see in the following sequence diagram:

Once the original Session is gone, we have no way of including the original entity state in the UPDATE WHERE clause. So newer changes might be overwritten by older ones, and this is exactly what we wanted to avoid in the first place. Let’s replicate this issue for both merging and reattaching.

Merging

The merge operation consists of loading and attaching a new entity object from the database and updating it with the given entity snapshot. Merging is supported by JPA too, and it’s tolerant of already managed Persistence Context entity entries. If there’s an already managed entity, then the select is not going to be issued, as Hibernate guarantees session-level repeatable reads.
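Before looking at the SQL logs, the anomaly can be reduced to a few lines of plain Java (a standalone simulation with illustrative names, no Hibernate involved): a version-less optimistic UPDATE only protects you while the WHERE clause carries the value the writer originally loaded, and merging re-reads that value from the database first, so a stale write passes the check.

```java
public class VersionlessLostUpdate {
    // Simulated database row.
    static double dbPrice = 199.99;

    // Version-less optimistic update: succeeds only if the row still
    // holds the value the writer loaded (the dirty column's old value
    // goes into the WHERE clause).
    static boolean update(double expectedOld, double newPrice) {
        if (dbPrice != expectedOld) return false; // concurrent change -> conflict
        dbPrice = newPrice;
        return true;
    }

    public static void main(String[] args) {
        double alicesDetachedCopy = dbPrice;   // Alice loads, then her Session closes

        boolean bob = update(199.99, 21.22);   // Bob's update succeeds

        // Managed-entity case: Alice's stale write is correctly rejected.
        boolean aliceDirect = update(alicesDetachedCopy, 1.00);

        // Merge case: a fresh copy is SELECTed first, so the WHERE clause now
        // carries Bob's value and Alice silently overwrites him.
        double freshCopy = dbPrice;
        boolean aliceMerge = update(freshCopy, 1.00);

        System.out.println(bob + " " + aliceDirect + " " + aliceMerge + " " + dbPrice);
        // true false true 1.0
    }
}
```

With a surrogate version column, the merge would carry Alice's stale version number into the WHERE clause and her write would fail; without one, there is nothing left to detect the conflict once her Session is gone.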
#Alice inserts a Product and her Session is closed
Query:{[insert into Product (description, likes, name, price, quantity, id) values (?, ?, ?, ?, ?, ?)][Plasma TV,0,TV,199.99,7,1]}

#Bob selects the Product and changes the price to 21.22
Query:{[select optimistic0_.id as id1_0_0_, optimistic0_.description as descript2_0_0_, optimistic0_.likes as likes3_0_0_, optimistic0_.name as name4_0_0_, optimistic0_.price as price5_0_0_, optimistic0_.quantity as quantity6_0_0_ from Product optimistic0_ where optimistic0_.id=?][1]}
OptimisticLockingVersionlessTest - Updating product price to 21.22
Query:{[update Product set price=? where id=? and price=?][21.22,1,199.99]}

#Alice changes the Product price to 1 and tries to merge the detached Product entity
c.v.h.m.l.c.OptimisticLockingVersionlessTest - Merging product, price to be saved is 1
#A fresh copy is going to be fetched from the database
Query:{[select optimistic0_.id as id1_0_0_, optimistic0_.description as descript2_0_0_, optimistic0_.likes as likes3_0_0_, optimistic0_.name as name4_0_0_, optimistic0_.price as price5_0_0_, optimistic0_.quantity as quantity6_0_0_ from Product optimistic0_ where optimistic0_.id=?][1]}
#Alice overwrites Bob, therefore losing an update
Query:{[update Product set price=? where id=? and price=?][1,1,21.22]}

Reattaching

Reattaching is a Hibernate-specific operation. As opposed to merging, the given detached entity must become managed in another Session. If there’s an already loaded entity, Hibernate will throw an exception. This operation also requires an SQL SELECT for loading the current database entity snapshot.
The detached entity state will be copied onto the freshly loaded entity snapshot, and the dirty checking mechanism will trigger the actual DML update:

#Alice inserts a Product and her Session is closed
Query:{[insert into Product (description, likes, name, price, quantity, id) values (?, ?, ?, ?, ?, ?)][Plasma TV,0,TV,199.99,7,1]}

#Bob selects the Product and changes the price to 21.22
Query:{[select optimistic0_.id as id1_0_0_, optimistic0_.description as descript2_0_0_, optimistic0_.likes as likes3_0_0_, optimistic0_.name as name4_0_0_, optimistic0_.price as price5_0_0_, optimistic0_.quantity as quantity6_0_0_ from Product optimistic0_ where optimistic0_.id=?][1]}
OptimisticLockingVersionlessTest - Updating product price to 21.22
Query:{[update Product set price=? where id=? and price=?][21.22,1,199.99]}

#Alice changes the Product price to 10 and tries to reattach the detached Product entity
c.v.h.m.l.c.OptimisticLockingVersionlessTest - Reattaching product, price to be saved is 10
#A fresh copy is going to be fetched from the database
Query:{[select optimistic_.id, optimistic_.description as descript2_0_, optimistic_.likes as likes3_0_, optimistic_.name as name4_0_, optimistic_.price as price5_0_, optimistic_.quantity as quantity6_0_ from Product optimistic_ where optimistic_.id=?][1]}
#Alice overwrites Bob, therefore losing an update
Query:{[update Product set price=? where id=?][10,1]}

Conclusion

Version-less optimistic locking is a viable alternative as long as you can stick to a non-detached entities policy. Combined with extended persistence contexts, this strategy can boost write performance even for a legacy database schema.

Code available on GitHub.

Reference: The downside of version-less optimistic locking from our JCG partner Vlad Mihalcea at the Vlad Mihalcea’s Blog blog....

REST Messages And Data Transfer Objects

In Patterns of Enterprise Application Architecture, Martin Fowler defines a Data Transfer Object (DTO) as:

An object that carries data between processes in order to reduce the number of method calls.

Note that a Data Transfer Object is not the same as a Data Access Object (DAO), although they have some similarities. A Data Access Object is used to hide the details of the underlying persistence layer.

REST Messages Are Serialized DTOs

In a RESTful architecture, the messages sent across the wire are serializations of DTOs. This means all the best practices around DTOs are important to follow when building RESTful systems. For instance, Fowler writes:

…encapsulate the serialization mechanism for transferring data over the wire. By encapsulating the serialization like this, the DTOs keep this logic out of the rest of the code and also provide a clear point to change serialization should you wish.

In other words, you should follow the DRY principle and have exactly one place where you convert your internal DTO to a message that is sent over the wire. In JAX-RS, that one place should be in an entity provider. In Spring, the mechanism to use is the message converter. Note that both frameworks have support for several often-used serialization formats.

Following this advice not only makes it easier to change media types (e.g. from plain JSON or HAL to a more mature media type like Siren, Mason, or UBER). It also makes it easy to support multiple media types. This in turn enables you to switch media types without breaking clients. You can continue to serve old clients with the old media type, while new clients can take advantage of the new media type. Introducing new media types is one way to evolve your REST API when you must make backwards-incompatible changes.

DTOs Are Not Domain Objects

Domain objects implement the ubiquitous language used by subject matter experts and thus are discovered.
DTOs, on the other hand, are designed to meet certain non-functional characteristics, like performance, and are subject to trade-offs. This means the two have very different reasons to change and, following the Single Responsibility Principle, should be separate objects. Blindly serializing domain objects should thus be considered an anti-pattern.

That doesn’t mean you must blindly add DTOs, either. It’s perfectly fine to start with exposing domain objects, e.g. using Spring Data REST, and to introduce DTOs as needed. As always, premature optimization is the root of all evil, and you should decide based on measurements. The point is to keep the difference in mind: don’t change your domain objects to get better performance, but rather introduce DTOs.

DTOs Have No Behavior

A DTO should not have any behavior; its purpose in life is to transfer data between remote systems. This is very different from domain objects. There are two basic approaches for dealing with the data in a DTO. The first is to make them immutable objects, where all the input is provided in the constructor and the data can only be read. This doesn’t work well for large objects, and doesn’t play nicely with serialization frameworks. The better approach is to make all the properties writable. Since a DTO must not have logic, this is one of the few occasions where you can safely make the fields public and omit the getters and setters. Of course, that means some other part of the code is responsible for filling the DTO with combinations of properties that together make sense. Conversely, you should validate DTOs that come back in from the client.

Reference: REST Messages And Data Transfer Objects from our JCG partner Remon Sinnema at the Secure Software Development blog....
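The ideas above can be condensed into a minimal Java sketch. All names here are hypothetical, invented for illustration; the single serialization point stands in for what a JAX-RS entity provider (MessageBodyWriter) or a Spring HttpMessageConverter would do in a real application:

```java
// Hedged sketch (hypothetical names, not from the article): a DTO with
// public writable fields and no behavior, one place that turns it into a
// wire message, and validation of DTOs arriving from clients.
public class ProductDto {
    // A DTO carries data only, so public fields without getters/setters are fine.
    public String name;
    public double price;

    // The one place where the DTO becomes a wire message. Changing the
    // serialization format (JSON, HAL, Siren, ...) touches only this spot.
    static String toJson(ProductDto dto) {
        return String.format("{\"name\":\"%s\",\"price\":%s}", dto.name, dto.price);
    }

    // Conversely, DTOs coming back from a client should be validated
    // before they are mapped onto domain objects.
    static void validate(ProductDto dto) {
        if (dto.name == null || dto.name.isEmpty())
            throw new IllegalArgumentException("name is required");
        if (dto.price < 0)
            throw new IllegalArgumentException("price must not be negative");
    }

    public static void main(String[] args) {
        ProductDto dto = new ProductDto();
        dto.name = "Plasma TV";
        dto.price = 199.99;
        validate(dto);
        System.out.println(toJson(dto)); // {"name":"Plasma TV","price":199.99}
    }
}
```

Note that the DTO itself stays behavior-free; the serialization and validation logic live outside it, so the domain model never needs to change for wire-format reasons.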

Scala snippets 3: Lists together with Map, flatmap, zip and reduce

You can’t really talk about Scala without going into the details of the map, flatMap, zip and reduce functions. With these functions it is very easy to process the contents of lists and work with the Option object. Note that on this site you can find some more snippets:

Scala snippets 1: Folding
Scala snippets 2: List symbol magic
Scala snippets 3: Lists together with Map, flatmap, zip and reduce

Let’s just start with map. With map we apply a function to each element of the list and return the results as a new list. We can use this to square each value in the list:

scala> list1
res3: List[Int] = List(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)

scala> list1.map(x=>x*x)
res4: List[Int] = List(0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100)

Some functions you apply to a list might result in Option elements. Take for instance the following function:

scala> val evenify = (x:Int) => if (x % 2 == 0) Some(x) else None
evenify: Int => Option[Int] = <function1>

scala> list1.map(evenify)
res6: List[Option[Int]] = List(Some(0), None, Some(2), None, Some(4), None, Some(6), None, Some(8), None, Some(10))

The problem in this case is that we’re often not that interested in the None results in our list. But how do we easily get them out? For this we can use flatMap. With flatMap we can process lists of sequences. We apply the provided function to each element of each sequence in the list and return a list that contains the elements of each sequence of the original list.
An example is much easier to understand:

scala> val list3 = 10 to 20 toList
list3: List[Int] = List(10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20)

scala> val list2 = 1 to 10 toList
list2: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)

scala> val list4 = List(list2, list3)
list4: List[List[Int]] = List(List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10), List(10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20))

scala> list4.flatMap(x=>x.map(y=>y*2))
res2: List[Int] = List(2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40)

As you can see, we have two lists. On the outer list we call the flatMap function, which processes each of the two entries individually. On each of the individual lists we call the map function to double each entry. The final result is a single flattened list that contains all the doubled entries. Now let’s look back at the evenify function we saw earlier and the list of Option elements we had:

scala> val list1 = 1 to 10 toList
list1: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)

scala> list1.map(evenify)
res3: List[Option[Int]] = List(None, Some(2), None, Some(4), None, Some(6), None, Some(8), None, Some(10))

scala> val list2 = list1.map(evenify)
list2: List[Option[Int]] = List(None, Some(2), None, Some(4), None, Some(6), None, Some(8), None, Some(10))

scala> list2.flatMap(x => x)
res6: List[Int] = List(2, 4, 6, 8, 10)

Easy, right? And of course we can also write this in one line:

scala> list1.flatMap(x=>evenify(x))
res14: List[Int] = List(2, 4, 6, 8, 10)

As you can see, not that difficult. Now let’s look at a couple of other functions you can use on lists. The first one is zip, and as the name implies, with this function we can combine two lists together.
scala> val list = "Hello.World".toCharArray
list: Array[Char] = Array(H, e, l, l, o, ., W, o, r, l, d)

scala> val list1 = 1 to 20 toList
list1: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20)

scala> list.zip(list1)
res30: Array[(Char, Int)] = Array((H,1), (e,2), (l,3), (l,4), (o,5), (.,6), (W,7), (o,8), (r,9), (l,10), (d,11))

scala> list1.zip(list)
res31: List[(Int, Char)] = List((1,H), (2,e), (3,l), (4,l), (5,o), (6,.), (7,W), (8,o), (9,r), (10,l), (11,d))

As soon as one list reaches the end, the zip function stops. We’ve also got a zipAll function, which also processes the leftover elements of the larger list:

scala> list.zipAll(list1,'a','1')
res33: Array[(Char, AnyVal)] = Array((H,1), (e,2), (l,3), (l,4), (o,5), (.,6), (W,7), (o,8), (r,9), (l,10), (d,11), (a,12), (a,13), (a,14), (a,15), (a,16), (a,17), (a,18), (a,19), (a,20))

If the list with characters is exhausted, we’ll place the letter ‘a’; if the list of integers is exhausted, we’ll place the character ‘1’. We’ve got one final zip function to explore, and that’s zipWithIndex. Once again, the name pretty much sums up what will happen: an index element will be added:

scala> list.zipWithIndex
res36: Array[(Char, Int)] = Array((H,0), (e,1), (l,2), (l,3), (o,4), (.,5), (W,6), (o,7), (r,8), (l,9), (d,10))

So on to the last of the functions to explore: reduce. With reduce we process all the elements in the list and return a single value.
With reduceLeft and reduceRight we can force the direction in which the values are processed (with reduce this isn’t guaranteed):

scala> list1
res51: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20)

scala> val sum = (x:Int, y:Int) => {println(x,y) ; x + y}
sum: (Int, Int) => Int = <function2>

scala> list1.reduce(sum)
(1,2) (3,3) (6,4) (10,5) (15,6) (21,7) (28,8) (36,9) (45,10) (55,11) (66,12) (78,13) (91,14) (105,15) (120,16) (136,17) (153,18) (171,19) (190,20)
res52: Int = 210

scala> list1.reduceLeft(sum)
(1,2) (3,3) (6,4) (10,5) (15,6) (21,7) (28,8) (36,9) (45,10) (55,11) (66,12) (78,13) (91,14) (105,15) (120,16) (136,17) (153,18) (171,19) (190,20)
res53: Int = 210

scala> list1.reduceRight(sum)
(19,20) (18,39) (17,57) (16,74) (15,90) (14,105) (13,119) (12,132) (11,144) (10,155) (9,165) (8,174) (7,182) (6,189) (5,195) (4,200) (3,204) (2,207) (1,209)
res54: Int = 210

Besides these functions we also have reduceOption (as well as the reduceLeftOption and reduceRightOption variants). These functions return an Option instead of a value. This can be used to safely process empty lists, which will result in None:

scala> list1.reduceRightOption(sum)
(19,20) (18,39) (17,57) (16,74) (15,90) (14,105) (13,119) (12,132) (11,144) (10,155) (9,165) (8,174) (7,182) (6,189) (5,195) (4,200) (3,204) (2,207) (1,209)
res65: Option[Int] = Some(210)

scala> val list3 = List()
list3: List[Nothing] = List()

scala> list3.reduceRightOption(sum)
res67: Option[Int] = None

Enough for this snippet and for exploring the List/Collections API for now. In the next snippet we’ll look at some Scalaz stuff, since even though that library has a bit of a reputation for being complex, it provides some really nice features.

Reference: Scala snippets 3: Lists together with Map, flatmap, zip and reduce from our JCG partner Jos Dirksen at the Smart Java blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.