
What's New Here?


Grails domain classes and special presentation requirements

In Grails we often use our domain objects directly as the backing model for presentation purposes, and only for specialized situations do we create value holder objects or DTOs. Beginning Grails developers know how to display individual entity properties pretty easily in a GSP, such as the name of a fictional domain class…

<g:each in="${breedingGoals}" var="breedingGoal"> ${breedingGoal.name} </g:each>

or

<g:select from="${breedingGoals}" optionKey="code" optionValue="name" />

but sometimes struggle when new presentational requirements come in and slightly more, or combined, information needs to be displayed. Special constructions are contrived, such as the following:

def list() {
    def breedingGoals = BreedingGoal.list()
    breedingGoals.each { it.name = it.name + " (" + it.code + ")" }
    render view: "viewName", model: [breedingGoals: breedingGoals]
}

to not only display the name any more, but also something else – e.g. its code. And that's when things get hairy! This looks like a sensible way to get the new, correct output in the browser. But it is not. In this case the entities are actually updated and leave the database in a very messed-up state after a few iterations. This is the point where seasoned developers tell beginning developers not to continue down that road. If you are one of the latter, read some of GORM's persistence basics for some background first, and get back here for some guidelines you could use, in order of my personal preference. My main rule of thumb is: do NOT replace data with a (re)presentational version of itself. It's not a fully comprehensive list, just enough to get you thinking about a few options, taking our aforementioned BreedingGoal use case as an example.

If the information is contained within the object itself and you have the object, just format it right on the spot:

<g:each in="${breedingGoals}" var="breedingGoal"> ${breedingGoal.name} (${breedingGoal.code}) </g:each>

A GSP is especially useful for presentation purposes. I would always try that first. Some people still get confused by the various Grails tags which seem to accept "just 1 property" of the domain class being iterated over, such as select:

<g:select from="${breedingGoals}" optionKey="code" optionValue="name + (code)? Arh! Now what?" />

Luckily, sometimes the tag author thought about this! In the case of select, optionValue can apply a transformation using a closure:

<g:select from="${breedingGoals}" optionKey="code" optionValue="${{ it.name + ' (' + it.code + ')' }}" />

If this newly formatted information is needed in more places, to stay DRY you can put it in the object itself. In your domain class, add a getter:

String getNamePresentation() {
    name + " (" + code + ")"
}

which can be called as if it were a property of the object itself:

<g:each in="${breedingGoals}" var="breedingGoal"> ${breedingGoal.namePresentation} </g:each>

If the information is NOT in the object itself, but needs to be retrieved or calculated with some other logic, you can use a Tag Library:

class BreedingGoalTagLib {
    def translationService

    def formatBreedingGoal = { attrs ->
        out << translationService.translate(attrs.code, user.lang)
    }
}

In this example some kind of translationService is used to get the actual information you want to display:

<g:each in="${breedingGoals}" var="breedingGoal"> <g:formatBreedingGoal code="${breedingGoal.code}" /> </g:each>

A TagLib is very easy to call from a GSP and has full access to collaborating services and the Grails environment.
If there's no additional calculation necessary, but you just have special (HTML) formatting needs, get the data and delegate to a template:

class BreedingGoalTagLib {
    def displayBreedingGoal = { attrs ->
        out << render(template: '/layouts/breedingGoal', bean: attrs.breedingGoal)
    }
}

where the specific template could contain some HTML for special markup:

<strong>${it.name}</strong> (${it.code})

Or, if you need to initialize a bunch of stuff once, or you're not really aiming for GSP display, you could create a (transient) property and fill it at initialization time. Adjust the domain class, make your new "presentation" property transient – so it won't get persisted to the database – and initialize it at some point.

class BreedingGoal {
    String code
    String name
    String namePresentation

    static transients = ['namePresentation']
}

class SomeInitializationService {
    // in some method...
    def breedingGoal = findBreedingGoal(...
    breedingGoal.namePresentation = translationService.translate(breedingGoal.code, user.lang)
    return breedingGoal
}

Friendly word of advice: don't try and litter your domain classes too much with transient fields for presentation purposes. If your domain class contains 50% properties with weird codes and you need the other half with associated "presentation" fields, you might be better off with a specialized value or DTO object.

Conclusion

You see there are various presentation options, and depending on where the information comes from and where it goes, some approaches are better suited than others. Don't be afraid to choose one and refactor to another approach when necessary, but I'd stick to the simplest option in the GSP first and try not to mangle your core data for presentation purposes too much!

Reference: Grails domain classes and special presentation requirements from our JCG partner Ted Vinke at the Ted Vinke's Blog blog.
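If you do reach the point where such a specialized value object makes sense, it can stay very small. The following is a sketch of my own (class and field names are made up, not taken from the original article), written as a plain Java-style class that a controller or service could populate and hand to the GSP instead of the domain object:

// Hypothetical read-only view object used purely for presentation.
// The domain class stays untouched; all formatting lives here.
public final class BreedingGoalView {

    private final String code;
    private final String name;
    private final String translatedName;

    public BreedingGoalView(String code, String name, String translatedName) {
        this.code = code;
        this.name = name;
        this.translatedName = translatedName;
    }

    public String getCode() { return code; }
    public String getName() { return name; }
    public String getTranslatedName() { return translatedName; }

    // Presentation-only formatting, e.g. "Longevity (LON)"
    public String getNamePresentation() {
        return name + " (" + code + ")";
    }
}

The GSP then simply reads ${breedingGoalView.namePresentation}, and no presentational string ever leaks back into persistent state.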

Run Java EE Tests on Docker using Arquillian Cube

Tech Tip #61 showed how to run the Java EE 7 Hands-on Lab using Docker. The Dockerfile used there can be used to create a new image that can deploy any Java EE 7 WAR file to the WildFly instance running in the container. For example, github.com/arun-gupta/docker-images/blob/master/javaee7-test/Dockerfile can be copied to the root directory of javaee7-samples and used to deploy the jaxrs-client.war file to the container. Of course, you first need to build the sample as:

mvn -f jaxrs/jaxrs-client/pom.xml clean package -DskipTests

The exact Dockerfile is shown here:

FROM arungupta/wildfly-centos
ADD jaxrs/jaxrs-client/target/jaxrs-client.war /opt/wildfly/standalone/deployments/

If you want to deploy another Java EE 7 application, then you need to do the following steps:

Create the WAR file of the sample
Change the Dockerfile
Build the image
Stop the previous container
Start the new container

Now, if you want to run tests against this instance then mvn test alone will not do it, because you either need to bind the IP address of the Docker container statically, or dynamically find out the address and then patch it at runtime. Anyway, the repeated cycle is a little too cumbersome. How do you solve it? Meet Arquillian Cube!

Arquillian Cube allows you to control the lifecycle of Docker images as part of the test lifecycle, either automatically or manually. The blog entry provides more details about getting started with Arquillian Cube, and this functionality has now been enabled in the "docker" branch of javaee7-samples. Arquillian Cube Extension Alpha2 was recently released and is used to provide the integration. Here are the key concepts:

A new "wildfly-docker-arquillian" profile is introduced.
The profile adds a dependency on:

<dependency>
  <groupId>org.arquillian.cube</groupId>
  <artifactId>arquillian-cube-docker</artifactId>
  <version>1.0.0.Alpha2</version>
  <scope>test</scope>
</dependency>

It uses the Docker REST API to talk to the container. The complete API docs show the sample payloads and explain the query parameters and status codes.
It uses the WildFly remote adapter to talk to the application server running within the container.
Configuration for the Docker image is specified as part of maven-surefire-plugin:

<configuration>
  <systemPropertyVariables>
    <arquillian.launch>wildfly-docker</arquillian.launch>
    <arq.container.wildfly-docker.configuration.username>admin</arq.container.wildfly-docker.configuration.username>
    <arq.container.wildfly-docker.configuration.password>Admin#70365</arq.container.wildfly-docker.configuration.password>
    <arq.extension.docker.serverVersion>1.15</arq.extension.docker.serverVersion>
    <arq.extension.docker.serverUri>http://127.0.0.1:2375</arq.extension.docker.serverUri>
    <arq.extension.docker.dockerContainers>
      wildfly-docker:
        image: arungupta/javaee7-samples-wildfly
        exposedPorts: [8080/tcp, 9990/tcp]
        await:
          strategy: polling
          sleepPollingTime: 50000
          iterations: 5
        portBindings: [8080/tcp, 9990/tcp]
    </arq.extension.docker.dockerContainers>
  </systemPropertyVariables>
</configuration>

The username and password specified are for the WildFly instance in the arungupta/javaee7-samples-wildfly image.
All the configuration values can be overridden by arquillian.xml for each test case, as explained here.

How do you try out this functionality?

git clone https://github.com/javaee-samples/javaee7-samples.git
git checkout docker
mvn test -f servlet/simple-servlet/pom.xml -Pwildfly-docker-arquillian

Here is a complete log of running simple-servlet test:

Running org.javaee7.servlet.metadata.complete.SimpleServletTest
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Dec 04, 2014 11:19:51 AM org.glassfish.jersey.filter.LoggingFilter log INFO: 1 * Sending client request on thread main 1 > GET http://127.0.0.1:2375/v1.15/_ping
Dec 04, 2014 11:19:51 AM org.glassfish.jersey.filter.LoggingFilter log INFO: 2 * Client response received on thread main 2 < 200 2 < Content-Length: 2 2 < Content-Type: application/json; charset=utf-8 2 < Date: Thu, 04 Dec 2014 19:19:51 GMT OK
Dec 04, 2014 11:19:51 AM org.glassfish.jersey.filter.LoggingFilter log INFO: 3 * Sending client request on thread main 3 > POST http://127.0.0.1:2375/v1.15/containers/create?name=wildfly-docker 3 > Accept: application/json 3 > Content-Type: application/json {"name":"wildfly-docker","Hostname":"","User":"","Memory":0,"MemorySwap":0,"CpuShares":0,"AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"PortSpecs":null,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":null,"Dns":null,"Image":"arungupta/javaee7-samples-wildfly","Volumes":{},"VolumesFrom":[],"WorkingDir":"","DisableNetwork":false,"ExposedPorts":{"8080/tcp":{},"9990/tcp":{}}}
Dec 04, 2014 11:19:51 AM org.glassfish.jersey.filter.LoggingFilter log INFO: 4 * Client response received on thread main 4 < 201 4 < Content-Length: 90 4 < Content-Type: application/json 4 < Date: Thu, 04 Dec 2014 19:19:51 GMT {"Id":"d2fc85815256be7540ae85fef1ecb26a666a41a591e2adfae8aa6a32fde3393b","Warnings":null}
Dec 04, 2014 11:19:51 AM org.arquillian.cube.impl.docker.DockerClientExecutor assignPorts INFO: Only exposed port is set and it will be used as port binding as well. 8080/tcp
Dec 04, 2014 11:19:51 AM org.arquillian.cube.impl.docker.DockerClientExecutor assignPorts INFO: Only exposed port is set and it will be used as port binding as well.
9990/tcp Dec 04, 2014 11:19:52 AM org.glassfish.jersey.filter.LoggingFilter log INFO: 5 * Sending client request on thread main 5 > POST http://127.0.0.1:2375/v1.15/containers/wildfly-docker/start 5 > Accept: application/json 5 > Content-Type: application/json {"containerId":"wildfly-docker","Binds":[],"Links":[],"LxcConf":null,"PortBindings":{"8080/tcp":[{"HostIp":"","HostPort":"8080"}],"9990/tcp":[{"HostIp":"","HostPort":"9990"}]},"PublishAllPorts":false,"Privileged":false,"Dns":null,"DnsSearch":null,"VolumesFrom":null,"NetworkMode":"bridge","Devices":null,"RestartPolicy":null,"CapAdd":null,"CapDrop":null} Dec 04, 2014 11:19:52 AM org.glassfish.jersey.filter.LoggingFilter log INFO: 6 * Client response received on thread main 6 < 204 6 < Date: Thu, 04 Dec 2014 19:19:52 GMT Dec 04, 2014 11:19:52 AM org.glassfish.jersey.filter.LoggingFilter log INFO: 7 * Sending client request on thread main 7 > GET http://127.0.0.1:2375/v1.15/containers/wildfly-docker/json 7 > Accept: application/json Dec 04, 2014 11:19:52 AM org.glassfish.jersey.filter.LoggingFilter log INFO: 8 * Client response received on thread main 8 < 200 8 < Content-Type: application/json 8 < Date: Thu, 04 Dec 2014 19:19:52 GMT 8 < Transfer-Encoding: chunked {"Args":["-b","0.0.0.0","-bmanagement","0.0.0.0"],"Config":{"AttachStderr":false,"AttachStdin":false,"AttachStdout":false,"Cmd":["/opt/wildfly/bin/standalone.sh","-b","0.0.0.0","-bmanagement","0.0.0.0"],"CpuShares":0,"Cpuset":"","Domainname":"","Entrypoint":null,"Env":["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","WILDFLY_VERSION=8.1.0.Final","JBOSS_HOME=/opt/wildfly"],"ExposedPorts":{"8080/tcp":{},"9990/tcp":{}},"Hostname":"d2fc85815256","Image":"arungupta/javaee7-samples-wildfly","Memory":0,"MemorySwap":0,"NetworkDisabled":false,"OnBuild":null,"OpenStdin":false,"PortSpecs":null,"SecurityOpt":null,"StdinOnce":false,"Tty":false,"User":"wildfly","Volumes":null,"WorkingDir":""},"Created":"2014-12-04T19:19:51.7226858Z","Driver":"devicemapper","ExecDriver":"native-0.2","HostConfig":{"Binds":[],"CapAdd":null,"CapDrop":null,"ContainerIDFile":"","Devices":null,"Dns":null,"DnsSearch":null,"ExtraHosts":null,"Links":null,"LxcConf":null,"NetworkMode":"bridge","PortBindings":{"8080/tcp":[{"HostIp":"","HostPort":"8080"}],"9990/tcp":[{"HostIp":"","HostPort":"9990"}]},"Privileged":false,"PublishAllPorts":false,"RestartPolicy":{"MaximumRetryCount":0,"Name":""},"VolumesFrom":null},"HostnamePath":"/var/lib/docker/containers/d2fc85815256be7540ae85fef1ecb26a666a41a591e2adfae8aa6a32fde3393b/hostname","HostsPath":"/var/lib/docker/containers/d2fc85815256be7540ae85fef1ecb26a666a41a591e2adfae8aa6a32fde3393b/hosts","Id":"d2fc85815256be7540ae85fef1ecb26a666a41a591e2adfae8aa6a32fde3393b","Image":"3d08e8466496412daadeba7bb35b5b64d29b32adedd64472ad775d6da5011913","MountLabel":"system_u:object_r:svirt_sandbox_file_t:s0:c34,c113","Name":"/wildfly-docker","NetworkSettings":{"Bridge":"docker0","Gateway":"172.17.42.1","IPAddress":"172.17.0.7","IPPrefixLen":16,"MacAddress":"02:42:ac:11:00:07","PortMapping":null,"Ports":{"8080/tcp":[{"HostIp":"0.0.0.0","HostPort":"8080"}],"9990/tcp":[{"HostIp":"0.0.0.0","HostPort":"9990"}]}},"Path":"/opt/wildfly/bin/standalone.sh","ProcessLabel":"system_u:system_r:svirt_lxc_net_t:s0:c34,c113","ResolvConfPath":"/var/lib/docker/containers/d2fc85815256be7540ae85fef1ecb26a666a41a591e2adfae8aa6a32fde3393b/resolv.conf","State":{"ExitCode":0,"FinishedAt":"0001-01-01T00:00:00Z","Paused":false,"Pid":11406,"Restarting":false,"Running":true,"StartedAt":"2014-12-04T19:19:
52.378418242Z"},"Volumes":{},"VolumesRW":{}}
Dec 04, 2014 11:20:44 AM org.xnio.Xnio <clinit> INFO: XNIO version 3.2.0.Beta2
Dec 04, 2014 11:20:44 AM org.xnio.nio.NioXnio <clinit> INFO: XNIO NIO Implementation Version 3.2.0.Beta2
Dec 04, 2014 11:20:44 AM org.jboss.remoting3.EndpointImpl <clinit> INFO: JBoss Remoting version (unknown)
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 65.635 sec - in org.javaee7.servlet.metadata.complete.SimpleServletTest
Dec 04, 2014 11:20:54 AM org.glassfish.jersey.filter.LoggingFilter log INFO: 9 * Sending client request on thread main 9 > POST http://127.0.0.1:2375/v1.15/containers/wildfly-docker/stop?t=10 9 > Accept: application/json 9 > Content-Type: application/json
Dec 04, 2014 11:21:04 AM org.glassfish.jersey.filter.LoggingFilter log INFO: 10 * Client response received on thread main 10 < 204 10 < Date: Thu, 04 Dec 2014 19:21:04 GMT
Dec 04, 2014 11:21:04 AM org.glassfish.jersey.filter.LoggingFilter log INFO: 11 * Sending client request on thread main 11 > DELETE http://127.0.0.1:2375/v1.15/containers/wildfly-docker?v=0&force=0 11 > Accept: application/json
Dec 04, 2014 11:21:05 AM org.glassfish.jersey.filter.LoggingFilter log INFO: 12 * Client response received on thread main 12 < 204 12 < Date: Thu, 04 Dec 2014 19:21:05 GMT
Results :
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO] --- maven-surefire-plugin:2.17:test (spock-test) @ simple-servlet ---
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:27.831s
[INFO] Finished at: Thu Dec 04 11:21:05 PST 2014
[INFO] Final Memory: 21M/59M
[INFO] ------------------------------------------------------------------------

The REST payloads from the client to the Docker server are shown here. This was verified on a Fedora 20 VirtualBox image. Here are some quick notes on setting it up there:

Install the required packages:

yum install docker-io git maven
yum upgrade selinux-policy

Configure Docker:

sudo vi /etc/sysconfig/docker
Change to "OPTIONS=--selinux-enabled -H tcp://127.0.0.1:2375 -H unix:///var/run/docker.sock"
sudo service docker start

Verify the Docker TCP configuration:

docker -H tcp://127.0.0.1:2375 version

Client version: 1.3.1
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): 4e9bbfa/1.3.1
OS/Arch (client): linux/amd64
Server version: 1.3.1
Server API version: 1.15
Go version (server): go1.3.3
Git commit (server): 4e9bbfa/1.3.1

Boot2docker on Mac still has issue #49, this is Alpha2 after all! Try some other Java EE 7 tests and file bugs here. Enjoy!

Reference: Run Java EE Tests on Docker using Arquillian Cube from our JCG partner Arun Gupta at the Miles to go 2.0 … blog.
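For readers who have not looked inside javaee7-samples, the shape of an Arquillian test that such a run exercises is roughly the following. This is an illustrative sketch of my own (class name, deployed resource and assertion are made up, it is not the actual SimpleServletTest); the deployment is pushed to the WildFly instance that Arquillian Cube starts inside the Docker container, and the test itself runs on the client side against the published port:

import java.net.URL;
import java.util.Scanner;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.arquillian.test.api.ArquillianResource;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.StringAsset;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.junit.Test;
import org.junit.runner.RunWith;

import static org.junit.Assert.assertEquals;

@RunWith(Arquillian.class)
public class HelloDockerTest {

    // testable = false keeps the test on the client side, so it only talks
    // HTTP to the container, just like the surefire-driven run in the log above.
    @Deployment(testable = false)
    public static WebArchive createDeployment() {
        return ShrinkWrap.create(WebArchive.class)
                .addAsWebResource(new StringAsset("Hello from the container"), "hello.txt");
    }

    // URL of the deployed archive, resolved by Arquillian at runtime.
    @ArquillianResource
    private URL base;

    @Test
    public void shouldServeResourceFromContainer() throws Exception {
        try (Scanner scanner = new Scanner(new URL(base, "hello.txt").openStream(), "UTF-8")) {
            assertEquals("Hello from the container", scanner.nextLine());
        }
    }
}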

Message Processing with Spring Integration

Spring Integration provides an extension of the Spring framework to support the well-known Enterprise Integration Patterns. It enables lightweight messaging within Spring-based applications and supports integration with external systems. One of the most important goals of Spring Integration is to provide a simple model for building maintainable and testable enterprise integration solutions.

Main Components

Message: a generic wrapper for any Java object, combined with metadata used by the framework while handling that object. It consists of a payload and header(s). The message payload can be any Java object, and the message headers are a String/Object map of header names and values. MessageBuilder is used to create messages covering payload and headers as follows:

import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

Message message = MessageBuilder.withPayload("Message Payload")
        .setHeader("Message_Header1", "Message_Header1_Value")
        .setHeader("Message_Header2", "Message_Header2_Value")
        .build();

Message Channel: a message channel is the component through which messages are moved, so it can be thought of as a pipe between message producer and consumer. A producer sends the message to a channel, and a consumer receives the message from the channel. A message channel may follow either point-to-point or publish/subscribe semantics. With a point-to-point channel, at most one consumer can receive each message sent to the channel. With publish/subscribe channels, multiple subscribers can receive each message sent to the channel. Spring Integration supports both of these. In this sample project, the direct channel and the null channel are used. The direct channel is the default channel type within Spring Integration and the simplest point-to-point channel option. The null channel is a dummy message channel to be used mainly for testing and debugging. It does not send the message from sender to receiver, but its send method always returns true and its receive method returns null. In addition to DirectChannel and NullChannel, Spring Integration provides other message channel implementations such as PublishSubscribeChannel, QueueChannel, PriorityChannel, RendezvousChannel, ExecutorChannel and ScopedChannel.

Message Endpoint: a message endpoint isolates application code from the infrastructure. In other words, it is an abstraction layer between the application code and the messaging framework.

Main Message Endpoints

Transformer: a message transformer is responsible for converting a message's content or structure and returning the modified message. For example, it may be used to transform the message payload from one format to another, or to modify message header values.

Filter: a message filter determines whether the message should be passed to the message channel.

Router: a message router decides what channel(s), if any, should receive the message next.

Splitter: a splitter breaks an incoming message into multiple messages and sends them to the appropriate channel.

Aggregator: an aggregator combines multiple messages into a single message.

Service Activator: a service activator is a generic endpoint for connecting a service instance to the messaging system.

Channel Adapter: a channel adapter is an endpoint that connects a message channel to an external system. Channel adapters may be either inbound or outbound. An inbound channel adapter endpoint connects an external system to a MessageChannel.
An outbound channel adapter endpoint connects a MessageChannel to an external system.

Messaging Gateway: a gateway is an entry point for the messaging system and hides the messaging API from the external system. It is bidirectional, covering request and reply channels. Spring Integration also provides various channel adapters and messaging gateways (for AMQP, File, Redis, Gemfire, Http, Jdbc, JPA, JMS, RMI, Stream, etc.) to support message-based communication with external systems. Please visit the Spring Integration Reference documentation for detailed information.

The following sample cargo messaging implementation shows the basic message endpoints' behaviour in an easy-to-follow way. The cargo messaging system listens for cargo messages from an external system by using a CargoGateway interface. Received cargo messages are processed by the CargoSplitter, CargoFilter, CargoRouter and CargoTransformer message endpoints. After that, successfully processed domestic and international cargo messages are sent to the CargoServiceActivator. The cargo messaging system's Spring Integration flow is as follows.

Let us take a look at the sample cargo messaging implementation.

Used Technologies

JDK 1.8.0_25
Spring 4.1.2
Spring Integration 4.1.0
Maven 3.2.2
Ubuntu 14.04

The project hierarchy is as follows.

STEP 1 : Dependencies

Dependencies are added to the Maven pom.xml.

<properties>
    <spring.version>4.1.2.RELEASE</spring.version>
    <spring.integration.version>4.1.0.RELEASE</spring.integration.version>
</properties>

<dependencies>
    <!-- Spring 4 dependencies -->
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <!-- Spring Integration dependencies -->
    <dependency>
        <groupId>org.springframework.integration</groupId>
        <artifactId>spring-integration-core</artifactId>
        <version>${spring.integration.version}</version>
    </dependency>
</dependencies>

STEP 2 : Cargo Builder

CargoBuilder is created to build Cargo requests.

public class Cargo {

    public enum ShippingType {
        DOMESTIC, INTERNATIONAL
    }

    private final long trackingId;
    private final String receiverName;
    private final String deliveryAddress;
    private final double weight;
    private final String description;
    private final ShippingType shippingType;
    private final int deliveryDayCommitment;
    private final int region;

    private Cargo(CargoBuilder cargoBuilder) {
        this.trackingId = cargoBuilder.trackingId;
        this.receiverName = cargoBuilder.receiverName;
        this.deliveryAddress = cargoBuilder.deliveryAddress;
        this.weight = cargoBuilder.weight;
        this.description = cargoBuilder.description;
        this.shippingType = cargoBuilder.shippingType;
        this.deliveryDayCommitment = cargoBuilder.deliveryDayCommitment;
        this.region = cargoBuilder.region;
    }

    // Getter methods...
    @Override
    public String toString() {
        return "Cargo [trackingId=" + trackingId + ", receiverName=" + receiverName
                + ", deliveryAddress=" + deliveryAddress + ", weight=" + weight
                + ", description=" + description + ", shippingType=" + shippingType
                + ", deliveryDayCommitment=" + deliveryDayCommitment + ", region=" + region + "]";
    }

    public static class CargoBuilder {
        private final long trackingId;
        private final String receiverName;
        private final String deliveryAddress;
        private final double weight;
        private final ShippingType shippingType;
        private int deliveryDayCommitment;
        private int region;
        private String description;

        public CargoBuilder(long trackingId, String receiverName, String deliveryAddress, double weight, ShippingType shippingType) {
            this.trackingId = trackingId;
            this.receiverName = receiverName;
            this.deliveryAddress = deliveryAddress;
            this.weight = weight;
            this.shippingType = shippingType;
        }

        public CargoBuilder setDeliveryDayCommitment(int deliveryDayCommitment) {
            this.deliveryDayCommitment = deliveryDayCommitment;
            return this;
        }

        public CargoBuilder setDescription(String description) {
            this.description = description;
            return this;
        }

        public CargoBuilder setRegion(int region) {
            this.region = region;
            return this;
        }

        public Cargo build() {
            Cargo cargo = new Cargo(this);
            if ((ShippingType.DOMESTIC == cargo.getShippingType())
                    && (cargo.getRegion() < 1 || cargo.getRegion() > 4)) {
                throw new IllegalStateException("Region is invalid! Cargo Tracking Id : " + cargo.getTrackingId());
            }
            return cargo;
        }
    }
}

STEP 3 : Cargo Message

CargoMessage is the parent class of the domestic and international cargo messages.

public class CargoMessage {

    private final Cargo cargo;

    public CargoMessage(Cargo cargo) {
        this.cargo = cargo;
    }

    public Cargo getCargo() {
        return cargo;
    }

    @Override
    public String toString() {
        return cargo.toString();
    }
}

STEP 4 : Domestic Cargo Message

The DomesticCargoMessage class models domestic cargo messages.

public class DomesticCargoMessage extends CargoMessage {

    public enum Region {
        NORTH(1), SOUTH(2), EAST(3), WEST(4);

        private int value;

        private Region(int value) {
            this.value = value;
        }

        public static Region fromValue(int value) {
            return Arrays.stream(Region.values())
                    .filter(region -> region.value == value)
                    .findFirst()
                    .get();
        }
    }

    private final Region region;

    public DomesticCargoMessage(Cargo cargo, Region region) {
        super(cargo);
        this.region = region;
    }

    public Region getRegion() {
        return region;
    }

    @Override
    public String toString() {
        return "DomesticCargoMessage [cargo=" + super.toString() + ", region=" + region + "]";
    }
}

STEP 5 : International Cargo Message

The InternationalCargoMessage class models international cargo messages.

public class InternationalCargoMessage extends CargoMessage {

    public enum DeliveryOption {
        NEXT_FLIGHT, PRIORITY, ECONOMY, STANDART
    }

    private final DeliveryOption deliveryOption;

    public InternationalCargoMessage(Cargo cargo, DeliveryOption deliveryOption) {
        super(cargo);
        this.deliveryOption = deliveryOption;
    }

    public DeliveryOption getDeliveryOption() {
        return deliveryOption;
    }

    @Override
    public String toString() {
        return "InternationalCargoMessage [cargo=" + super.toString() + ", deliveryOption=" + deliveryOption + "]";
    }
}

STEP 6 : Application Configuration

AppConfiguration is the configuration provider class for the Spring container. It creates the message channels and registers them with the Spring BeanFactory. @EnableIntegration enables the imported Spring Integration configuration, and @IntegrationComponentScan scans for Spring Integration specific components. Both of them arrived with Spring Integration 4.0.
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.IntegrationComponentScan;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.messaging.MessageChannel;

@Configuration
@ComponentScan("com.onlinetechvision.integration")
@EnableIntegration
@IntegrationComponentScan("com.onlinetechvision.integration")
public class AppConfiguration {

    /**
     * Creates a new cargoGWDefaultRequest channel and registers it with the BeanFactory.
     *
     * @return direct channel
     */
    @Bean
    public MessageChannel cargoGWDefaultRequestChannel() {
        return new DirectChannel();
    }

    /**
     * Creates a new cargoSplitterOutput channel and registers it with the BeanFactory.
     *
     * @return direct channel
     */
    @Bean
    public MessageChannel cargoSplitterOutputChannel() {
        return new DirectChannel();
    }

    /**
     * Creates a new cargoFilterOutput channel and registers it with the BeanFactory.
     *
     * @return direct channel
     */
    @Bean
    public MessageChannel cargoFilterOutputChannel() {
        return new DirectChannel();
    }

    /**
     * Creates a new cargoRouterDomesticOutput channel and registers it with the BeanFactory.
     *
     * @return direct channel
     */
    @Bean
    public MessageChannel cargoRouterDomesticOutputChannel() {
        return new DirectChannel();
    }

    /**
     * Creates a new cargoRouterInternationalOutput channel and registers it with the BeanFactory.
     *
     * @return direct channel
     */
    @Bean
    public MessageChannel cargoRouterInternationalOutputChannel() {
        return new DirectChannel();
    }

    /**
     * Creates a new cargoTransformerOutput channel and registers it with the BeanFactory.
     *
     * @return direct channel
     */
    @Bean
    public MessageChannel cargoTransformerOutputChannel() {
        return new DirectChannel();
    }
}

STEP 7 : Messaging Gateway

The CargoGateway interface exposes a domain-specific method to the application. In other words, it provides the application access to the messaging system. @MessagingGateway also arrived with Spring Integration 4.0 and simplifies gateway creation in the messaging system. Its default request channel is cargoGWDefaultRequestChannel.

import java.util.List;

import org.springframework.integration.annotation.Gateway;
import org.springframework.integration.annotation.MessagingGateway;
import org.springframework.messaging.Message;

import com.onlinetechvision.model.Cargo;

@MessagingGateway(name = "cargoGateway", defaultRequestChannel = "cargoGWDefaultRequestChannel")
public interface ICargoGateway {

    /**
     * Processes the cargo request.
     *
     * @param message SI Message covering the Cargo List payload and the Batch Cargo Id header.
     * @return operation result
     */
    @Gateway
    void processCargoRequest(Message<List<Cargo>> message);
}

STEP 8 : Messaging Splitter

The CargoSplitter listens to the cargoGWDefaultRequestChannel channel and breaks the incoming Cargo List into individual Cargo messages. Cargo messages are sent to the cargoSplitterOutputChannel.

import java.util.List;

import org.springframework.integration.annotation.MessageEndpoint;
import org.springframework.integration.annotation.Splitter;
import org.springframework.messaging.Message;

import com.onlinetechvision.model.Cargo;

@MessageEndpoint
public class CargoSplitter {

    /**
     * Splits the Cargo List into Cargo message(s).
     *
     * @param message SI Message covering the Cargo List payload and the Batch Cargo Id header.
     * @return cargo list
     */
    @Splitter(inputChannel = "cargoGWDefaultRequestChannel", outputChannel = "cargoSplitterOutputChannel")
    public List<Cargo> splitCargoList(Message<List<Cargo>> message) {
        return message.getPayload();
    }
}

STEP 9 : Messaging Filter

The CargoFilter determines whether the message should be passed on to the message channel. It listens to the cargoSplitterOutputChannel channel and filters out cargo messages exceeding the weight limit. If the cargo's weight is below the limit, the message is sent to the cargoFilterOutputChannel channel. If it exceeds the limit, the message is sent to the cargoFilterDiscardChannel channel.

import org.springframework.integration.annotation.Filter;
import org.springframework.integration.annotation.MessageEndpoint;

import com.onlinetechvision.model.Cargo;

@MessageEndpoint
public class CargoFilter {

    private static final long CARGO_WEIGHT_LIMIT = 1_000;

    /**
     * Checks the weight of the cargo and filters it out if it exceeds the limit.
     *
     * @param cargo Cargo message
     * @return check result
     */
    @Filter(inputChannel = "cargoSplitterOutputChannel", outputChannel = "cargoFilterOutputChannel", discardChannel = "cargoFilterDiscardChannel")
    public boolean filterIfCargoWeightExceedsLimit(Cargo cargo) {
        return cargo.getWeight() <= CARGO_WEIGHT_LIMIT;
    }
}
...
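The router endpoint described earlier would then pick a destination channel for each cargo that makes it through the filter. Purely as an illustration, and not code from the original post, a sketch of such a router could reuse the channel names defined in AppConfiguration above:

import org.springframework.integration.annotation.MessageEndpoint;
import org.springframework.integration.annotation.Router;

import com.onlinetechvision.model.Cargo;
import com.onlinetechvision.model.Cargo.ShippingType;

@MessageEndpoint
public class CargoRouter {

    /**
     * Decides which channel receives the cargo next, based on its shipping type.
     * The returned String is resolved by Spring Integration to the bean name
     * of the target message channel.
     */
    @Router(inputChannel = "cargoFilterOutputChannel")
    public String route(Cargo cargo) {
        return cargo.getShippingType() == ShippingType.DOMESTIC
                ? "cargoRouterDomesticOutputChannel"
                : "cargoRouterInternationalOutputChannel";
    }
}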

Accessing Meetup’s streaming API with RxNetty

This article will touch upon multiple subjects: reactive programming, HTTP, parsing JSON and integrating with a social API. All in one use case: we will load and process new meetup.com events in real time via the non-blocking RxNetty library, combining the power of the Netty framework and the flexibility of the RxJava library. Meetup provides a publicly available streaming API that pushes every single Meetup registered all over the world in real time. Just browse to stream.meetup.com/2/open_events and observe how chunks of JSON slowly appear on your screen. Every time someone creates a new event, a self-contained JSON document is pushed from the server to your browser. This means such a request never ends; instead we keep receiving partial data for as long as we want. We already examined a similar scenario in Turning Twitter4J into RxJava's Observable. Each new meetup event publishes a standalone JSON document, similar to this (lots of details omitted):

{
  "id" : "219088449",
  "name" : "Silver Wings Brunch",
  "time" : 1421609400000,
  "mtime" : 1417814004321,
  "duration" : 900000,
  "rsvp_limit" : 0,
  "status" : "upcoming",
  "event_url" : "http://www.meetup.com/Laguna-Niguel-Social-Networking-Meetup/events/219088449/",
  "group" : {
    "name" : "Former Flight Attendants South Orange and North San Diego Co",
    "state" : "CA"
    ...
  },
  "venue" : {
    "address_1" : "26860 Ortega Highway",
    "city" : "San Juan Capistrano",
    "country" : "US"
    ...
  },
  "venue_visibility" : "public",
  "visibility" : "public",
  "yes_rsvp_count" : 1
  ...
}

Every time our long-polling HTTP connection (with the Transfer-Encoding: chunked response header) pushes such a piece of JSON, we want to parse it and somehow pass it further. We hate callbacks, thus RxJava seems like a reasonable alternative (think: Observable<Event>).

Step 1: Receiving raw data with RxNetty

We can't use an ordinary HTTP client as they are focused on request-response semantics. There is no response here; we simply leave the connection open forever and consume data when it comes. RxJava has an out-of-the-box RxApacheHttp library, but it assumes the text/event-stream content type. Instead we will use the quite low-level, versatile RxNetty library. It's a wrapper around Netty (duh!) and is capable of implementing arbitrary TCP/IP (including HTTP) and UDP clients and servers. If you don't know Netty, it's packet- rather than stream-oriented, so we can expect one Netty event per each Meetup push. The API certainly isn't straightforward, but makes sense once you grok it:

HttpClient<ByteBuf, ByteBuf> httpClient = RxNetty.<ByteBuf, ByteBuf>newHttpClientBuilder("stream.meetup.com", 443)
        .pipelineConfigurator(new HttpClientPipelineConfigurator<>())
        .withSslEngineFactory(DefaultFactories.trustAll())
        .build();
final Observable<HttpClientResponse<ByteBuf>> responses = httpClient.submit(HttpClientRequest.createGet("/2/open_events"));
final Observable<ByteBuf> byteBufs = responses.flatMap(AbstractHttpContentHolder::getContent);
final Observable<String> chunks = byteBufs.map(content -> content.toString(StandardCharsets.UTF_8));

First we create an HttpClient and set up SSL (keep in mind that trustAll() with regards to server certificates is probably not the best production setting). Later we submit() a GET request and receive Observable<HttpClientResponse<ByteBuf>> in return. ByteBuf is Netty's abstraction over a bunch of bytes sent or received over the wire. This observable will tell us immediately about every piece of data received from Meetup. After extracting the ByteBuf from the response we turn it into a String containing the aforementioned JSON. So far so good, it works.
Step 2: Aligning packets with JSON documents

Netty is very powerful because it doesn't hide inherent complexity behind leaky abstractions. Every time something is received over the TCP/IP wire, we are notified. You might believe that when the server sends 100 bytes, Netty on the client side will notify us about these 100 bytes received. However, the TCP/IP stack is free to split and merge the data you send over the wire, especially since it is supposed to be a stream, so how it is split into packets should be irrelevant. This caveat is greatly explained in Netty's documentation. What does it mean to us? When Meetup sends a single event, we might receive just one String in the chunks observable. But just as well it can be divided into an arbitrary number of packets, thus chunks will emit multiple Strings. Even worse, if Meetup sends two events right after one another, they might fit in one packet. In that case chunks will emit one String with two independent JSON documents. As a matter of fact we can't assume any alignment between the JSON strings and the network packets received. All we know is that individual JSON documents representing events are separated by newlines. Amazingly, the official RxJavaString add-on has a method precisely for that:

Observable<String> jsonChunks = StringObservable.split(chunks, "\n");

Actually there is an even simpler StringObservable.byLine(chunks), but it uses the platform-dependent end-of-line. What split() does is best explained in the official documentation. Now we can safely parse each String emitted by jsonChunks.

Step 3: Parsing JSON

Interestingly, this step is not so straightforward. I admit, I sort-of enjoyed WSDL times because I could easily and predictably generate a Java model that follows the web service's contract. JSON, especially given the marginal market penetration of JSON Schema, is basically the Wild West of integration. Typically you are left with informal documentation or samples of requests and responses. No type information or format, no indication whether fields are mandatory, etc. Moreover, because I am reluctant to work with maps of maps (hi there, fellow Clojure programmers), in order to work with JSON-based REST services I have to write the mapping POJOs myself. Well, there are workarounds. First I took one representative example of the JSON produced by the Meetup streaming API and placed it in src/main/json/meetup/event.json. Then I used the jsonschema2pojo-maven-plugin (Gradle and Ant versions exist as well). The plugin's name is confusing: it can also work with a JSON example, not only a schema, to produce Java models:

<plugin>
    <groupId>org.jsonschema2pojo</groupId>
    <artifactId>jsonschema2pojo-maven-plugin</artifactId>
    <version>0.4.7</version>
    <configuration>
        <sourceDirectory>${basedir}/src/main/json/meetup</sourceDirectory>
        <targetPackage>com.nurkiewicz.meetup.generated</targetPackage>
        <includeHashcodeAndEquals>true</includeHashcodeAndEquals>
        <includeToString>true</includeToString>
        <initializeCollections>true</initializeCollections>
        <sourceType>JSON</sourceType>
        <useCommonsLang3>true</useCommonsLang3>
        <useJodaDates>true</useJodaDates>
        <useLongIntegers>true</useLongIntegers>
        <outputDirectory>target/generated-sources</outputDirectory>
    </configuration>
    <executions>
        <execution>
            <id>generate-sources</id>
            <phase>generate-sources</phase>
            <goals>
                <goal>generate</goal>
            </goals>
        </execution>
    </executions>
</plugin>

At this point Maven will create Event.java, Venue.java, Group.java, etc.,
compatible with Jackson:

private Event parseEventJson(String jsonStr) {
    try {
        return objectMapper.readValue(jsonStr, Event.class);
    } catch (IOException e) {
        throw new UncheckedIOException(e);
    }
}

It just works, sweet:

final Observable<Event> events = jsonChunks.map(this::parseEventJson);

Step 4: ???[1]

Step 5: PROFIT!!!

Having Observable<Event> we can implement some really interesting use cases. Want to find the names of all meetups in Poland that were just created? Sure!

events
    .filter(event -> event.getVenue() != null)
    .filter(event -> event.getVenue().getCountry().equals("pl"))
    .map(Event::getName)
    .forEach(System.out::println);

Looking for statistics on how many events are created per minute? No problem!

events
    .buffer(1, TimeUnit.MINUTES)
    .map(List::size)
    .forEach(count -> log.info("Count: {}", count));

Or maybe you want to continually search for the meetup furthest in the future, skipping those closer than ones already found?

events
    .filter(event -> event.getTime() != null)
    .scan(this::laterEventFrom)
    .distinct()
    .map(Event::getTime)
    .map(Instant::ofEpochMilli)
    .forEach(System.out::println);

//...

private Event laterEventFrom(Event first, Event second) {
    return first.getTime() > second.getTime() ? first : second;
}

This code filters out events without a known time, emits either the current event or the previous one (scan()), depending on which one was later, filters out duplicates and displays the time. This tiny program, running for a few minutes, already found one just-created meetup scheduled for November 2015 – and it's December 2014 as of this writing. The possibilities are endless. I hope I gave you a good grasp of how you can mash up various technologies together easily: reactive programming to write super-fast networking code, type-safe JSON parsing without boilerplate code and RxJava to quickly process streams of events. Enjoy!

Reference: Accessing Meetup's streaming API with RxNetty from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog.
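One small thing parseEventJson() above takes for granted is the objectMapper field, whose setup is not shown. A lenient Jackson configuration along the following lines (my assumption, not necessarily how the author wired it) keeps parsing from failing when Meetup sends fields that the generated POJOs don't map:

import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

// Ignore any JSON properties the generated Event/Venue/Group classes don't know
// about; the live Meetup payload may carry more fields than the sample event.json.
private final ObjectMapper objectMapper = new ObjectMapper()
        .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);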

Don’t be “Clever”: The Double Curly Braces Anti Pattern

From time to time, I find someone using the double curly braces anti pattern (also called double brace initialisation) in the wild. This time on Stack Overflow:

Map source = new HashMap(){{
    put("firstName", "John");
    put("lastName", "Smith");
    put("organizations", new HashMap(){{
        put("0", new HashMap(){{
            put("id", "1234");
        }});
        put("abc", new HashMap(){{
            put("id", "5678");
        }});
    }});
}};

In case you do not understand the syntax, it's actually easy. There are two elements:

We're creating anonymous classes that extend HashMap by writing: new HashMap() { }

In that anonymous class, we're using an instance initialiser to initialise the new anonymous HashMap subtype instance by writing things like: { put("id", "1234"); }

Essentially, these initialisers are just constructor code.

So, why is this called the Double Curly Braces Anti Pattern?

There are really three reasons for this to be an anti pattern:

1. Readability

This is the least important reason: readability. While it may be a bit easier to write, and feel a bit more like the equivalent data structure initialisation in JSON:

{
  "firstName" : "John",
  "lastName" : "Smith",
  "organizations" : {
    "0" : { "id", "1234" },
    "abc" : { "id", "5678" }
  }
}

And yes. It would be really awesome if Java had collection literals for List and Map types. Using double curly braces to emulate that is quirky and doesn't feel quite right, syntactically. But let's leave the area where we discuss taste and curly braces (we've done that before), because:

2. One type per instance

We're really creating one type per instance! Every time we create a new map this way, we're also implicitly creating a new non-reusable class just for that one simple instance of a HashMap. If you're doing this once, that might be fine. If you put this sort of code all over a huge application, you will put some unnecessary burden on your ClassLoader, which keeps references to all these class objects on your heap. Don't believe it? Compile the above code and check out the compiler output. It will look like this:

Test$1$1$1.class
Test$1$1$2.class
Test$1$1.class
Test$1.class
Test.class

Where Test.class is the only reasonable class here, the enclosing class. But that's still not the most important issue.

3. Memory leak!

The most important issue is the problem that all anonymous classes have. They contain a reference to their enclosing instance, and that is really a killer. Let's imagine you put your clever HashMap initialisation into an EJB or whatever really heavy object with a well-managed lifecycle, like this:

public class ReallyHeavyObject {

    // Just to illustrate...
    private int[] tonsOfValues;
    private Resource[] tonsOfResources;

    // This method almost does nothing
    public void quickHarmlessMethod() {
        Map source = new HashMap(){{
            put("firstName", "John");
            put("lastName", "Smith");
            put("organizations", new HashMap(){{
                put("0", new HashMap(){{
                    put("id", "1234");
                }});
                put("abc", new HashMap(){{
                    put("id", "5678");
                }});
            }});
        }};

        // Some more code here
    }
}

So this ReallyHeavyObject has tons of resources that need to be cleaned up correctly as soon as they're garbage collected, or whatever. But that doesn't matter for you when you're calling the quickHarmlessMethod(), which executes in no time. Fine.
Let's imagine some other developer who refactors that method to return your map, or even parts of your map:

public Map quickHarmlessMethod() {
    Map source = new HashMap(){{
        put("firstName", "John");
        put("lastName", "Smith");
        put("organizations", new HashMap(){{
            put("0", new HashMap(){{
                put("id", "1234");
            }});
            put("abc", new HashMap(){{
                put("id", "5678");
            }});
        }});
    }};

    return source;
}

Now you're in big, big trouble! You have now inadvertently exposed all the state from ReallyHeavyObject to the outside, because each of those inner classes holds a reference to the enclosing instance, which is the ReallyHeavyObject instance. Don't believe it? Let's run this program:

public static void main(String[] args) throws Exception {
    Map map = new ReallyHeavyObject().quickHarmlessMethod();
    Field field = map.getClass().getDeclaredField("this$0");
    field.setAccessible(true);
    System.out.println(field.get(map).getClass());
}

This program returns:

class ReallyHeavyObject

Yes, indeed! If you still don't believe it, you can use a debugger to introspect the returned map. You will see the enclosing instance reference right there in your anonymous HashMap subtype. And all the nested anonymous HashMap subtypes also hold such a reference.

So, please, never use this anti pattern

You might say that one way to circumvent all that hassle from issue 3 is to make quickHarmlessMethod() a static method to prevent that enclosing instance, and you're right about that. But the worst thing that we've seen in the above code is the fact that even if you know what you are doing with your map that you might be creating in a static context, the next developer might not notice that and refactor / remove static again. They might store the Map in some other singleton instance and there is literally no way to tell from the code itself that there might just be a dangling, useless reference to ReallyHeavyObject. Inner classes are a beast. They have caused a lot of trouble and cognitive dissonance in the past. Anonymous inner classes can be even worse, because readers of such code might really be completely oblivious of the fact that they're enclosing an outer instance and that they're passing around this enclosed outer instance.

The conclusion is: don't be clever, don't ever use double brace initialisation.

Reference: Don't be "Clever": The Double Curly Braces Anti Pattern from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.
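And if you simply need that nested map without being clever, plain old put() calls do the job, with no extra classes compiled and no reference to any enclosing instance. A quick sketch:

// No anonymous subclasses: every object here is a plain java.util.HashMap.
Map<String, Object> id1 = new HashMap<>();
id1.put("id", "1234");

Map<String, Object> id2 = new HashMap<>();
id2.put("id", "5678");

Map<String, Object> organizations = new HashMap<>();
organizations.put("0", id1);
organizations.put("abc", id2);

Map<String, Object> source = new HashMap<>();
source.put("firstName", "John");
source.put("lastName", "Smith");
source.put("organizations", organizations);

More verbose, yes, but the compiler output is exactly one class and the map can be safely returned from any method.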

Creating a REST API with Spring Boot and MongoDB

Spring Boot is an opinionated framework that simplifies the development of Spring applications. It frees us from the slavery of complex configuration files and helps us to create standalone Spring applications that don't need an external servlet container. This sounds almost too good to be true, but Spring Boot can really do all this. This blog post demonstrates how easy it is to implement a REST API that provides CRUD operations for todo entries that are saved to a MongoDB database. Let's start by creating our Maven project.

Note: This blog post assumes that you have already installed the MongoDB database. If you haven't done this, you can follow the instructions given in the blog post titled: Accessing Data with MongoDB.

Creating Our Maven Project

We can create our Maven project by following these steps:

Use the spring-boot-starter-parent POM as the parent POM of our Maven project. This ensures that our project inherits sensible default settings from Spring Boot.
Add the Spring Boot Maven Plugin to our project. This plugin allows us to package our application into an executable jar file, package it into a war archive, and run the application.
Configure the dependencies of our project. We need to configure the following dependencies: the spring-boot-starter-web dependency provides the dependencies of a web application, and the spring-data-mongodb dependency provides integration with the MongoDB document database.
Enable the Java 8 support of Spring Boot.
Configure the main class of our application. This class is responsible for configuring and starting our application.

The relevant part of our pom.xml file looks as follows:

<properties>
    <!-- Enable Java 8 -->
    <java.version>1.8</java.version>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <!-- Configure the main class of our Spring Boot application -->
    <start-class>com.javaadvent.bootrest.TodoAppConfig</start-class>
</properties>

<!-- Inherit defaults from Spring Boot -->
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.1.9.RELEASE</version>
</parent>

<dependencies>
    <!-- Get the dependencies of a web application -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>

    <!-- Spring Data MongoDB -->
    <dependency>
        <groupId>org.springframework.data</groupId>
        <artifactId>spring-data-mongodb</artifactId>
    </dependency>
</dependencies>

<build>
    <plugins>
        <!-- Spring Boot Maven Support -->
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>

Additional Reading:

Spring Boot Reference Manual: 9.1.1 Maven installation
Spring Boot Reference Manual: 12.1 Maven
Spring Boot Maven Plugin – Usage

Let's move on and find out how we can configure our application.

Configuring Our Application

We can configure our Spring Boot application by following these steps:

Create a TodoAppConfig class in the com.javaadvent.bootrest package.
Enable Spring Boot auto-configuration.
Configure the Spring container to scan components found from the child packages of the com.javaadvent.bootrest package.
Add the main() method to the TodoAppConfig class and implement it by running our application.

The source code of the TodoAppConfig class looks as follows:

package com.javaadvent.bootrest;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableAutoConfiguration
@ComponentScan
public class TodoAppConfig {
    public static void main(String[] args) {
        SpringApplication.run(TodoAppConfig.class, args);
    }
}

We have now created the configuration class that configures and runs our Spring Boot application. Because the MongoDB jars are found from the classpath, Spring Boot configures the MongoDB connection by using its default settings.

Additional Reading:

Spring Boot Reference Manual: 13.2 Locating the main application class
Spring Boot Reference Manual: 14. Configuration classes
The Javadoc of the @EnableAutoConfiguration annotation
Spring Boot Reference Manual: 15. Auto-configuration
The Javadoc of the SpringApplication class
Spring Boot Reference Manual: 27.2.1 Connecting to a MongoDB database

Let's move on and implement our REST API.

Implementing Our REST API

We need to implement a REST API that provides CRUD operations for todo entries. The requirements of our REST API are:

A POST request sent to the url '/api/todo' must create a new todo entry by using the information found from the request body and return the information of the created todo entry.
A DELETE request sent to the url '/api/todo/{id}' must delete the todo entry whose id is found from the url and return the information of the deleted todo entry.
A GET request sent to the url '/api/todo' must return all todo entries that are found from the database.
A GET request sent to the url '/api/todo/{id}' must return the information of the todo entry whose id is found from the url.
A PUT request sent to the url '/api/todo/{id}' must update the information of an existing todo entry by using the information found from the request body and return the information of the updated todo entry.

We can fulfill these requirements by following these steps:

Create the entity that contains the information of a single todo entry.
Create the repository that is used to save todo entries to the MongoDB database and find todo entries from it.
Create the service layer that is responsible for mapping DTOs into domain objects and vice versa. The purpose of our service layer is to isolate our domain model from the web layer.
Create the controller class that processes HTTP requests and returns the correct response back to the client.

Note: This example is so simple that we could just inject our repository into our controller. However, because this is not a viable strategy when we are implementing real-life applications, we will add a service layer between the web and repository layers. Let's get started.

Creating the Entity

We need to create the entity class that contains the information of a single todo entry. We can do this by following these steps:

Add the id, description, and title fields to the created entity class.
Configure the id field of the entity by annotating the id field with the @Id annotation.
Specify the constants (MAX_LENGTH_DESCRIPTION and MAX_LENGTH_TITLE) that specify the maximum length of the description and title fields.
Add a static builder class to the entity class. This class is used to create new Todo objects.
Add an update() method to the entity class.
This method simply updates the title and description of the entity if valid values are given as method parameters.

The source code of the Todo class looks as follows:

import org.springframework.data.annotation.Id;

import static com.javaadvent.bootrest.util.PreCondition.isTrue;
import static com.javaadvent.bootrest.util.PreCondition.notEmpty;
import static com.javaadvent.bootrest.util.PreCondition.notNull;

final class Todo {

    static final int MAX_LENGTH_DESCRIPTION = 500;
    static final int MAX_LENGTH_TITLE = 100;

    @Id
    private String id;

    private String description;

    private String title;

    public Todo() {}

    private Todo(Builder builder) {
        this.description = builder.description;
        this.title = builder.title;
    }

    static Builder getBuilder() {
        return new Builder();
    }

    //Other getters are omitted

    public void update(String title, String description) {
        checkTitleAndDescription(title, description);

        this.title = title;
        this.description = description;
    }

    /**
     * We don't have to use the builder pattern here because the constructed
     * class has only two String fields. However, I use the builder pattern
     * in this example because it makes the code a bit easier to read.
     */
    static class Builder {

        private String description;

        private String title;

        private Builder() {}

        Builder description(String description) {
            this.description = description;
            return this;
        }

        Builder title(String title) {
            this.title = title;
            return this;
        }

        Todo build() {
            Todo build = new Todo(this);

            build.checkTitleAndDescription(build.getTitle(), build.getDescription());

            return build;
        }
    }

    private void checkTitleAndDescription(String title, String description) {
        notNull(title, "Title cannot be null");
        notEmpty(title, "Title cannot be empty");
        isTrue(title.length() <= MAX_LENGTH_TITLE,
                "Title cannot be longer than %d characters",
                MAX_LENGTH_TITLE
        );

        if (description != null) {
            isTrue(description.length() <= MAX_LENGTH_DESCRIPTION,
                    "Description cannot be longer than %d characters",
                    MAX_LENGTH_DESCRIPTION
            );
        }
    }
}

Additional Reading:

Item 2: Consider a builder when faced with many constructor parameters

Let's move on and create the repository that communicates with the MongoDB database.

Creating the Repository

We have to create the repository interface that is used to save Todo objects to the MongoDB database and retrieve Todo objects from it. If we don't want to use the Java 8 support of Spring Data, we could create our repository by creating an interface that extends the CrudRepository<T, ID> interface. However, because we want to use the Java 8 support, we have to follow these steps:

Create an interface that extends the Repository<T, ID> interface.
Add the following repository methods to the created interface:

The void delete(Todo deleted) method deletes the todo entry that is given as a method parameter.
The List<Todo> findAll() method returns all todo entries that are found from the database.
The Optional<Todo> findOne(String id) method returns the information of a single todo entry. If no todo entry is found, this method returns an empty Optional.
The Todo save(Todo saved) method saves a new todo entry to the database and returns the saved todo entry.

The source code of the TodoRepository interface looks as follows:

import org.springframework.data.repository.Repository;

import java.util.List;
import java.util.Optional;

interface TodoRepository extends Repository<Todo, String> {

    void delete(Todo deleted);

    List<Todo> findAll();

    Optional<Todo> findOne(String id);

    Todo save(Todo saved);
}

Additional Reading:

The Javadoc of the CrudRepository<T, ID> interface
The Javadoc of the Repository<T, ID> interface
Spring Data MongoDB Reference Manual: 5. Working with Spring Data Repositories
Spring Data MongoDB Reference Manual: 5.3.1 Fine-tuning repository definition

Let's move on and create the service layer of our example application.

Creating the Service Layer

First, we have to create a service interface that provides CRUD operations for todo entries. The source code of the TodoService interface looks as follows:

import java.util.List;

interface TodoService {

    TodoDTO create(TodoDTO todo);

    TodoDTO delete(String id);

    List<TodoDTO> findAll();

    TodoDTO findById(String id);

    TodoDTO update(TodoDTO todo);
}

The TodoDTO class is a DTO that contains the information of a single todo entry. We will talk more about it when we create the web layer of our example application. Second, we have to implement the TodoService interface. We can do this by following these steps:

Inject our repository into the service class by using constructor injection.
Add a private Todo findTodoById(String id) method to the service class and implement it by either returning the found Todo object or throwing the TodoNotFoundException.
Add a private TodoDTO convertToDTO(Todo model) method to the service class and implement it by converting the Todo object into a TodoDTO object and returning the created object.
Add a private List<TodoDTO> convertToDTOs(List<Todo> models) method and implement it by converting the list of Todo objects into a list of TodoDTO objects and returning the created list.
Implement the TodoDTO create(TodoDTO todo) method. This method creates a new Todo object, saves the created object to the MongoDB database, and returns the information of the created todo entry.
Implement the TodoDTO delete(String id) method. This method finds the deleted Todo object, deletes it, and returns the information of the deleted todo entry. If no Todo object is found with the given id, this method throws the TodoNotFoundException.
Implement the List<TodoDTO> findAll() method. This method retrieves all Todo objects from the database, transforms them into a list of TodoDTO objects, and returns the created list.
Implement the TodoDTO findById(String id) method. This method finds the Todo object from the database, converts it into a TodoDTO object, and returns the created TodoDTO object. If no todo entry is found, this method throws the TodoNotFoundException.
Implement the TodoDTO update(TodoDTO todo) method. This method finds the updated Todo object from the database, updates its title and description, saves it, and returns the updated information.
The source code of the MongoDBTodoService looks as follows:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.util.List;
import java.util.Optional;

import static java.util.stream.Collectors.toList;

@Service
final class MongoDBTodoService implements TodoService {

    private final TodoRepository repository;

    @Autowired
    MongoDBTodoService(TodoRepository repository) {
        this.repository = repository;
    }

    @Override
    public TodoDTO create(TodoDTO todo) {
        Todo persisted = Todo.getBuilder()
                .title(todo.getTitle())
                .description(todo.getDescription())
                .build();
        persisted = repository.save(persisted);
        return convertToDTO(persisted);
    }

    @Override
    public TodoDTO delete(String id) {
        Todo deleted = findTodoById(id);
        repository.delete(deleted);
        return convertToDTO(deleted);
    }

    @Override
    public List<TodoDTO> findAll() {
        List<Todo> todoEntries = repository.findAll();
        return convertToDTOs(todoEntries);
    }

    private List<TodoDTO> convertToDTOs(List<Todo> models) {
        return models.stream()
                .map(this::convertToDTO)
                .collect(toList());
    }

    @Override
    public TodoDTO findById(String id) {
        Todo found = findTodoById(id);
        return convertToDTO(found);
    }

    @Override
    public TodoDTO update(TodoDTO todo) {
        Todo updated = findTodoById(todo.getId());
        updated.update(todo.getTitle(), todo.getDescription());
        updated = repository.save(updated);
        return convertToDTO(updated);
    }

    private Todo findTodoById(String id) {
        Optional<Todo> result = repository.findOne(id);
        return result.orElseThrow(() -> new TodoNotFoundException(id));
    }

    private TodoDTO convertToDTO(Todo model) {
        TodoDTO dto = new TodoDTO();

        dto.setId(model.getId());
        dto.setTitle(model.getTitle());
        dto.setDescription(model.getDescription());

        return dto;
    }
}

We have now created the service layer of our example application.
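The findTodoById() method above throws a TodoNotFoundException, another class that is not shown in this excerpt. A minimal version of it (an illustrative sketch, not necessarily identical to the author's class) only needs to carry the id of the missing entry:

public class TodoNotFoundException extends RuntimeException {

    private final String id;

    public TodoNotFoundException(String id) {
        super(String.format("No todo entry found with id: %s", id));
        this.id = id;
    }

    public String getId() {
        return id;
    }
}

Because it extends RuntimeException, it can bubble up from the service layer to the controller's @ExceptionHandler method without being declared in every method signature.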
Let's move on and create the controller class.

Creating the Controller Class

First, we need to create the DTO class that contains the information of a single todo entry and specifies the validation rules that are used to ensure that only valid information can be saved to the database. The source code of the TodoDTO class looks as follows:

import org.hibernate.validator.constraints.NotEmpty;

import javax.validation.constraints.Size;

public final class TodoDTO {

    private String id;

    @Size(max = Todo.MAX_LENGTH_DESCRIPTION)
    private String description;

    @NotEmpty
    @Size(max = Todo.MAX_LENGTH_TITLE)
    private String title;

    //Constructor, getters, and setters are omitted
}

Additional Reading: The Reference Manual of Hibernate Validator 5.0.3

Second, we have to create the controller class that processes the HTTP requests sent to our REST API and sends the correct response back to the client. We can do this by following these steps:

- Inject our service into our controller by using constructor injection.
- Add a create() method to our controller and implement it by following these steps: read the information of the created todo entry from the request body, validate the information of the created todo entry, create a new todo entry and return the created todo entry, and set the response status to 201.
- Implement the delete() method by delegating the id of the deleted todo entry forward to our service and returning the deleted todo entry.
- Implement the findAll() method by finding the todo entries from the database and returning the found todo entries.
- Implement the findById() method by finding the todo entry from the database and returning the found todo entry.
- Implement the update() method by following these steps: read the information of the updated todo entry from the request body, validate the information of the updated todo entry, then update the information of the todo entry and return the updated todo entry.
- Create an @ExceptionHandler method that sets the response status to 404 if the todo entry was not found (TodoNotFoundException was thrown).

The source code of the TodoController class looks as follows:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseStatus;
import org.springframework.web.bind.annotation.RestController;

import javax.validation.Valid;
import java.util.List;

@RestController
@RequestMapping("/api/todo")
final class TodoController {

    private final TodoService service;

    @Autowired
    TodoController(TodoService service) {
        this.service = service;
    }

    @RequestMapping(method = RequestMethod.POST)
    @ResponseStatus(HttpStatus.CREATED)
    TodoDTO create(@RequestBody @Valid TodoDTO todoEntry) {
        return service.create(todoEntry);
    }

    @RequestMapping(value = "{id}", method = RequestMethod.DELETE)
    TodoDTO delete(@PathVariable("id") String id) {
        return service.delete(id);
    }

    @RequestMapping(method = RequestMethod.GET)
    List<TodoDTO> findAll() {
        return service.findAll();
    }

    @RequestMapping(value = "{id}", method = RequestMethod.GET)
    TodoDTO findById(@PathVariable("id") String id) {
        return service.findById(id);
    }

    @RequestMapping(value = "{id}", method = RequestMethod.PUT)
    TodoDTO update(@RequestBody @Valid TodoDTO todoEntry) {
        return service.update(todoEntry);
    }

    @ExceptionHandler
    @ResponseStatus(HttpStatus.NOT_FOUND)
    public void handleTodoNotFound(TodoNotFoundException ex) {
    }
}

Note: If the validation fails, our REST API returns the validation errors as JSON and sets the response status to 400. If you want to know more about this, read a blog post titled: Spring from the Trenches: Adding Validation to a REST API. (A minimal sketch of such a validation error handler appears right after this article.)

That is it. We have now created a REST API that provides CRUD operations for todo entries and saves them to a MongoDB database. Let's summarize what we learned from this blog post.

Summary

This blog post has taught us three things:

- We can get the required dependencies with Maven by declaring only two dependencies: spring-boot-starter-web and spring-data-mongodb.
- If we are happy with the default configuration of Spring Boot, we can configure our web application by using its auto-configuration support and "dropping" new jars onto the classpath.
- We learned to create a simple REST API that saves information to a MongoDB database and finds information from it.

You can get the example application of this blog post from Github.

Reference: Creating a REST API with Spring Boot and MongoDB from our JCG partner Attila Mihaly Balazs at the Java Advent Calendar blog....
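As promised above, here is a rough sketch of the kind of handler that turns bean-validation failures into a 400 response with a JSON body. It is not taken from the example application; the class and its field names are invented for the illustration, but it shows the usual Spring MVC approach of catching MethodArgumentNotValidException in a @ControllerAdvice:

import org.springframework.http.HttpStatus;
import org.springframework.validation.FieldError;
import org.springframework.web.bind.MethodArgumentNotValidException;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.ResponseStatus;

import java.util.ArrayList;
import java.util.List;

@ControllerAdvice
public class ValidationErrorHandler {

    // Simple error DTO invented for this sketch; Jackson serializes its public fields to JSON.
    public static class ValidationError {
        public String field;
        public String message;

        ValidationError(String field, String message) {
            this.field = field;
            this.message = message;
        }
    }

    @ExceptionHandler(MethodArgumentNotValidException.class)
    @ResponseStatus(HttpStatus.BAD_REQUEST)
    @ResponseBody
    public List<ValidationError> handleValidationError(MethodArgumentNotValidException ex) {
        List<ValidationError> errors = new ArrayList<>();
        for (FieldError fieldError : ex.getBindingResult().getFieldErrors()) {
            errors.add(new ValidationError(fieldError.getField(), fieldError.getDefaultMessage()));
        }
        return errors;
    }
}

With this in place, posting a TodoDTO with an empty title would come back as a 400 response listing the title field and its @NotEmpty message.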
Grails Tutorial for Beginners – Grails Service Layer

This tutorial will discuss the importance of the service layer in Grails and how to work with it. It also explains transaction management and how to utilize it.

Introduction

Separation Of Concerns

Consider this analogy: imagine a company where employees are assigned tasks of a very different nature. For example, say there is an employee named John Doe with the following responsibilities:

- Handle accounting and releasing of check payments
- Take calls from customers for product support
- Handle administrative tasks such as booking flights plus accommodation of executives
- Manage the schedule of the CEO

As you can see, John's work is too complicated because he needs to multi-task on very different types of tasks. He needs to change his frame of mind when switching from one task to another. He is more likely to be stressed out and commit many mistakes. His errors could cost a lot of money in the long run. Another problem is that it's not easy to replace John later on, as he is too involved in a complicated setup. Likewise in software engineering, it is not a good idea to write a class that has responsibilities of a very different nature. The general consensus of experts is that a single class or source file should be involved in only one kind of task. This is called "separation of concerns". If there is a lot going on, it will only introduce bugs and problems later, as the application will be very complicated to maintain. Although this concept is simple to state, the effect on a project is enormous! Consider a controller in Grails. Inside a controller, we can do the following:

- Handle routing logic
- Invoke GORM operations to manipulate data in the database
- Render text and show it to the user

However, it is not advisable to do all those things inside a controller. Grails allows a developer to do all these things together for flexibility, but it should be avoided. The real purpose of a controller is to deal with routing logic, which means:

- Receive requests from users
- Invoke the most appropriate business logic
- Invoke the view to display the result

View logic should be taken care of inside Groovy Server Pages (GSP) files. Read this previous tutorial about GSPs if you are unfamiliar with them. Business logic should be implemented inside the service layer. Grails has default support and handling for the service layer.

Don't Repeat Yourself (DRY) Principle

Another benefit of using a service layer is that you can reuse business logic in multiple places without code duplication. Having a single copy of a particular piece of business logic will make a project shorter (in terms of lines of code) and easier to maintain. Changing the business logic will require a change in only one place. Not having to duplicate code is part of another best practice called the Don't Repeat Yourself (DRY) principle.

Create a Service

To create a service class, invoke the create-service command from the root folder of your project. For example, use the following command inside a command line or terminal:

grails create-service asia.grails.sample.Test

You can also create a service class inside the GGTS IDE: just right-click the project, select New and then Service, provide the name and click Finish. Below is the resulting class created. A default method is provided as an example. This is where we will write business logic.

package asia.grails.sample

class TestService {

    def serviceMethod() {
    }
}

Just add as many functions as needed that pertain to business logic and GORM operations.
Here is an example:

package asia.grails.sample

class StudentService {

    Student createStudent(String lastName, String firstName) {
        Student student = new Student()
        student.lastName = lastName
        student.firstName = firstName
        student.referenceNumber = generateReferenceNumber()
        student.save()
        return student
    }

    private String generateReferenceNumber() {
        // some specific logic for generating reference number
        return referenceNumber
    }
}

Injecting a Service

The service is automatically injected inside a controller by just defining a variable with the proper name (use a studentService variable to inject a StudentService instance). Example:

package asia.grails.sample

class MyController {

    def studentService

    def displayForm() {
    }

    def handleFormSubmit() {
        def student = studentService.createStudent(params.lastName, params.firstName)
        [student:student]
    }
}

If you are not familiar with Spring and the concept of injection, what this means is that you don't need to do anything special. Just declare the variable studentService and the Grails framework will automatically assign an instance to it. Just declare it and use it right away. You can also inject a service into another service. For example:

package asia.grails.sample

class PersonService {

    Person createPerson(String lastName, String firstName) {
        Person p = new Person()
        p.lastName = lastName
        p.firstName = firstName
        p.save()
        return p
    }
}

package asia.grails.sample

class EmployeeService {

    def personService

    Employee createEmployee(String lastName, String firstName) {
        Person person = personService.createPerson(lastName, firstName)
        Employee emp = new Employee()
        emp.person = person
        emp.employmentDate = new Date()
        emp.save()
        return emp
    }
}

You can also inject a service inside BootStrap.groovy. For example:

class BootStrap {

    def studentService

    def init = { servletContext ->
        if ( Student.count() == 0 ) {
            // if no students in the database, create some test data
            studentService.createStudent("Doe", "John")
            studentService.createStudent("Smith", "Jame")
        }
    }
}

You can also inject a service inside a tag library. For example:

package asia.grails.sample

class StudentService {

    def listStudents() {
        return Student.list()
    }
}

class MyTagLib {

    StudentService studentService

    static namespace = "my"

    def renderStudents = {
        def students = studentService.listStudents()
        students.each { student ->
            out << "<div>Hi ${student.firstName} ${student.lastName}, welcome!</div>"
        }
    }
}

Transaction Management

If you are new to working with databases, transactions are a very important concept. Usually, we want a certain sequence of database changes to all be successful. If that is not possible, we want no operation to happen at all. For example, consider this code to transfer funds between two bank accounts:

class AccountService {

    def transferFunds(long accountFromID, long accountToID, long amount) {
        Account accountFrom = Account.get(accountFromID)
        Account accountTo = Account.get(accountToID)
        accountFrom.balance = accountFrom.balance - amount
        accountTo.balance = accountTo.balance + amount
    }
}

This code deducts money from one account (accountFrom.balance = accountFrom.balance - amount), and adds money to another account (accountTo.balance = accountTo.balance + amount). Imagine if something happened (an Exception) after deducting from the source account and the destination account was not updated. Money would be lost and not accounted for. For this type of code, we want an "all or nothing" behavior. This concept is also called atomicity.
For the scenario given above, transaction management is required to achieve the desired behavior. The program starts a conversation with the database where any update operations are just written in a temporary space (like scratch paper). The program later needs to tell the database whether it wants to make the changes final, or to scrap everything it did earlier. Since Grails supports transactions, it automatically does these things for us when we declare a service to be transactional:

- If all db operations are successful, reflect the changes to the database (this is also called commit)
- If one db operation results in an exception, return to the original state and forget/undo all the previous operations (this is also called rollback)

Transaction Declaration

Grails supports transaction management inside services. By default, all services are transactional, so these 3 declarations have the same effect:

class CountryService {
}

class CountryService {
    static transactional = true
}

@Transactional
class CountryService {
}

For readability, I suggest declaring the static transactional property at the top of each service. Note that not all applications need transactions. Non-sensitive programs, such as blog software, can survive without them. Transactions are usually required when dealing with sensitive data such as financial information. Using transactions introduces overhead. Here is an example of how to disable it in a service:

class CountryService {
    static transactional = false
}

How To Force A Rollback

One of the most important things to remember is what code to write to force Grails to roll back the current succession of operations. To do that, just raise a RuntimeException or a descendant of it. For example, this will roll back the operation accountFrom.balance = accountFrom.balance - amount:

class AccountService {

    def transferFunds(long accountFromID, long accountToID, long amount) {
        Account accountFrom = Account.get(accountFromID)
        Account accountTo = Account.get(accountToID)
        accountFrom.balance = accountFrom.balance - amount
        throw new RuntimeException("testing only")
        accountTo.balance = accountTo.balance + amount
    }
}

But this code will not:

class AccountService {

    def transferFunds(long accountFromID, long accountToID, long amount) {
        Account accountFrom = Account.get(accountFromID)
        Account accountTo = Account.get(accountToID)
        accountFrom.balance = accountFrom.balance - amount
        throw new Exception("testing only")
        accountTo.balance = accountTo.balance + amount
    }
}

In other words, a plain (checked) Exception does not trigger a rollback: the line with the subtraction will be committed, while the line with the addition is never reached. (If you do need a checked exception to trigger a rollback, see the short note right after this article.)

Reference: Grails Tutorial for Beginners – Grails Service Layer from our JCG partner Jonathan Tan at the Grails cookbook blog....
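Following up on the rollback note above: Grails service transactions are built on Spring's transaction infrastructure, so if you really do need a checked exception to roll the transaction back as well, the usual escape hatch is the rollbackFor attribute of Spring's @Transactional. The class below is an illustrative sketch in plain Java/Spring terms (all names invented for the example), not code from the tutorial:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

class TransferRejectedException extends Exception {
    TransferRejectedException(String message) {
        super(message);
    }
}

@Service
public class TransferService {

    // rollbackFor makes this checked exception behave like a RuntimeException
    // as far as the transaction is concerned: the whole method is rolled back.
    @Transactional(rollbackFor = TransferRejectedException.class)
    public void transferFunds(long accountFromId, long accountToId, long amount)
            throws TransferRejectedException {
        debit(accountFromId, amount);
        if (amount <= 0) {
            throw new TransferRejectedException("Amount must be positive");
        }
        credit(accountToId, amount);
    }

    private void debit(long accountId, long amount) { /* update the source account */ }

    private void credit(long accountId, long amount) { /* update the target account */ }
}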
JDK 9 – a letter to Santa?!

As everybody knows, winter (especially the time before Christmas) is a proper time for dreaming and hoping, a moment when dreams seem to be touchable. A moment when children and grown-ups write on paper, or in their thoughts, fictive or real letters to Santa Claus, hoping their dreams will become reality. This is catchy, as even the people behind OpenJDK expressed their wishes for the future of Java on the first day of December, when they published an updated list of JEPs. Hold on, don't get excited just yet… as we bitterly know, they might become reality somewhere in early 2016. Or at least this is the plan, and history has shown us what sticking to a plan means. Of course, the presence of a JEP in the above-mentioned list doesn't mean that the final release will contain it, as the JEP process diagram clearly explains, but for the sake of winter fairy tales we will go through the list and provide a brief description of what the intended purpose of each item is. Disclaimer: the list of JEPs is a moving target; since the publication of this article the list has changed at least once.

JEP 102: Process API Updates

Those of you who were not that good: it seems that Santa punished you and you had the pleasure of working with Java's process API and, of course, met its limitations. After the changes in JDK 7, the current JEP comes to improve this API even further and to give us the ability:

- to get the pid (or equivalent) of the current Java virtual machine and the pid of processes created with the existing API
- to get/set the process name of the current Java virtual machine and processes created with the existing API (where possible)
- to enumerate Java virtual machines and processes on the system; information on each process may include its pid, name, state, and perhaps resource usage
- to deal with process trees, in particular some means to destroy a process tree
- to deal with hundreds of sub-processes, perhaps multiplexing the output or error streams to avoid creating a thread per sub-process

I don't know about you, but I can definitely find at least a couple of scenarios where I could put some of these features to good use, so fingers crossed.
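To make the first goal a bit more tangible: the API shape was not final at the time of writing, but assuming it lands roughly along the lines of the ProcessHandle type discussed for JDK 9, getting the pid of the current JVM and controlling a spawned child's process tree could look something like this (a sketch, not the final API):

import java.io.IOException;

public class ProcessApiSketch {

    public static void main(String[] args) throws IOException {
        // Pid of the JVM we are running in.
        long myPid = ProcessHandle.current().pid();
        System.out.println("Current JVM pid: " + myPid);

        // Pid of a child process created through the existing API.
        Process child = new ProcessBuilder("sleep", "5").start();
        System.out.println("Child pid: " + child.toHandle().pid());

        // Destroy the child and everything it spawned (the "process tree" goal).
        child.toHandle().descendants().forEach(ProcessHandle::destroy);
        child.destroy();
    }
}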
JEP 143: Improve Contended Locking

I had the luck and pleasure of attending a performance workshop with Peter Lawrey the other day, and one of the rules of thumb of Java performance tuning was: the less concurrent an application is, the more performant it is. With this improvement in place, the rules of performance tuning might need another rule of thumb, as this JEP targets the latency of using monitors in Java. To be more accurate, the targets are:

- Field reordering and cache line alignment
- Speed up PlatformEvent::unpark()
- Fast Java monitor enter operations
- Fast Java monitor exit operations
- Fast Java monitor notify/notifyAll operations
- Adaptive spin improvements and SpinPause on SPARC

JEP 158: Unified JVM Logging

The title kind of says it all. If you are working with enterprise applications, you have had to deal at least once or twice with a GC log, and I suppose you raised at least an eyebrow (if not both) when seeing the amount of information and the way it was presented there. Well, if you were "lucky" enough, you probably migrated between JVM versions, and then definitely wanted/needed another two eyebrows to raise when you realised that the parsers you had built for the previous version have issues dealing with the current version of the JVM logging. I suppose I could continue with why this is bad, but let's concentrate on the improvements, so hopefully by the next release we will have a reason to complain that it was better before. The GC logging seems to try to align with the other logging frameworks we might be used to, like Log4j. So, it will work on different levels from the perspective of the logged information's criticality (error, warning, info, debug, trace), the performance target being that error and warning have no performance impact on production environments, info is suitable for production environments, while debug and trace have no performance requirements. A default log line will look as follows:

[gc][info][6.456s] Old collection complete

In order to ensure flexibility, the logging mechanisms will be tunable through JVM parameters, the intention being to have a unified approach to them. For backwards-compatibility purposes, the already existing JVM flags will be mapped to new flags wherever possible. To be as suitable as possible for realtime applications, the logging can be manipulated through the jcmd command or MBeans. The sole and probably biggest downside of this JEP is that it targets only providing the logging mechanisms and doesn't necessarily mean that the logs themselves will also improve. For the beautiful logs we dream of, maybe we need to wait a little bit longer.

JEP 165: Compiler Control

As you probably know, the Java platform uses JIT compilers to ensure an optimum run of the written application. The two existing compilers, intuitively named C1 and C2, correspond to client (-client option) and server-side (-server option) applications, respectively. The expressed goals of this JEP are to increase the manageability of these compilers:

- Fine-grained and method-context dependent control of the JVM compilers (C1 and C2).
- The ability to change the JVM compiler control options at run time.
- No performance degradation.

JEP 197: Segmented Code Cache

It seems that JVM performance is targeted in the future Java release, as the current JEP is intended to optimise the code cache. The goals are:

- Separate non-method, profiled, and non-profiled code
- Shorter sweep times due to specialized iterators that skip non-method code
- Improve execution time for some compilation-intensive benchmarks
- Better control of JVM memory footprint
- Decrease fragmentation of highly-optimized code
- Improve code locality, because code of the same type is likely to be accessed close in time
- Better iTLB and iCache behavior
- Establish a base for future extensions
- Improved management of heterogeneous code; for example, Sumatra (GPU code) and AOT compiled code
- Possibility of fine-grained locking per code heap
- Future separation of code and metadata (see JDK-7072317)

The first two declared goals are, from my perspective, quite exciting: with the two in place, the sweep times of the code cache can be greatly improved by simply skipping the non-method areas, areas that should exist for the entire runtime of the JVM.

JEP 198: Light-Weight JSON API

The presence of this improvement shouldn't be a surprise, but for me it is surprising that it didn't make it into the JDK sooner, as JSON has replaced XML as the "lingua franca" of the web, not only for reactive JS front-ends but also for structuring the data in NoSQL databases. The declared goals of this JEP are:

- Parsing and generation of JSON RFC 7159.
- Functionality that meets the needs of Java developers using JSON.
- Parsing APIs which allow a choice of parsing token stream, event (includes document hierarchy context) stream, or immutable tree representation views of JSON documents and data streams.
- Useful API subset for compact profiles and Java ME.
- Immutable value tree construction using a Builder-style API.
- Generator-style API for JSON data stream output and for JSON "literals".
- A transformer API, which takes as input an existing value tree and produces a new value tree as result.

Also, the intention is to align with JSR 353. Even if the future JSON API will have limited functionality compared to the already existing libraries, it has the competitive advantage of integrating with and using the newly added features from JDK 8, like streams and lambdas.
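The JEP itself does not publish final signatures, but since it intends to align with JSR 353, the builder style it describes can already be previewed with the existing javax.json API. A small illustrative sketch (not the JEP's own API, and it needs a JSON-P implementation on the classpath to run):

import javax.json.Json;
import javax.json.JsonObject;

public class JsonBuilderSketch {

    public static void main(String[] args) {
        // Immutable value tree construction using a builder-style API,
        // as sketched by the JEP's goals.
        JsonObject wish = Json.createObjectBuilder()
                .add("jep", 198)
                .add("title", "Light-Weight JSON API")
                .add("delivered", false)
                .build();

        System.out.println(wish.toString());
    }
}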
JEP 199: Smart Java Compilation, Phase Two

sjavac is a wrapper around the already famous javac, a wrapper intended to bring improved performance when compiling big projects. As the project in its current phase has stability and portability issues, the main goal is to fix those issues and probably make it the default build tool for the JDK project. The stretch goal would be to make the tool ready to use for projects other than the JDK and probably to integrate it with the existing toolchain.

JEP 201: Modular Source Code

The first steps in the direction of project Jigsaw's implementation, with the intention of reorganising the source code as modules, enhancing the build tool for module building, and respecting module boundaries.

JEP 211: Elide Deprecation Warnings on Import Statements

The goal of this JEP is to facilitate making large code bases clean of lint warnings. The deprecation warnings on imports cannot be suppressed using the @SuppressWarnings annotation, unlike uses of deprecated members in code. In large code bases like that of the JDK, deprecated functionality must often be supported for some time, and merely importing a deprecated construct does not justify a warning message if all the uses of the deprecated construct are intentional and suppressed.

JEP 212: Resolve Lint and Doclint Warnings

As the launch date for JDK 9 is early 2016, this JEP is perfect for that time of the year and the corresponding chores: the spring clean-up. Its main goal is to have a clean compile under javac's lint option (-Xlint:all) for at least the fundamental packages of the platform.

JEP 213: Milling Project Coin

Project Coin's target, starting with JDK 7, was to bring some syntactic sugar into the Java language, to put some new clothes on the mature platform. Even if it didn't bring any improvements to the performance of the language, it increased the readability of the code, and hence brought a plus to one of the most important assets of a software project, in my opinion: a more readable code base. This JEP targets four changes:

- Allow @SafeVarargs on private instance methods.
- Allow effectively-final variables to be used as resources in the try-with-resources statement.
- Allow diamond with inner classes if the argument type of the inferred type is denotable.
- Complete the removal, begun in Java SE 8, of underscore from the set of legal identifier names.

JEP 214: Remove GC Combinations Deprecated in JDK 8

Springtime cleaning continues with the removal of the JVM flags deprecated in the Java 8 release, so with release 9 the following options will no longer be supported:

DefNew + CMS : -XX:-UseParNewGC -XX:+UseConcMarkSweepGC
ParNew + SerialOld : -XX:+UseParNewGC
ParNew + iCMS : -XX:+CMSIncrementalMode -XX:+UseConcMarkSweepGC
ParNew + iCMS : -Xincgc
DefNew + iCMS : -XX:+CMSIncrementalMode -XX:+UseConcMarkSweepGC -XX:-UseParNewGC
CMS foreground : -XX:+UseCMSCompactAtFullCollection
CMS foreground : -XX:+CMSFullGCsBeforeCompaction
CMS foreground : -XX:+UseCMSCollectionPassing

JEP 216: Process Import Statements Correctly

This JEP targets fixing javac to properly accept and reject programs regardless of the order of import statements and extends and implements clauses.

JEP 219: Datagram Transport Layer Security (DTLS)

An increasing number of application-layer protocols have been designed to use UDP transport; in particular, protocols such as the Session Initiation Protocol (SIP) and electronic gaming protocols have made security concerns higher than ever, especially since TLS can be used only over reliable protocols like TCP. The current JEP intends to fill this gap by defining an API for Datagram Transport Layer Security (DTLS) version 1.0 (RFC 4347) and 1.2 (RFC 6347).

JEP 220: Modular Run-Time Images

Comes as a follow-up step to JEP 201, with the intention to restructure the JDK and run-time environment to accommodate modules and to improve performance, security, and maintainability. It defines a new URI scheme for naming the modules, classes, and resources stored in a run-time image without revealing the internal structure or format of the image, and revises existing specifications as required to accommodate these changes.

JEP 224: HTML5 Javadoc

As the HTML standard has reached version 5, the javadoc pages of the JDK need to keep up the pace as well, hence the upgrade from HTML 4.01.

JEP 231: Remove Launch-Time JRE Version Selection

Remove the ability to request (by using -version:), at JRE launch time, a version of the JRE that is not the JRE being launched. The removal will be done stepwise: a warning will be emitted in version 9, while Java 10 will probably throw an error.

This is the current form of the list of enhancements prepared for JDK 9. To be honest, when I first looked over it I was somewhat blue, but after reading more into it I became rather excited, as it seems that Java is about to set off on another adventure and they need all the help they can get. So if you want to get involved (please do!), a later blog post of the Java Advent series will show you how. Imagine it like the fellowship of the ring, but the target of the adventure is building Java, not destroying the ring… who might Mr. Frodo be?

Reference: JDK 9 – a letter to Santa?! from our JCG partner Olimpiu Pop at the Java Advent Calendar blog....
Java EE 7 Hands-on Lab on WildFly and Docker

Java EE 7 Hands-on Lab has been delivered all around the world and is a pretty standard application that shows design patterns and anti-patterns for a typical Java EE 7 application. It shows how the following technologies can be used in a close-to-real-world application:

- WebSocket 1.0
- JSON Processing 1.0
- Batch 1.0
- Contexts & Dependency Injection 1.1
- Java Message Service 2.0
- Java API for RESTful Services 2.0
- Java Persistence API 2.0
- Enterprise JavaBeans 3.1
- JavaServer Faces 2.2

However, the lab requires you to download NetBeans (Java EE 7 tooling) and WildFly or GlassFish (Java EE 7 runtime). If you don't want to follow the instructions and create the app, a pre-built solution zip file is available. But this still requires you to download Maven and build the app. You still have to download the runtime, which is pretty straightforward for WildFly, but still an extra task. The Maven step can be removed by using a pre-built WAR file, but the runtime is still required. Docker containers allow you to simplify application delivery by packaging all the key components together in an image. So how do you get a first feel of the Java EE 7 hands-on lab with Docker? If you are new to Docker, Tech Tip #39 provides more background and details on how to get started. After the initial setup, you can pull the Docker image that contains WildFly and a pre-built Java EE 7 hands-on lab WAR file as shown:

docker pull arungupta/javaee7-hol

And then you can run it as:

docker run -it -p 80:8080 arungupta/javaee7-hol

Find out the IP address where your container is hosted using the boot2docker ip command. And now access your Java EE 7 application at http://<IP>/movieplex7. Here is the complete log shown by the Docker container:

javaee7-hol> docker run -it -p 80:8080 arungupta/javaee7-hol =========================================================================JBoss Bootstrap EnvironmentJBOSS_HOME: /opt/jboss/wildflyJAVA: /usr/lib/jvm/java/bin/javaJAVA_OPTS: -server -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true=========================================================================22:24:12,214 INFO [org.jboss.modules] (main) JBoss Modules version 1.3.3.Final 22:24:12,463 INFO [org.jboss.msc] (main) JBoss MSC version 1.2.2.Final 22:24:12,541 INFO [org.jboss.as] (MSC service thread 1-7) JBAS015899: WildFly 8.2.0.Final "Tweek" starting 22:24:13,566 INFO [org.jboss.as.server] (Controller Boot Thread) JBAS015888: Creating http management service using socket-binding (management-http) 22:24:13,586 INFO [org.xnio] (MSC service thread 1-9) XNIO version 3.3.0.Final 22:24:13,595 INFO [org.xnio.nio] (MSC service thread 1-9) XNIO NIO Implementation Version 3.3.0.Final 22:24:13,623 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 35) JBAS010280: Activating Infinispan subsystem. 
22:24:13,631 INFO [org.jboss.as.jacorb] (ServerService Thread Pool -- 36) JBAS016300: Activating JacORB Subsystem 22:24:13,650 INFO [org.wildfly.extension.io] (ServerService Thread Pool -- 34) WFLYIO001: Worker 'default' has auto-configured to 16 core threads with 128 task threads based on your 8 available processors 22:24:13,678 INFO [org.jboss.as.security] (ServerService Thread Pool -- 51) JBAS013171: Activating Security Subsystem 22:24:13,682 INFO [org.jboss.as.naming] (ServerService Thread Pool -- 46) JBAS011800: Activating Naming Subsystem 22:24:13,684 WARN [org.jboss.as.txn] (ServerService Thread Pool -- 52) JBAS010153: Node identifier property is set to the default value. Please make sure it is unique. 22:24:13,690 INFO [org.jboss.as.security] (MSC service thread 1-6) JBAS013170: Current PicketBox version=4.0.21.Final 22:24:13,694 INFO [org.jboss.as.connector.logging] (MSC service thread 1-10) JBAS010408: Starting JCA Subsystem (IronJacamar 1.1.9.Final) 22:24:13,706 INFO [org.jboss.as.jsf] (ServerService Thread Pool -- 42) JBAS012615: Activated the following JSF Implementations: [main] 22:24:13,764 INFO [org.jboss.as.webservices] (ServerService Thread Pool -- 54) JBAS015537: Activating WebServices Extension 22:24:13,826 INFO [org.wildfly.extension.undertow] (MSC service thread 1-14) JBAS017502: Undertow 1.1.0.Final starting 22:24:13,827 INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 53) JBAS017502: Undertow 1.1.0.Final starting 22:24:14,086 INFO [org.jboss.remoting] (MSC service thread 1-9) JBoss Remoting version 4.0.6.Final 22:24:14,087 INFO [org.jboss.as.connector.subsystems.datasources] (ServerService Thread Pool -- 30) JBAS010403: Deploying JDBC-compliant driver class org.h2.Driver (version 1.3) 22:24:14,092 INFO [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-13) JBAS010417: Started Driver service with driver-name = h2 22:24:14,126 INFO [org.jboss.as.naming] (MSC service thread 1-6) JBAS011802: Starting Naming Service 22:24:14,139 INFO [org.jboss.as.mail.extension] (MSC service thread 1-15) JBAS015400: Bound mail session 22:24:14,316 INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 53) JBAS017527: Creating file handler for path /opt/jboss/wildfly/welcome-content 22:24:14,343 INFO [org.wildfly.extension.undertow] (MSC service thread 1-9) JBAS017525: Started server default-server. 22:24:14,364 INFO [org.wildfly.extension.undertow] (MSC service thread 1-14) JBAS017531: Host default-host starting 22:24:14,433 WARN [jacorb.codeset] (MSC service thread 1-4) Warning - unknown codeset (ASCII) - defaulting to ISO-8859-1 22:24:14,462 INFO [org.wildfly.extension.undertow] (MSC service thread 1-9) JBAS017519: Undertow HTTP listener default listening on /0.0.0.0:8080 22:24:14,488 WARN [org.jboss.as.messaging] (MSC service thread 1-6) JBAS011600: AIO wasn't located on this platform, it will fall back to using pure Java NIO. 
If your platform is Linux, install LibAIO to enable the AIO journal 22:24:14,492 INFO [org.jboss.as.jacorb] (MSC service thread 1-4) JBAS016330: CORBA ORB Service started 22:24:14,566 INFO [org.hornetq.core.server] (ServerService Thread Pool -- 56) HQ221000: live server is starting with configuration HornetQ Configuration (clustered=false,backup=false,sharedStore=true,journalDirectory=/opt/jboss/wildfly/standalone/data/messagingjournal,bindingsDirectory=/opt/jboss/wildfly/standalone/data/messagingbindings,largeMessagesDirectory=/opt/jboss/wildfly/standalone/data/messaginglargemessages,pagingDirectory=/opt/jboss/wildfly/standalone/data/messagingpaging) 22:24:14,574 INFO [org.hornetq.core.server] (ServerService Thread Pool -- 56) HQ221006: Waiting to obtain live lock 22:24:14,623 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-8) JBAS010400: Bound data source 22:24:14,630 INFO [org.hornetq.core.server] (ServerService Thread Pool -- 56) HQ221013: Using NIO Journal 22:24:14,664 INFO [org.jboss.as.server.deployment.scanner] (MSC service thread 1-8) JBAS015012: Started FileSystemDeploymentService for directory /opt/jboss/wildfly/standalone/deployments 22:24:14,671 INFO [org.jboss.as.server.deployment] (MSC service thread 1-16) JBAS015876: Starting deployment of "movieplex7-1.0-SNAPSHOT.war" (runtime-name: "movieplex7-1.0-SNAPSHOT.war") 22:24:14,692 INFO [org.jboss.as.jacorb] (MSC service thread 1-15) JBAS016328: CORBA Naming Service started 22:24:14,737 INFO [io.netty.util.internal.PlatformDependent] (ServerService Thread Pool -- 56) Your platform does not provide complete low-level API for accessing direct buffers reliably. Unless explicitly requested, heap buffer will always be preferred to avoid potential system unstability. 
22:24:14,771 INFO [org.hornetq.core.server] (ServerService Thread Pool -- 56) HQ221043: Adding protocol support CORE 22:24:14,777 INFO [org.hornetq.core.server] (ServerService Thread Pool -- 56) HQ221043: Adding protocol support AMQP 22:24:14,780 INFO [org.hornetq.core.server] (ServerService Thread Pool -- 56) HQ221043: Adding protocol support STOMP 22:24:14,836 INFO [org.hornetq.core.server] (ServerService Thread Pool -- 56) HQ221034: Waiting to obtain live lock 22:24:14,837 INFO [org.hornetq.core.server] (ServerService Thread Pool -- 56) HQ221035: Live Server Obtained live lock 22:24:14,987 INFO [org.jboss.messaging] (MSC service thread 1-4) JBAS011615: Registered HTTP upgrade for hornetq-remoting protocol handled by http-acceptor-throughput acceptor 22:24:14,987 INFO [org.jboss.messaging] (MSC service thread 1-13) JBAS011615: Registered HTTP upgrade for hornetq-remoting protocol handled by http-acceptor acceptor 22:24:14,997 INFO [org.jboss.as.jpa] (MSC service thread 1-16) JBAS011401: Read persistence.xml for movieplex7PU 22:24:15,055 INFO [org.jboss.ws.common.management] (MSC service thread 1-9) JBWS022052: Starting JBoss Web Services - Stack CXF Server 4.3.2.Final 22:24:15,082 INFO [org.hornetq.core.server] (ServerService Thread Pool -- 56) HQ221007: Server is now live 22:24:15,083 INFO [org.hornetq.core.server] (ServerService Thread Pool -- 56) HQ221001: HornetQ Server version 2.4.5.FINAL (Wild Hornet, 124) [71b2a3db-7ccd-11e4-8f44-5d96c66ef94c] 22:24:15,084 INFO [org.jboss.as.jpa] (ServerService Thread Pool -- 57) JBAS011409: Starting Persistence Unit (phase 1 of 2) Service 'movieplex7-1.0-SNAPSHOT.war#movieplex7PU' 22:24:15,094 INFO [org.hornetq.core.server] (ServerService Thread Pool -- 56) HQ221003: trying to deploy queue jms.queue.ExpiryQueue 22:24:15,100 INFO [org.hibernate.jpa.internal.util.LogHelper] (ServerService Thread Pool -- 57) HHH000204: Processing PersistenceUnitInfo [ name: movieplex7PU ...] 22:24:15,161 INFO [org.jboss.as.messaging] (ServerService Thread Pool -- 56) JBAS011601: Bound messaging object to jndi name java:/jms/queue/ExpiryQueue 22:24:15,177 INFO [org.jboss.as.messaging] (ServerService Thread Pool -- 59) JBAS011601: Bound messaging object to jndi name java:/ConnectionFactory 22:24:15,180 INFO [org.hornetq.core.server] (ServerService Thread Pool -- 60) HQ221003: trying to deploy queue jms.queue.DLQ 22:24:15,183 INFO [org.jboss.as.messaging] (ServerService Thread Pool -- 60) JBAS011601: Bound messaging object to jndi name java:/jms/queue/DLQ 22:24:15,193 INFO [org.hornetq.jms.server] (ServerService Thread Pool -- 58) HQ121005: Invalid "host" value "0.0.0.0" detected for "http-connector" connector. Switching to "e953a86d3fc0". If this new address is incorrect please manually configure the connector to use the proper one. 
22:24:15,194 INFO [org.hibernate.Version] (ServerService Thread Pool -- 57) HHH000412: Hibernate Core {4.3.7.Final} 22:24:15,194 INFO [org.jboss.as.messaging] (ServerService Thread Pool -- 58) JBAS011601: Bound messaging object to jndi name java:jboss/exported/jms/RemoteConnectionFactory 22:24:15,197 INFO [org.hibernate.cfg.Environment] (ServerService Thread Pool -- 57) HHH000206: hibernate.properties not found 22:24:15,198 INFO [org.hibernate.cfg.Environment] (ServerService Thread Pool -- 57) HHH000021: Bytecode provider name : javassist 22:24:15,200 INFO [org.jboss.as.connector.deployment] (MSC service thread 1-12) JBAS010406: Registered connection factory java:/JmsXA 22:24:15,234 INFO [org.hornetq.ra] (MSC service thread 1-12) HornetQ resource adaptor started 22:24:15,235 INFO [org.jboss.as.connector.services.resourceadapters.ResourceAdapterActivatorService$ResourceAdapterActivator] (MSC service thread 1-12) IJ020002: Deployed: file://RaActivatorhornetq-ra 22:24:15,238 INFO [org.jboss.as.connector.deployment] (MSC service thread 1-11) JBAS010401: Bound JCA ConnectionFactory 22:24:15,238 INFO [org.jboss.as.messaging] (MSC service thread 1-6) JBAS011601: Bound messaging object to jndi name java:jboss/DefaultJMSConnectionFactory 22:24:15,355 INFO [org.jboss.weld.deployer] (MSC service thread 1-11) JBAS016002: Processing weld deployment movieplex7-1.0-SNAPSHOT.war 22:24:15,405 INFO [org.hibernate.validator.internal.util.Version] (MSC service thread 1-11) HV000001: Hibernate Validator 5.1.3.Final 22:24:15,477 INFO [org.jboss.as.ejb3.deployment.processors.EjbJndiBindingsDeploymentUnitProcessor] (MSC service thread 1-11) JNDI bindings for session bean named ShowTimingFacadeREST in deployment unit deployment "movieplex7-1.0-SNAPSHOT.war" are as follows:java:global/movieplex7-1.0-SNAPSHOT/ShowTimingFacadeREST!org.javaee7.movieplex7.rest.ShowTimingFacadeREST java:app/movieplex7-1.0-SNAPSHOT/ShowTimingFacadeREST!org.javaee7.movieplex7.rest.ShowTimingFacadeREST java:module/ShowTimingFacadeREST!org.javaee7.movieplex7.rest.ShowTimingFacadeREST java:global/movieplex7-1.0-SNAPSHOT/ShowTimingFacadeREST java:app/movieplex7-1.0-SNAPSHOT/ShowTimingFacadeREST java:module/ShowTimingFacadeREST22:24:15,478 INFO [org.jboss.as.ejb3.deployment.processors.EjbJndiBindingsDeploymentUnitProcessor] (MSC service thread 1-11) JNDI bindings for session bean named TheaterFacadeREST in deployment unit deployment "movieplex7-1.0-SNAPSHOT.war" are as follows:java:global/movieplex7-1.0-SNAPSHOT/TheaterFacadeREST!org.javaee7.movieplex7.rest.TheaterFacadeREST java:app/movieplex7-1.0-SNAPSHOT/TheaterFacadeREST!org.javaee7.movieplex7.rest.TheaterFacadeREST java:module/TheaterFacadeREST!org.javaee7.movieplex7.rest.TheaterFacadeREST java:global/movieplex7-1.0-SNAPSHOT/TheaterFacadeREST java:app/movieplex7-1.0-SNAPSHOT/TheaterFacadeREST java:module/TheaterFacadeREST22:24:15,478 INFO [org.jboss.as.ejb3.deployment.processors.EjbJndiBindingsDeploymentUnitProcessor] (MSC service thread 1-11) JNDI bindings for session bean named MovieFacadeREST in deployment unit deployment "movieplex7-1.0-SNAPSHOT.war" are as follows:java:global/movieplex7-1.0-SNAPSHOT/MovieFacadeREST!org.javaee7.movieplex7.rest.MovieFacadeREST java:app/movieplex7-1.0-SNAPSHOT/MovieFacadeREST!org.javaee7.movieplex7.rest.MovieFacadeREST java:module/MovieFacadeREST!org.javaee7.movieplex7.rest.MovieFacadeREST java:global/movieplex7-1.0-SNAPSHOT/MovieFacadeREST java:app/movieplex7-1.0-SNAPSHOT/MovieFacadeREST java:module/MovieFacadeREST22:24:15,479 INFO 
[org.jboss.as.ejb3.deployment.processors.EjbJndiBindingsDeploymentUnitProcessor] (MSC service thread 1-11) JNDI bindings for session bean named SalesFacadeREST in deployment unit deployment "movieplex7-1.0-SNAPSHOT.war" are as follows:java:global/movieplex7-1.0-SNAPSHOT/SalesFacadeREST!org.javaee7.movieplex7.rest.SalesFacadeREST java:app/movieplex7-1.0-SNAPSHOT/SalesFacadeREST!org.javaee7.movieplex7.rest.SalesFacadeREST java:module/SalesFacadeREST!org.javaee7.movieplex7.rest.SalesFacadeREST java:global/movieplex7-1.0-SNAPSHOT/SalesFacadeREST java:app/movieplex7-1.0-SNAPSHOT/SalesFacadeREST java:module/SalesFacadeREST22:24:15,479 INFO [org.jboss.as.ejb3.deployment.processors.EjbJndiBindingsDeploymentUnitProcessor] (MSC service thread 1-11) JNDI bindings for session bean named TimeslotFacadeREST in deployment unit deployment "movieplex7-1.0-SNAPSHOT.war" are as follows:java:global/movieplex7-1.0-SNAPSHOT/TimeslotFacadeREST!org.javaee7.movieplex7.rest.TimeslotFacadeREST java:app/movieplex7-1.0-SNAPSHOT/TimeslotFacadeREST!org.javaee7.movieplex7.rest.TimeslotFacadeREST java:module/TimeslotFacadeREST!org.javaee7.movieplex7.rest.TimeslotFacadeREST java:global/movieplex7-1.0-SNAPSHOT/TimeslotFacadeREST java:app/movieplex7-1.0-SNAPSHOT/TimeslotFacadeREST java:module/TimeslotFacadeREST22:24:15,679 INFO [org.jboss.as.messaging] (MSC service thread 1-10) JBAS011601: Bound messaging object to jndi name java:global/jms/pointsQueue 22:24:15,753 INFO [org.jboss.weld.deployer] (MSC service thread 1-3) JBAS016005: Starting Services for CDI deployment: movieplex7-1.0-SNAPSHOT.war 22:24:15,787 INFO [org.jboss.weld.Version] (MSC service thread 1-3) WELD-000900: 2.2.6 (Final) 22:24:15,823 INFO [org.hornetq.core.server] (ServerService Thread Pool -- 57) HQ221003: trying to deploy queue jms.queue.movieplex7-1.0-SNAPSHOT_movieplex7-1.0-SNAPSHOT_movieplex7-1.0-SNAPSHOT_java:global/jms/pointsQueue 22:24:15,825 INFO [org.jboss.weld.deployer] (MSC service thread 1-6) JBAS016008: Starting weld service for deployment movieplex7-1.0-SNAPSHOT.war 22:24:15,995 INFO [org.jboss.as.jpa] (ServerService Thread Pool -- 57) JBAS011409: Starting Persistence Unit (phase 2 of 2) Service 'movieplex7-1.0-SNAPSHOT.war#movieplex7PU' 22:24:16,097 INFO [org.hibernate.annotations.common.Version] (ServerService Thread Pool -- 57) HCANN000001: Hibernate Commons Annotations {4.0.4.Final} 22:24:16,403 INFO [org.hibernate.dialect.Dialect] (ServerService Thread Pool -- 57) HHH000400: Using dialect: org.hibernate.dialect.H2Dialect 22:24:16,409 WARN [org.hibernate.dialect.H2Dialect] (ServerService Thread Pool -- 57) HHH000431: Unable to determine H2 database version, certain features may not work 22:24:16,551 INFO [org.hibernate.hql.internal.ast.ASTQueryTranslatorFactory] (ServerService Thread Pool -- 57) HHH000397: Using ASTQueryTranslatorFactory 22:24:17,005 INFO [org.hibernate.dialect.Dialect] (ServerService Thread Pool -- 57) HHH000400: Using dialect: org.hibernate.dialect.H2Dialect 22:24:17,006 WARN [org.hibernate.dialect.H2Dialect] (ServerService Thread Pool -- 57) HHH000431: Unable to determine H2 database version, certain features may not work 22:24:17,014 WARN [org.hibernate.jpa.internal.schemagen.GenerationTargetToDatabase] (ServerService Thread Pool -- 57) Unable to execute JPA schema generation drop command [DROP TABLE SALES] 22:24:17,015 WARN [org.hibernate.jpa.internal.schemagen.GenerationTargetToDatabase] (ServerService Thread Pool -- 57) Unable to execute JPA schema generation drop command [DROP TABLE POINTS] 22:24:17,015 WARN 
[org.hibernate.jpa.internal.schemagen.GenerationTargetToDatabase] (ServerService Thread Pool -- 57) Unable to execute JPA schema generation drop command [DROP TABLE SHOW_TIMING] 22:24:17,015 WARN [org.hibernate.jpa.internal.schemagen.GenerationTargetToDatabase] (ServerService Thread Pool -- 57) Unable to execute JPA schema generation drop command [DROP TABLE MOVIE] 22:24:17,016 WARN [org.hibernate.jpa.internal.schemagen.GenerationTargetToDatabase] (ServerService Thread Pool -- 57) Unable to execute JPA schema generation drop command [DROP TABLE TIMESLOT] 22:24:17,016 WARN [org.hibernate.jpa.internal.schemagen.GenerationTargetToDatabase] (ServerService Thread Pool -- 57) Unable to execute JPA schema generation drop command [DROP TABLE THEATER] 22:24:18,043 INFO [io.undertow.websockets.jsr] (MSC service thread 1-10) UT026003: Adding annotated server endpoint class org.javaee7.movieplex7.chat.ChatServer for path /websocket 22:24:18,144 INFO [javax.enterprise.resource.webcontainer.jsf.config] (MSC service thread 1-10) Initializing Mojarra 2.2.8-jbossorg-1 20140822-1131 for context '/movieplex7' 22:24:18,593 INFO [javax.enterprise.resource.webcontainer.jsf.config] (MSC service thread 1-10) Monitoring file:/opt/jboss/wildfly/standalone/tmp/vfs/temp/tempcccef9c92c7b9e85/movieplex7-1.0-SNAPSHOT.war-fb17dc17ca73dfb8/WEB-INF/faces-config.xml for modifications 22:24:19,013 INFO [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-10) Deploying javax.ws.rs.core.Application: class org.javaee7.movieplex7.rest.ApplicationConfig 22:24:19,013 INFO [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-10) Adding provider class org.javaee7.movieplex7.json.MovieWriter from Application class org.javaee7.movieplex7.rest.ApplicationConfig 22:24:19,013 INFO [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-10) Adding class resource org.javaee7.movieplex7.rest.SalesFacadeREST from Application class org.javaee7.movieplex7.rest.ApplicationConfig 22:24:19,014 INFO [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-10) Adding class resource org.javaee7.movieplex7.rest.MovieFacadeREST from Application class org.javaee7.movieplex7.rest.ApplicationConfig 22:24:19,014 INFO [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-10) Adding class resource org.javaee7.movieplex7.rest.ShowTimingFacadeREST from Application class org.javaee7.movieplex7.rest.ApplicationConfig 22:24:19,014 INFO [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-10) Adding provider class org.javaee7.movieplex7.json.MovieReader from Application class org.javaee7.movieplex7.rest.ApplicationConfig 22:24:19,014 INFO [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-10) Adding class resource org.javaee7.movieplex7.rest.TimeslotFacadeREST from Application class org.javaee7.movieplex7.rest.ApplicationConfig 22:24:19,014 INFO [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-10) Adding class resource org.javaee7.movieplex7.rest.TheaterFacadeREST from Application class org.javaee7.movieplex7.rest.ApplicationConfig 22:24:19,118 INFO [org.wildfly.extension.undertow] (MSC service thread 1-10) JBAS017534: Registered web context: /movieplex7 22:24:19,166 INFO [org.jboss.as.server] (ServerService Thread Pool -- 31) JBAS018559: Deployed "movieplex7-1.0-SNAPSHOT.war" (runtime-name : "movieplex7-1.0-SNAPSHOT.war") 22:24:19,187 INFO [org.jboss.as] (Controller Boot Thread) JBAS015961: Http management interface listening on http://127.0.0.1:9990/management 
22:24:19,188 INFO [org.jboss.as] (Controller Boot Thread) JBAS015951: Admin console listening on http://127.0.0.1:9990 22:24:19,188 INFO [org.jboss.as] (Controller Boot Thread) JBAS015874: WildFly 8.2.0.Final "Tweek" started in 7285ms - Started 400 of 452 services (104 services are lazy, passive or on-demand)

The source code for this Dockerfile is pretty straightforward and lives at github.com/arun-gupta/docker-images/blob/master/javaee7-hol/Dockerfile. (For a quick taste of the lab's WebSocket piece registered in the log above, see the short sketch after the reference below.)

Enjoy!

Reference: Java EE 7 Hands-on Lab on WildFly and Docker from our JCG partner Arun Gupta at the Miles to go 2.0 … blog....
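One detail worth calling out from the log: the line "Adding annotated server endpoint class org.javaee7.movieplex7.chat.ChatServer for path /websocket" is the WebSocket 1.0 part of the lab being registered. For readers who have not gone through the lab, a minimal JSR 356 annotated endpoint looks roughly like this (an illustrative sketch, not the actual ChatServer class):

import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/websocket")
public class ChatEndpoint {

    @OnOpen
    public void onOpen(Session session) {
        System.out.println("Session opened: " + session.getId());
    }

    @OnMessage
    public String onMessage(String message, Session session) {
        // Echo the chat message back to the sender; a real chat server
        // would broadcast it to session.getOpenSessions() instead.
        return "Received: " + message;
    }
}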
Writing your own logging service?

Application logging is one of those things, like the favorite-editor war: everyone has their own opinions and there are endless implementations and flavors out there. Nowadays, you would likely want to use something already available such as Log4j or Logback. Even the JDK has a built-in "java.util.logging" implementation. To avoid coupling to a specific logger, many projects opt to use a facade interface, and there are already a couple of good ones out there, such as SLF4J or Apache Commons Logging. Despite all this, many project owners still want to try to write their own logger service! I wondered, if I were asked to write one myself, what would it be like? So I played around and came up with this simple facade that wraps one of the logger providers (the JDK logger in this case), and you can check it out here. With my logger, you can use it like this in your application:

import zemian.service.logging.*;

class MyService {
    Log log = LogFactory.createLog(MyService.class);

    public void run() {
        log.info(Message.msg("%s service is running now.", this));
    }
}

Some principles I followed when trying this out:

- Use simple names for the different levels of messages: error, warn, info, debug and trace (no crazy fine, finer and finest level names).
- Separate the Log service from the implementation so you can swap the provider.
- Use a Message logging POJO for data encapsulation. It simplifies the log service interface.
- Use log parameters and lazy format binding to construct log messages, to improve performance (a small sketch of this appears at the end of this article).

Do not go crazy with the logging service implementation and make it complex. For example, I recommend NOT mixing business logic or data into your logging if possible! If you need custom error codes to be logged, for example, you can write your own Exception class and encapsulate them there, and then let the logging service do its job: just logging. Here are some general rules about using a logger in your application that I recommend:

Use ERROR log messages when there is really an error! Try not to log an "acceptable" error message in your application. Treat an ERROR as a critical problem in your application: if it happens in production, someone should be paged to take care of the problem immediately. Each message should have a full Java stacktrace! Some applications might want to assign a unique Error Code to this level of message for easier identification and troubleshooting.

Use WARN log messages if it's a problem that's ignorable during production operation, but not a good idea to suppress. These will likely point to potential problems in your application or environment. Each message should have a full Java stacktrace, if available that is!

Use INFO log messages for admin operators or application-monitoring people to see how your application is doing: high-level application status or some important and meaningful business information indicators, etc. Do not litter your log with developer messages and unnecessary, verbose, unclear messages. Each message should be written as a clear sentence so operators know it's meaningful.

Use DEBUG log messages for developers to see and troubleshoot the application. Use this at critical application junctions and operations to show object and service states, etc. Try not to add repeated loop info messages here and litter your log content.

Use TRACE log messages for developers to troubleshoot tight loops and high-traffic message information.

You should select a logger provider that lets you configure and turn these logging levels ON or OFF (preferably at runtime as well). 
Each level should be able to automatically suppress all levels below it. And of course you want a logger provider that can handle log message output to STDOUT and/or to a FILE as a destination as well.

Reference: Writing your own logging service? from our JCG partner Zemian Deng at the A Programmer's Journal blog....
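To make the lazy format binding principle above concrete: the point is that the message arguments are only formatted if the level is actually enabled. The author's real code is linked in the article; the classes below are only a rough sketch of that shape, wrapping java.util.logging, with all names invented for the illustration:

import java.util.logging.Level;
import java.util.logging.Logger;

// A tiny message holder: the format string and args are kept as-is
// and only combined when a handler really needs the text.
final class LogMessage {
    private final String format;
    private final Object[] args;

    private LogMessage(String format, Object... args) {
        this.format = format;
        this.args = args;
    }

    static LogMessage msg(String format, Object... args) {
        return new LogMessage(format, args);
    }

    String format() {
        return String.format(format, args);
    }
}

// Facade over the JDK logger; swap the method bodies to target Log4j or Logback instead.
final class SimpleLog {
    private final Logger logger;

    SimpleLog(Class<?> type) {
        this.logger = Logger.getLogger(type.getName());
    }

    void info(LogMessage message) {
        if (logger.isLoggable(Level.INFO)) {   // lazy binding: format only when enabled
            logger.info(message.format());
        }
    }

    void error(LogMessage message, Throwable error) {
        if (logger.isLoggable(Level.SEVERE)) {
            logger.log(Level.SEVERE, message.format(), error);
        }
    }
}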