
What's New Here?


DevOps with Apache Tomcat/TomEE and Fabric8

DevOps is all about automating your build and release environment, reducing human error and manual tasks, and building up a pipeline to support continuous delivery so you can get quick feedback about your IT solutions. It takes communication between your development and operations teams, as well as discipline and commitment to automation. Part of that automation is deploying your applications to environments in a consistent way AND being able to build your Dev/QA/Prod environment from scratch in a few seconds. And that's where Fabric8 comes into the picture! Fabric8 is an open-source configuration and automation platform for middleware that runs in OSGi, Tomcat, Java EE, microservices, the cloud – anywhere, really. I've blogged a little about what Fabric8 is, but the best way to understand it is to download it and try it! In the latest beta release there is now support for running Apache Tomcat and TomEE, and both skinny and fat war deployments are supported. Here's a quick video to help you get started with Tomcat and Fabric8. A couple of highlights:

* You just create a war project
* Use the fabric8 maven plugin to upload your project to your instance of fabric8
* "Deploy" to a new container, which is Apache Tomcat
* Use the Tomcat and Apache Camel plugins for HawtIO to get introspection
* Use fabric:watch * for rapid deploys during development

Provision, Manage Tomcat with Fabric8 from Christian Posta on Vimeo.

Reference: DevOps with Apache Tomcat/TomEE and Fabric8 from our JCG partner Christian Posta at the Christian Posta – Software Blog blog....

Spring 4: @DateTimeFormat with Java 8 Date-Time API

The @DateTimeFormat annotation, introduced in Spring 3.0 as part of the Formatter SPI, can be used to parse and print localized field values in web applications. In Spring 4.0, @DateTimeFormat can be used with the Java 8 Date-Time API (java.time) out of the box, without extra effort. In Spring, field formatting can be configured by field type or by annotation. To bind an annotation to a formatter, an AnnotationFormatterFactory must be implemented. Spring 4.0 brings Jsr310DateTimeFormatAnnotationFormatterFactory, which formats Java 8 Date-Time fields annotated with @DateTimeFormat. The supported field types are: java.time.LocalDate, java.time.LocalTime, java.time.LocalDateTime, java.time.ZonedDateTime, java.time.OffsetDateTime and java.time.OffsetTime. One can utilize all of the mentioned types in a form like the one below:

public class DatesForm {

    @DateTimeFormat(iso = ISO.DATE)
    private LocalDate localDate;

    @DateTimeFormat(iso = ISO.TIME)
    private LocalTime localTime;

    @DateTimeFormat(iso = ISO.TIME)
    private OffsetTime offsetTime;

    @DateTimeFormat(iso = ISO.DATE_TIME)
    private LocalDateTime localDateTime;

    @DateTimeFormat(iso = ISO.DATE_TIME)
    private ZonedDateTime zonedDateTime;

    @DateTimeFormat(iso = ISO.DATE_TIME)
    private OffsetDateTime offsetDateTime;
}

The form can be passed to the view and Spring will take care of proper formatting of the fields. When specifying the formatting on fields of type java.time.LocalDate, java.time.LocalTime or java.time.OffsetTime, you need to remember to configure @DateTimeFormat properly. If you declare that such a field should be formatted as a full date-time then – since LocalDate represents only a date and the other two represent only a time – you will get a java.time.temporal.UnsupportedTemporalTypeException (e.g. Unsupported field: ClockHourOfAmPm, Unsupported field: MonthOfYear) thrown by java.time.format.DateTimeFormatter.

Reference: Spring 4: @DateTimeFormat with Java 8 Date-Time API from our JCG partner Rafal Borowiec at the Codeleak.pl blog....
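As a quick closing illustration (not from the original article), here is a minimal sketch of exposing such a form through a Spring MVC controller so that the @DateTimeFormat rules above apply on both rendering and binding; the mapping and view names are made up, and the sketch assumes DatesForm has the usual getters and setters:

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.validation.BindingResult;
import org.springframework.web.bind.annotation.ModelAttribute;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@Controller
@RequestMapping("/dates")
public class DatesController {

    @RequestMapping(method = RequestMethod.GET)
    public String showForm(Model model) {
        // Spring prints the java.time fields using the ISO patterns declared on DatesForm
        model.addAttribute("datesForm", new DatesForm());
        return "dates";
    }

    @RequestMapping(method = RequestMethod.POST)
    public String submit(@ModelAttribute("datesForm") DatesForm form, BindingResult result) {
        // On submit, the incoming ISO strings are parsed back into java.time objects;
        // parse failures show up as field errors in the BindingResult
        return result.hasErrors() ? "dates" : "redirect:/dates";
    }
}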

REST Maturity

In 2008, Leonard Richardson published his Maturity Heuristic that classified web services into three levels based on their use of URI, HTTP, and hypermedia. Back then, most web services were stuck at either level 1 or 2. Unfortunately, not a whole lot has improved since then in that respect: so-called pragmatic REST is still the norm.   BTW, I really dislike the term “pragmatic REST”. It’s a cheap rhetoric trick to put opponents (“dogmatists”) on the defensive. More importantly, it creates semantic diffusion: pragmatic REST is not actually REST according to the definition, so please don’t call it that way or else we’re going to have a hard time understanding each other. The term REST hardly means anything anymore these days.Anyway, there is some light at the end of the tunnel: more services are now at level 3, where they serve hypermedia. A good example by a big name is Amazon’s AppStream API. The difference between plain media types, like image/jpeg, and hypermedia types, like text/html, is of course the “hyper” part. Links allow a client to discover functionality without being coupled to the server’s URI structure. BTW, application/json is not a hypermedia type, since JSON doesn’t define links. We can, of course, use a convention on top of JSON, for instance that there should be a links property with a certain structure to describes the links, like Spring HATEOAS does. The problem with conventions is that they are out-of-band communication, and a client has no way of knowing for sure whether that convention is followed when it sees a Content-Type of application/json. It’s therefore much better to use a media type that turns the convention into a rule, like HAL does. Speaking of out-of-band communication, the amount of it steadily decreases as we move up the levels. This is a very good thing, as it reduces the amount of coupling between clients and servers. Level 3 isn’t really the end station, however. Even with a hypermedia format like HAL there is still a lot of out-of-band communication. HAL doesn’t tell you which HTTP method to use on a particular link, for instance. The client can only know because a human has programmed it with that knowledge, based on some human-readable description that was published somewhere. Imagine that the human Web would work this way. We wouldn’t be able to use the same browser to shop at Amazon and read up at Wikipedia and do all those other things we take for granted. Instead, we would need an Amazon Browser, a Wikipedia Browser, etc. This is what we do with APIs today! Moving further into the direction of less out-of-band communication requires more than just links. Links only specify the URI part and we also need the HTTP and media type parts inside our representations. We might call this level 3b, Full Hypermedia. Siren gives you this. Uber even goes a step further and also abstracts the protocol, so that you can use it with, say, CoAP rather than HTTP. These newer hypermedia types allow for the use of a generic client that can handle any REST API that serves that hypermedia type, just like a web browser can be used against anything that serves HTML. An example of such an effort is the HAL browser (even though HAL is stuck at level 3a). However, even with the inclusion of protocol, media type, and method in the representation, we still need some out-of-band communication. The HAL browser can navigate any API that serves HAL, but it doesn’t understand the responses it gets. Therefore it can’t navigate links on its own to reach a certain goal. 
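A brief aside, not part of the original article: to make the "links inside the representation" idea concrete, here is a minimal Java sketch in the spirit of the Spring HATEOAS convention mentioned above. The Order class and URIs are invented for the example:

import org.springframework.hateoas.Link;
import org.springframework.hateoas.Resource;

public class OrderResourceExample {

    // Hypothetical domain object, just for the sake of the example
    public static class Order {
        public final long id;
        public Order(long id) { this.id = id; }
    }

    public static Resource<Order> toResource(Order order) {
        // Wrap the plain object so that links travel inside the representation itself
        Resource<Order> resource = new Resource<>(order);
        resource.add(new Link("http://api.example.com/orders/" + order.id, "self"));
        resource.add(new Link("http://api.example.com/orders/" + order.id + "/payment", "payment"));
        return resource;
    }
}

Even with such links in the response, a client still needs out-of-band knowledge to decide what the payment relation means and which HTTP method to use on it – which is exactly where the next point picks up.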
For true machine-to-machine (M2M) communication, we still need more. If we ever get the whole semantic web sorted out, this might one day be the final answer, but I’m not holding my breath. In the meantime we’ll have to settle for partial answers. One piece of the puzzle could be to define application semantics using profiles, for instance in the ALPS format. We might call this level 4, Semantic Profile. We’d still need a human to read out-of-band communication and build a special-purpose client for M2M scenarios. But this client could handle all services in the application domain it is programmed to understand, not just one. Also, the human could be helped a lot by a generic API browser that fetches ALPS profiles to explain the API. All this is currently far from a reality. But we can all work towards this vision by choosing generic, full-featured hypermedia types like Siren or Uber for our APIs and by documenting our application semantics using profiles in ALPS. If you need more convincing then please read RESTful Web APIs, which Leonard Richardson co-wrote with Uber and ALPS creator Mike Amundsen. This is easily the best book on REST on the market today.Reference: REST Maturity from our JCG partner Remon Sinnema at the Secure Software Development blog....

JavaScript multi module project – Continuous Integration

JavaScript multi module project Few days ago, I wrote blog post about JavaScript multi module project with Grunt. This approach allows you to split application into various modules. But at the same time it allows you to create one deployment out of these modules. It was inspired by Maven multi module concept (Maven is build system from Java world). But configuration of the project is just a half of the puzzle. Testing is must for me. And tests have to be executed. Execution must be automatic. So I needed to figure out Continuous Integration story for this project configuration.   Project parameters to consider Let me quickly summarize base attributes of project configuration:Two Github repositories/sub-projectsprimediser-server – Node.JS/Express based back-end primediser-client- Angular based front-endOne main project called primediserbuilds sub-projects gathers deployment runs Protractor end-to-end tests against deployment gathers code coverage stats for client side codeChoosing CI server First of all I had to pick CI server. Obviously it has to support chaining of builds (one build would be able to kick off another build). I have Java background and experience with Jenkins. So it would be natural choice. I bet that Jenkins can handle it quite easily. But I wanted to try workflow closer to majority of JavaScript community. So I tried TravisCI. Straight away not suitable because it’s not possible to chain builds. Next I tried drone.io. Relatively new service. Initially seemed not very feature reach. But closer look actually showed that this is the right choice. It provides these features necessary for me:Temporary Docker Linux instance (is deleted after build) with pre-installed Node.JS Web hook for remote triggering builds Web interface for specifying Bash build commands Web interface for specifying private keys or credentials needed during buildThis combination of features turned to be very powerful. There are few unpolished problems (e.g. visualization of build process makes Firefox unresponsive, missing option for following build progress – so need to scroll manually), but I can live with these. I was also able to execute Protractor tests against Sauce Labs from drone.io and measure the code coverage. Measuring Protractor tests code coverage is described in separate blog. Any change against sub-project triggers also build of main project. Does this sound interesting? Let me describe this CI configuration in details. Documentation I wouldn’t dive into drone.io, Sauce Labs or Protractor basic usage. They were new for me and I easily and quickly came through their docs. Here is list of docs I used to put together this CI configuration.drone.io is very intuitive. Here is their short Node.JS doc. Sauce LabsCheck out Node.JS section in getting started section Their Protractor exampleProtractor – Souce Labs integration Important part of this setup is Protractor integration with Sauce Labs. Sauce Labs provide Selenium server with WebDiver API for testing. Protractor uses Sauce Labs by default when you specify their credentials. So credentials are the only special configuration in test/protractor/protractorConf.js (bottom of the snippet). Other configuration was taken from grunt-protractor-coverage example. I am using this grunt plug-in for running Protractor tests and measuring code coverage. // A reference configuration file. exports.config = {   // ----- What tests to run -----   //   // Spec patterns are relative to the location of this config.   
specs: [     'test/protractor/*Spec.js'   ],  // ----- Capabilities to be passed to the webdriver instance ----   //   // For a full list of available capabilities, see   // https://code.google.com/p/selenium/wiki/DesiredCapabilities   // and   // https://code.google.com/p/selenium/source/browse/javascript/webdriver/capabilities.js   capabilities: {     'browserName': 'chrome'     //  'browserName': 'firefox'     //  'browserName': 'phantomjs'   },   params: {   },   // ----- More information for your tests ----   //   // A base URL for your application under test. Calls to protractor.get()   // with relative paths will be prepended with this.   baseUrl: 'http://localhost:3000/',  // Options to be passed to Jasmine-node.   jasmineNodeOpts: {     showColors: true, // Use colors in the command line report.     isVerbose: true, // List all tests in the console     includeStackTrace: true,     defaultTimeoutInterval: 90000  },     sauceUser: process.env.SAUCE_USERNAME,   sauceKey: process.env.SAUCE_ACCESS_KEY }; You may ask now, how can I use localhost in the configuration, when remote selenium server is used for testing. Good question. Sauce Labs provide very useful feature called Sauce Connect. It is a tunnel that emulates access to your machine from Selenium server. This is super useful when you need to bypass company firewall. It will be used later in main project CI configuration. Setting up CI for sub-projects The idea is that each change in sub-project would trigger a build of main project. That is why we need to copy a Build Hook of main project from Settings -> Repository section.  Main project kicked off by hitting the Web hook via wget Linux command from sub-project. As you can see in following picture, sub-project informs main project about a change and process its own build afterwards. Drone.io doesn’t provide concurrent builds (not sure if this is limitation of open source projects only), so main project will wait for sub-project’s build to finish. After that main project is build.CI Configuration of Main Project So build of main project is now triggered from sub-projects. Main project configuration is slightly more complicated. I uses various commands:First of all we need to specify Sauce Labs credentials in Environment Variables sectionexport SAUCE_USERNAME=******** export SAUCE_ACCESS_KEY=****************************Download, extract and execute Sauce Connect tunnel for Linux. This will make accessible some ports on build image for Selenium testing. Notice that execution of the tunnel is done with & (ampersand). This means that tunnel execution is done in forked process and current console can continue executing build.wget https://d2nkw87yt5k0to.cloudfront.net/downloads/sc-latest-linux.tar.gz tar -xzvf sc-latest-linux.tar.gz cd sc-4.2-linux/bin ./sc & cd ../..Now standard Node.JS/Grunt build workflow is executed. Downloading and installing all NPM dependencies creates enough time for Sauce Connect tunnel to start.npm install phantomjs time npm install -g protractor time npm install -g grunt-cli bower time npm install gruntLast command is closing the Sauce Labs tunnel after build, so that Docker build image wouldn’t hang.kill %1  Such configured job now takes 9 minutes. drone.io limit is 15 minutes. Hopefully there will be enough space for all end-to-end tests I am going to create. If you are curious about build output/progress, take a look at my drone.io jobs below. 
Project Links Github repositories:primediser – main project primediser-server – Node.JS/Express based back-end primediser-client- Angular based front-endDrone.io jobs:primediser primediser-server primediser-clientReference: JavaScript multi module project – Continuous Integration from our JCG partner Lubos Krnac at the Lubos Krnac Java blog blog....

Writing Clean Tests – Replace Assertions with a Domain-Specific Language

It is pretty hard to figure out a good definition for clean code because everyone of us has our own definition for the word clean. However, there is one definition which seems to be universal:Clean code is easy to read.This might come as a surprise to some of you, but I think that this definition applies to test code as well. It is in our best interests to make our tests as readable as possible because:  If our tests are easy to read, it is easy to understand how our code works. If our tests are easy to read, it is easy to find the problem if a test fails (without using a debugger).It isn’t hard to write clean tests, but it takes a lot of practice, and that is why so many developers are struggling with it. I have struggled with this too, and that is why I decided to share my findings with you. This is the fifth part of my tutorial which describes how we can write clean tests. This time we will replace assertions with a domain-specific language. Data Is Not That Important In my previous blog post I identified two problems caused by data centric tests. Although that blog post talked about the creation of new objects, these problems are valid for assertions as well. Let’s refresh our memory and take a look at the source code of our unit test which ensures that the registerNewUserAccount (RegistrationForm userAccountData) method of the RepositoryUserService class works as expected when a new user account is created by using a unique email address and a social sign in provider. Our unit test looks as follows (the relevant code is highlighted): import org.junit.Before; import org.junit.Test; import org.junit.runner.RunWith; import org.mockito.Mock; import org.mockito.invocation.InvocationOnMock; import org.mockito.runners.MockitoJUnitRunner; import org.mockito.stubbing.Answer; import org.springframework.security.crypto.password.PasswordEncoder;import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertNull; import static org.mockito.Matchers.isA; import static org.mockito.Mockito.times; import static org.mockito.Mockito.verify; import static org.mockito.Mockito.verifyNoMoreInteractions; import static org.mockito.Mockito.verifyZeroInteractions; import static org.mockito.Mockito.when;@RunWith(MockitoJUnitRunner.class) public class RepositoryUserServiceTest {private static final String REGISTRATION_EMAIL_ADDRESS = "john.smith@gmail.com"; private static final String REGISTRATION_FIRST_NAME = "John"; private static final String REGISTRATION_LAST_NAME = "Smith"; private static final Role ROLE_REGISTERED_USER = Role.ROLE_USER; private static final SocialMediaService SOCIAL_SIGN_IN_PROVIDER = SocialMediaService.TWITTER;private RepositoryUserService registrationService;@Mock private PasswordEncoder passwordEncoder;@Mock private UserRepository repository;@Before public void setUp() { registrationService = new RepositoryUserService(passwordEncoder, repository); }@Test public void registerNewUserAccount_SocialSignInAndUniqueEmail_ShouldCreateNewUserAccountAndSetSignInProvider() throws DuplicateEmailException { RegistrationForm registration = new RegistrationFormBuilder() .email(REGISTRATION_EMAIL_ADDRESS) .firstName(REGISTRATION_FIRST_NAME) .lastName(REGISTRATION_LAST_NAME) .isSocialSignInViaSignInProvider(SOCIAL_SIGN_IN_PROVIDER) .build();when(repository.findByEmail(REGISTRATION_EMAIL_ADDRESS)).thenReturn(null);when(repository.save(isA(User.class))).thenAnswer(new Answer<User>() { @Override public User answer(InvocationOnMock invocation) throws Throwable { Object[] arguments = 
invocation.getArguments(); return (User) arguments[0]; } });User createdUserAccount = registrationService.registerNewUserAccount(registration);assertEquals(REGISTRATION_EMAIL_ADDRESS, createdUserAccount.getEmail()); assertEquals(REGISTRATION_FIRST_NAME, createdUserAccount.getFirstName()); assertEquals(REGISTRATION_LAST_NAME, createdUserAccount.getLastName()); assertEquals(SOCIAL_SIGN_IN_PROVIDER, createdUserAccount.getSignInProvider()); assertEquals(ROLE_REGISTERED_USER, createdUserAccount.getRole()); assertNull(createdUserAccount.getPassword());verify(repository, times(1)).findByEmail(REGISTRATION_EMAIL_ADDRESS); verify(repository, times(1)).save(createdUserAccount); verifyNoMoreInteractions(repository); verifyZeroInteractions(passwordEncoder); } } As we can see, the assertions found from our unit test ensures that the property values of the returned User object are correct. Our assertions ensure that:The value of the email property is correct. The value of the firstName property is correct. The value of the lastName property is correct. The value of the signInProvider is correct. The value of the role property is correct. The password is null.This is of course pretty obvious but it is important to repeat these assertions in this way because it helps us to identify the problem of our assertions. Our assertions are data centric and this means that:The reader has to know the different states of the returned object. For example, if we think about our example, the reader has to know that if the email, firstName, lastName, and signInProvider properties of returned RegistrationForm object have non-null values and the value of the password property is null, it means that the object is a registration which is made by using a social sign in provider. If the created object has many properties, our assertions litters the source code of our tests. We should remember that even though we want to ensure that the data of the returned object is correct, it is much more important that we describe the state of the returned object.Let’s see how we can improve our assertions. Turning Assertions into a Domain-Specific Language You might have noticed that often the developers and the domain experts use different terms for the same things. In other words, developers don’t speak the same language than the domain experts. This causes unnecessary confusion and friction between the developers and the domain experts. Domain-driven design (DDD) provides one solution to this problem. Eric Evans introduced the term ubiquitous language in his book titled Domain-Driven Design. Wikipedia specifies ubiquitous language as follows:Ubiquitous language is a language structured around the domain model and used by all team members to connect all the activities of the team with the software.If we want write assertions which speak the “correct” language, we have to bridge the gap between the developers and the domain experts. In other words, we have to create a domain-specific language for writing assertions. Implementing Our Domain-Specific Language Before we can implement our domain-specific language, we have to design it. When we design a domain-specific language for our assertions, we have to follow these rules:We have to abandon the data centric approach and think more about the real user whose information is found from a User object. We have to use the language spoken by the domain experts.I won’t do into the details here because this is a huge topic and it is impossible to explain it in a single blog. 
If you want learn more about domain-specific languages and Java, you can get started by reading the following blog posts:The Java Fluent API Designer Crash Course Creating DSLs in Java, Part 1: What is a domain specific language? Creating DSLs in Java, Part 2: Fluency and context Creating DSLs in Java, Part 3: Internal and external DSLs Creating DSLs in Java, Part 4: Where metaprogramming mattersIf we follow these two rules, we can create the following rules for our domain-specific language:A user has a first name, last name, and email address. A user is a registered user. A user is registered by using a social sign provider which means that this user doesn’t have a password.Now that we have specified the rules of our domain-specific language, we are ready to implement it. We are going to do this by creating a custom AssertJ assertion which implements the rules of our domain-specific language. I will not describe the required steps in this blog post because I have written a blog post which describes them. If you are not familiar with AssertJ, I recommend that you read that blog post before reading the rest of this blog post. The source code of our custom assertion class looks as follows: mport org.assertj.core.api.AbstractAssert; import org.assertj.core.api.Assertions;public class UserAssert extends AbstractAssert<UserAssert, User> {private UserAssert(User actual) { super(actual, UserAssert.class); }public static UserAssert assertThat(User actual) { return new UserAssert(actual); }public UserAssert hasEmail(String email) { isNotNull();Assertions.assertThat(actual.getEmail()) .overridingErrorMessage( "Expected email to be <%s> but was <%s>", email, actual.getEmail() ) .isEqualTo(email);return this; }public UserAssert hasFirstName(String firstName) { isNotNull();Assertions.assertThat(actual.getFirstName()) .overridingErrorMessage("Expected first name to be <%s> but was <%s>", firstName, actual.getFirstName() ) .isEqualTo(firstName);return this; }public UserAssert hasLastName(String lastName) { isNotNull();Assertions.assertThat(actual.getLastName()) .overridingErrorMessage( "Expected last name to be <%s> but was <%s>", lastName, actual.getLastName() ) .isEqualTo(lastName);return this; }public UserAssert isRegisteredByUsingSignInProvider(SocialMediaService signInProvider) { isNotNull();Assertions.assertThat(actual.getSignInProvider()) .overridingErrorMessage( "Expected signInProvider to be <%s> but was <%s>", signInProvider, actual.getSignInProvider() ) .isEqualTo(signInProvider);hasNoPassword();return this; }private void hasNoPassword() { isNotNull();Assertions.assertThat(actual.getPassword()) .overridingErrorMessage("Expected password to be <null> but was <%s>", actual.getPassword() ) .isNull(); }public UserAssert isRegisteredUser() { isNotNull();Assertions.assertThat(actual.getRole()) .overridingErrorMessage( "Expected role to be <ROLE_USER> but was <%s>", actual.getRole() ) .isEqualTo(Role.ROLE_USER);return this; } } We have now created a domain-specific language for writing assertions to User objects. Our next step is to modify our unit test to use our new domain-specific language. 
Replacing JUnit Assertions with a Domain-Specific Language After we have rewritten our assertions to use our domain-specific language, the source code of our unit test looks as follows (the relevant part is highlighted): import org.junit.Before; import org.junit.Test; import org.junit.runner.RunWith; import org.mockito.Mock; import org.mockito.invocation.InvocationOnMock; import org.mockito.runners.MockitoJUnitRunner; import org.mockito.stubbing.Answer; import org.springframework.security.crypto.password.PasswordEncoder;import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertNull; import static org.mockito.Matchers.isA; import static org.mockito.Mockito.times; import static org.mockito.Mockito.verify; import static org.mockito.Mockito.verifyNoMoreInteractions; import static org.mockito.Mockito.verifyZeroInteractions; import static org.mockito.Mockito.when;@RunWith(MockitoJUnitRunner.class) public class RepositoryUserServiceTest {private static final String REGISTRATION_EMAIL_ADDRESS = "john.smith@gmail.com"; private static final String REGISTRATION_FIRST_NAME = "John"; private static final String REGISTRATION_LAST_NAME = "Smith"; private static final Role ROLE_REGISTERED_USER = Role.ROLE_USER; private static final SocialMediaService SOCIAL_SIGN_IN_PROVIDER = SocialMediaService.TWITTER;private RepositoryUserService registrationService;@Mock private PasswordEncoder passwordEncoder;@Mock private UserRepository repository;@Before public void setUp() { registrationService = new RepositoryUserService(passwordEncoder, repository); }@Test public void registerNewUserAccount_SocialSignInAndUniqueEmail_ShouldCreateNewUserAccountAndSetSignInProvider() throws DuplicateEmailException { RegistrationForm registration = new RegistrationFormBuilder() .email(REGISTRATION_EMAIL_ADDRESS) .firstName(REGISTRATION_FIRST_NAME) .lastName(REGISTRATION_LAST_NAME) .isSocialSignInViaSignInProvider(SOCIAL_SIGN_IN_PROVIDER) .build();when(repository.findByEmail(REGISTRATION_EMAIL_ADDRESS)).thenReturn(null);when(repository.save(isA(User.class))).thenAnswer(new Answer<User>() { @Override public User answer(InvocationOnMock invocation) throws Throwable { Object[] arguments = invocation.getArguments(); return (User) arguments[0]; } });User createdUserAccount = registrationService.registerNewUserAccount(registration);assertThat(createdUserAccount) .hasEmail(REGISTRATION_EMAIL_ADDRESS) .hasFirstName(REGISTRATION_FIRST_NAME) .hasLastName(REGISTRATION_LAST_NAME) .isRegisteredUser() .isRegisteredByUsingSignInProvider(SOCIAL_SIGN_IN_PROVIDER);verify(repository, times(1)).findByEmail(REGISTRATION_EMAIL_ADDRESS); verify(repository, times(1)).save(createdUserAccount); verifyNoMoreInteractions(repository); verifyZeroInteractions(passwordEncoder); } } Our solution has the following the benefits:Our assertions use the language which is understood by the domain experts. This means that our test is an executable specification which is easy to understand and always up-to-date. We don’t have to waste time for figuring out why a test failed. Our custom error messages ensure that we know why it failed. If the API of the User class changes, we don’t have to fix every test method that writes assertions to User objects. The only class which we have to change is the UserAssert class. In other words, moving the actual assertions logic away from our test method made our test less brittle and easier to maintain.Let’s spend a moment to summarize what we learned from this blog post. 
Summary We have now transformed our assertions into a domain-specific language. This blog post taught us three things:Following the data centric approach causes unnecessary confusion and friction between the developers and the domain experts. Creating a domain-specific language for our assertions makes our tests less brittle because the actual assertion logic is moved to custom assertion classes. If we write assertions by using a domain-specific language, we transform our tests into executable specifications which are easy to understand and speak the language of the domain experts.Reference: Writing Clean Tests – Replace Assertions with a Domain-Specific Language from our JCG partner Petri Kainulainen at the Petri Kainulainen blog....

Java 8 Friday: Most Internal DSLs are Outdated

At Data Geekery, we love Java. And as we’re really into jOOQ’s fluent API and query DSL, we’re absolutely thrilled about what Java 8 will bring to our ecosystem. Java 8 Friday Every Friday, we’re showing you a couple of nice new tutorial-style Java 8 features, which take advantage of lambda expressions, extension methods, and other great stuff. You’ll find the source code on GitHub.       Most Internal DSLs are Outdated That’s quite a statement from a vendor of one of the most advanced internal DSLs currently on the market. Let me explain: Languages are hard Learning a new language (or API) is hard. You have to understand all the keywords, the constructs, the statement and expression types, etc. This is true both for external DSLs, internal DSLs and “regular” APIs, which are essentially internal DSLs with less fluency. When using JUnit, people have grown used to using hamcrest matchers. The fact that they’re available in six languages (Java, Python, Ruby, Objective-C, PHP, Erlang) makes them somewhat of a sound choice. As a domain-specific language, they have established idioms that are easy to read, e.g. assertThat(theBiscuit, equalTo(myBiscuit)); assertThat(theBiscuit, is(equalTo(myBiscuit))); assertThat(theBiscuit, is(myBiscuit)); When you read this code, you will immediately “understand” what is being asserted, because the API reads like prosa. But learning to write code in this API is harder. You will have to understand:Where all of these methods are coming from What sorts of methods exist Who might have extended hamcrest with custom Matchers What are best practices when extending the DSLFor instance, in the above example, what exactly is the difference between the three? When should I use one and when the other? Is is() checking for object identity? Is equalTo() checking for object equality? The hamcrest tutorial goes on with examples like these: public void testSquareRootOfMinusOneIsNotANumber() { assertThat(Math.sqrt(-1), is(notANumber())); } You can see that notANumber() apparently is a custom matcher, implemented some place in a utility: public class IsNotANumber extends TypeSafeMatcher<Double> {@Override public boolean matchesSafely(Double number) { return number.isNaN(); }public void describeTo(Description description) { description.appendText("not a number"); }@Factory public static <T> Matcher<Double> notANumber() { return new IsNotANumber(); } } While this sort of DSL is very easy to create, and probably also a bit fun, it is dangerous to start delving into writing and enhancing custom DSLs for a simple reason. They’re in no way better than their general-purpose, functional counterparts – but they’re harder to maintain. 
Consider the above examples in Java 8: Replacing DSLs with Functions Let’s assume we have a very simple testing API: static <T> void assertThat( T actual, Predicate<T> expected ) { assertThat(actual, expected, "Test failed"); }static <T> void assertThat( T actual, Predicate<T> expected, String message ) { assertThat(() -> actual, expected, message); }static <T> void assertThat( Supplier<T> actual, Predicate<T> expected ) { assertThat(actual, expected, "Test failed"); }static <T> void assertThat( Supplier<T> actual, Predicate<T> expected, String message ) { if (!expected.test(actual.get())) throw new AssertionError(message); } Now, compare the hamcrest matcher expressions with their functional equivalents: // BEFORE // --------------------------------------------- assertThat(theBiscuit, equalTo(myBiscuit)); assertThat(theBiscuit, is(equalTo(myBiscuit))); assertThat(theBiscuit, is(myBiscuit));assertThat(Math.sqrt(-1), is(notANumber()));// AFTER // --------------------------------------------- assertThat(theBiscuit, b -> b == myBiscuit); assertThat(Math.sqrt(-1), n -> Double.isNaN(n)); With lambda expressions, and a well-designed assertThat() API, I’m pretty sure that you won’t be looking for the right way to express your assertions with matchers any longer. Note that unfortunately, we cannot use the Double::isNaN method reference, as that would not be compatible with Predicate<Double>. For that, we’d have to do some primitive type magic in the assertion API, e.g. static void assertThat( double actual, DoublePredicate expected ) { ... } Which can then be used as such: assertThat(Math.sqrt(-1), Double::isNaN); Yeah, but… … you may hear yourself saying, “but we can combine matchers with lambdas and streams”. Yes, of course we can. I’ve just done so now in the jOOQ integration tests. I want to skip the integration tests for all SQL dialects that are not in a list of dialects supplied as a system property: String dialectString = System.getProperty("org.jooq.test-dialects");// The string must not be "empty" assumeThat(dialectString, not(isOneOf("", null)));// And we check if the current dialect() is // contained in a comma or semi-colon separated // list of allowed dialects, trimmed and lowercased assumeThat( dialect().name().toLowerCase(),// Another matcher here isOneOf(stream(dialectString.split("[,;]")) .map(String::trim) .map(String::toLowerCase) .toArray(String[]::new)) ); … and that’s pretty neat, too, right? But why don’t I just simply write: // Using Apache Commons, here assumeThat(dialectString, StringUtils::isNotEmpty); assumeThat( dialect().name().toLowerCase(), d -> stream(dialectString.split("[,;]")) .map(String::trim) .map(String::toLowerCase()) .anyMatch(d::equals) ); No Hamcrest needed, just plain old lambdas and streams! Now, readability is a matter of taste, of course. But the above example clearly shows that there is no longer any need for Hamcrest matchers and for the Hamcrest DSL. Given that within the next 2-3 years, the majority of all Java developers will be very used to using the Streams API in every day work, but not very used to using the Hamcrest API, I urge you, JUnit maintainers, to deprecate the use of Hamcrest in favour of Java 8 APIs. Is Hamcrest now considered bad? Well, it has served its purpose in the past, and people have grown somewhat used to it. But as we’ve already pointed out in a previous post about Java 8 and JUnit Exception matching, yes, we do believe that we Java folks have been barking up the wrong tree in the last 10 years. 
The lack of lambda expressions has lead to a variety of completely bloated and now also slightly useless libraries. Many internal DSLs or annotation-magicians are also affected. Not because they’re no longer solving the problems they used to, but because they’re not Java-8-ready. Hamcrest’s Matcher type is not a functional interface, although it would be quite easy to transform it into one. In fact, Hamcrest’s CustomMatcher logic should be pulled up to the Matcher interface, into default methods. Things dont’ get better with alternatives, like AssertJ, which create an alternative DSL that is now rendered obsolete (in terms of call-site code verbosity) through lambdas and the Streams API. If you insist on using a DSL for testing, then probably Spock would be a far better choice anyway. Other examples Hamcrest is just one example of such a DSL. This article has shown how it can be almost completely removed from your stack by using standard JDK 8 constructs and a couple of utility methods, which you might have in JUnit some time soon, anyway. Java 8 will bring a lot of new traction into last decade’s DSL debate, as also the Streams API will greatly improve the way we look at transforming or building data. But many current DSLs are not ready for Java 8, and have not been designed in a functional way. They have too many keywords for things and concepts that are hard to learn, and that would be better modelled using functions. An exception to this rule are DSLs like jOOQ or jRTF, which are modelling actual pre-existing external DSLs in a 1:1 fashion, inheriting all the existing keywords and syntax elements, which makes them much easier to learn in the first place. What’s your take? What is your take on the above assumptions? What is your favourite internal DSL, that might vanish or that might be completely transformed in the next five years because it has been obsoleted by Java 8?Reference: Java 8 Friday: Most Internal DSLs are Outdated from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....

Java EE7 and Maven project for newbies – part 4 – defining the ear module

Resuming from the previous parts Part #1 Part #2 Part #3 We are resuming for the 4th part, our simple project currently hasa web maven module (a war) an ejb module (ejb)  holding our stateless session beans (EJB 3.1) and a second (ejb) module holding our entity beans (JPA2)but we are still missing the one to package them all, archive, which will be of ‘ear’ type (aka Enterprise Archive).  Defining our ear maven module As you can see in the image below, we create emtpy folder called sample-ear under the sample-parent. This folder needs to have a pom.xml file. Our new module needs to be correctly referenced in the  ‘modules‘  section of the sample-parent\pom.xml.The main purpose of our ear maven module is to ‘configure’ the famous maven-ear-plugin, which is going to be invoked by maven and is going to produce our final deployable application. There 2 simple things we need to do, add configuration for the maven-ear-plugin, and add our ‘internal‘ application dependencies on the ear module, so that it ‘knows’ which modules should look up. Let’s have a look: Inside the ear pom.xml <build> <finalName>sampleapp</finalName> <plugins> <!--Ear plugin -creating the ear - watch out skinny WARS!--> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-ear-plugin</artifactId> <configuration> <finalName>sampleapp</finalName> <defaultJavaBundleDir>lib/</defaultJavaBundleDir> <skinnyWars>true</skinnyWars> <modules> <webModule> <groupId>gr.javapapo</groupId> <artifactId>sample-web</artifactId> </webModule> <ejbModule> <groupId>gr.javapapo</groupId> <artifactId>sample-services</artifactId> </ejbModule> </modules> </configuration> </plugin> </plugins> </build> This is the build, section make note on the following things:Remember as we did other modules, we have defined some basic common configuration for our plugin, in the ‘parent‘ pom. Go back and have a look what is already there for you. Watch out the ‘defaultJavaBundleDir‘ this where we define where all the libraries (apart from the top-level modules that will reside in our ear, usually is  a sub-folder in the ear called ‘lib’. What is a top level module? It is actually, the  jar(s), and wars that are going to be packaged in the ear, and are considered first level citizens,as you can see we define 2, the sample-web and the sample-services. Watch out the ‘skinnyWars‘ property. With this switch enabled, we enforce a certain pattern on packaging our third party libs, referenced from our war project. Simply put, our war archives are NOT going to include any external libraries we might define as dependencies under their WEB-INF\lib folder, instead all those libs,they are going to be packaged in the ‘defaultJavaBundleDir‘ path on the ear level.The above configuration is not going to work, if we dont add the ‘dependencies’ section of our ear-pom. <!-- our in app dependencies--> <dependencies> <dependency> <groupId>gr.javapapo</groupId> <artifactId>sample-web</artifactId> <version>${project.version}</version> <type>war</type> </dependency> <dependency> <groupId>gr.javapapo</groupId> <artifactId>sample-services</artifactId> <version>${project.version}</version> <type>ejb</type> </dependency> </dependencies> Make note of the following:the dependency element in this pom, needs the ‘type’ attribute.One good question you may have is, where the sample-domain (jar) module? Well this module, is not promoted as a top level element in our ear, because we are going to add it as a dependency on the sample-services module. 
So our services will hold a dependency on the module with the entity beans (sounds fair), and we need to update the pom.xml of our sample-services module:

<artifactId>sample-services</artifactId>
<name>sample-services</name>
<description>EJB service layer</description>
<packaging>ejb</packaging>
<dependencies>
    <dependency>
        <groupId>javax</groupId>
        <artifactId>javaee-api</artifactId>
    </dependency>
    <dependency>
        <groupId>gr.javapapo</groupId>
        <artifactId>sample-domain</artifactId>
        <version>${project.version}</version>
    </dependency>
</dependencies>
</project>

By doing that, the sample-services.jar is going to 'fetch' along the sample-domain.jar. By default (remember, Maven is all about conventions), when we define a top-level module of an ear, like sample-services, its dependencies are bundled automatically under the defaultJavaBundleDir lib of the ear! So when we package our ear, we expect to see the sample-domain jar packaged.

One more missing dependency

After our first in-app dependency between the services module and the entities module, we need another one. Our war module (web layer) is going to use some of our services, but in order to be able to do so it needs a dependency on the 'services' module. So we need to update the pom.xml of the sample-web project accordingly:

<packaging>war</packaging>
<build>
    <finalName>${project.artifactId}</finalName>
</build>
<dependencies>
    <dependency>
        <groupId>javax</groupId>
        <artifactId>javaee-api</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>gr.javapapo</groupId>
        <artifactId>sample-services</artifactId>
        <version>0.0.1-SNAPSHOT</version>
    </dependency>
</dependencies>

Let's package our war. We are ready for now: our basic dependencies are set, our ear is configured, we just need to package. At the sample-parent folder level, on the command line we just need to type: mvn clean package. We are done; let's check under the 'target' folder of the sample-ear module. Our final ear is ready, and maven also creates the 'exploded' version of the ear (it is expanded in the image below). Notice our 2 top-level ear elements, and how the sample-domain.jar is under the 'lib' folder of our ear. Also notice that some basic libraries like javaee-api.jar are not included in the lib folder, since we have added the provided scope in the pom (see the final version of the xml).

One last thing… skinny war(s) and MANIFEST.MF files

Eventually we could stop here – our final ear is OK and is going to work – but with all the above configuration, especially with our preference for skinny wars, we need to pay attention to a small detail. MANIFEST files are special descriptors within jars and wars that are used by application servers for locating and class-loading 'dependent' jars in the class path within the ear. Our small problem resides in the MANIFEST.MF file of the sample-web.war. If we unpack the generated war file and open the MANIFEST.MF with a text editor, we will see something like this:

Manifest-Version: 1.0
Built-By: papo
Build-Jdk: 1.7.0_45
Class-Path: lib/sample-services-0.0.1-SNAPSHOT.jar lib/sample-services-0.0 .1-SNAPSHOT.jar lib/sample-domain-0.0.1-SNAPSHOT.jar
Created-By: Apache Maven 3.2.1
Archiver-Version: Plexus Archiver

Can you spot the mistake? By default the generated MANIFEST.MF indicates a wrong path for one of our top-level ejb jars (sample-services). Our sample-services.jar is not placed under \lib within the ear, but is a top-level element. So how are we going to create a correct MANIFEST?
Eventually we need to fine-tune the maven-war-plugin a bit. We need to override the default behaviour specified in the parent pom and provide a correct entry for this particular dependency. If you happen to have more than one, then you need to append all the jars that are top-level elements to the configuration (make sure you do it properly and use a space between entries). So in the sample-war pom we need to add some extra configuration on top of the one already applied. See the image below. There is an interesting StackOverflow issue where you can read more about this little trick and other potential workarounds in case you use skinny wars. That's it, our ear is ready.

Summary

You can find the final version for this post in this Git tag. With this post we are completing a first series of posts, starting from scratch, applying basic Maven principles and creating some basic Maven modules for a Java enterprise application. Feel free to re-use this example and extend it to meet your own needs. It is by no means complete in terms of covering all your needs, but it is a solid example for getting started, thinking and configuring in Maven. I am going to expand on this example, adding more modules and using more features of Maven in future posts.

Reference: Java EE7 and Maven project for newbies – part 4 – defining the ear module from our JCG partner Paris Apostolopoulos at the Papo's log blog....

Simplifying trading system with Akka

My colleagues are developing a trading system that processes a quite heavy stream of incoming transactions. Each transaction covers one Instrument (think bond or stock) and has some (for now) unimportant properties. They are stuck with Java (< 8), so let's stick to it:

class Instrument implements Serializable, Comparable<Instrument> {
    private final String name;

    public Instrument(String name) {
        this.name = name;
    }

    //...Java boilerplate
}

public class Transaction {
    private final Instrument instrument;

    public Transaction(Instrument instrument) {
        this.instrument = instrument;
    }

    //...Java boilerplate
}

Instrument will later be used as a key in a HashMap, so for the future we pro-actively implement Comparable<Instrument>. This is our domain; now the requirements:

* Transactions come into the system and need to be processed (whatever that means) as soon as possible
* We are free to process them in any order
* …however, transactions for the same instrument need to be processed sequentially, in the exact same order as they came in.

The initial implementation was straightforward – put all incoming transactions into a queue (e.g. ArrayBlockingQueue) with a single consumer. This satisfies the last requirement, since the queue preserves strict FIFO ordering across all transactions. But such an architecture prevents concurrent processing of unrelated transactions for different instruments, thus wasting a compelling throughput improvement. Not surprisingly this implementation, while undoubtedly simple, became a bottleneck. The first idea was to somehow split incoming transactions by instrument and process instruments individually. We came up with the following data structure:

private final ConcurrentMap<Instrument, Queue<Transaction>> queues =
        new ConcurrentHashMap<Instrument, Queue<Transaction>>();

public void accept(Transaction tx) {
    final Instrument instrument = tx.getInstrument();
    if (queues.get(instrument) == null) {
        queues.putIfAbsent(instrument, new LinkedBlockingQueue<Transaction>());
    }
    final Queue<Transaction> queue = queues.get(instrument);
    queue.add(tx);
}

Yuck! But the worst is yet to come. How do you make sure at most one thread processes each queue at a time? After all, otherwise two threads could pick up items from one queue (one instrument) and process them in reversed order, which is not allowed. The simplest case is to have a Thread per queue – but this won't scale, as we expect tens of thousands of different instruments. So we can have, say, N threads and let each of them handle a subset of queues, e.g. instrument.hashCode() % N tells us which thread takes care of a given queue. But it's still not perfect, for three reasons:

* One thread must "observe" many queues, most likely busy-waiting, iterating over them all the time. Alternatively a queue might somehow wake up its parent thread.
* In the worst-case scenario all instruments will have conflicting hash codes, targeting only one thread – which is effectively the same as our initial solution.
* It's just damn complex! Beautiful code is not complex!

Implementing this monstrosity is possible, but hard and error-prone. Moreover there is another non-functional requirement: instruments come and go and there are hundreds of thousands of them over time. After a while we should remove entries in our map representing instruments that were not seen lately; otherwise we'll get a memory leak. If you can come up with some simpler solution, let me know. In the meantime let me tell you what I suggested to my colleagues. As you can guess, it was Akka – and it turned out to be embarrassingly simple.
We need two kinds of actors: Dispatcher and Processor. There is a single Dispatcher instance and it receives all incoming transactions. Its responsibility is to find or spawn a worker Processor actor for each Instrument and push the transaction to it:

public class Dispatcher extends UntypedActor {

    private final Map<Instrument, ActorRef> instrumentProcessors =
            new HashMap<Instrument, ActorRef>();

    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof Transaction) {
            dispatch(((Transaction) message));
        } else {
            unhandled(message);
        }
    }

    private void dispatch(Transaction tx) {
        final ActorRef processor = findOrCreateProcessorFor(tx.getInstrument());
        processor.tell(tx, self());
    }

    private ActorRef findOrCreateProcessorFor(Instrument instrument) {
        final ActorRef maybeActor = instrumentProcessors.get(instrument);
        if (maybeActor != null) {
            return maybeActor;
        } else {
            final ActorRef actorRef = context().actorOf(
                    Props.create(Processor.class), instrument.getName());
            instrumentProcessors.put(instrument, actorRef);
            return actorRef;
        }
    }
}

This is dead simple. Since our Dispatcher actor is effectively single-threaded, no synchronization is needed. We simply receive a Transaction, look up or create a Processor and pass the Transaction on. This is how the Processor implementation could look:

public class Processor extends UntypedActor {

    private final LoggingAdapter log = Logging.getLogger(getContext().system(), this);

    @Override
    public void onReceive(Object message) throws Exception {
        if (message instanceof Transaction) {
            process(((Transaction) message));
        } else {
            unhandled(message);
        }
    }

    private void process(Transaction tx) {
        log.info("Processing {}", tx);
    }
}

That's it! Interestingly, our Akka implementation is almost identical to our first idea with a map of queues. After all, an actor is just a queue and a (logical) thread processing items in that queue. The difference is that Akka manages a limited thread pool and shares it between maybe hundreds of thousands of actors. And because every instrument has its own dedicated (and "single-threaded") actor, sequential processing of transactions per instrument is guaranteed. One more thing: as stated earlier, there is an enormous number of instruments and we don't want to keep actors for instruments that weren't seen for quite a while. Let's say that if a Processor didn't receive any transaction within an hour, it should be stopped and garbage collected. If we later receive a new transaction for such an instrument, we can always recreate the actor. This one is quite tricky – we must ensure that if a transaction arrives just as the processor decides to delete itself, we can't lose that transaction. Rather than stopping itself, the Processor signals its parent that it was idle for too long, and the Dispatcher will then send a PoisonPill to it. Because both ProcessorIdle and Transaction messages are processed sequentially, there is no risk of a transaction being sent to a no-longer-existing actor. Each actor manages its lifecycle independently by scheduling a timeout using setReceiveTimeout:

public class Processor extends UntypedActor {

    @Override
    public void preStart() throws Exception {
        context().setReceiveTimeout(Duration.create(1, TimeUnit.HOURS));
    }

    @Override
    public void onReceive(Object message) throws Exception {
        //...
        if (message instanceof ReceiveTimeout) {
            log.debug("Idle for too long, shutting down");
            context().parent().tell(ProcessorIdle.INSTANCE, self());
        } else {
            unhandled(message);
        }
    }
}

enum ProcessorIdle {
    INSTANCE
}

Clearly, when the Processor does not receive any message for a period of one hour, it gently signals that to its parent (Dispatcher). But the actor is still alive and can handle transactions even if they arrive right after the hour has elapsed. What the Dispatcher does is kill the given Processor and remove it from the map:

public class Dispatcher extends UntypedActor {

    private final BiMap<Instrument, ActorRef> instrumentProcessors = HashBiMap.create();

    public void onReceive(Object message) throws Exception {
        //...
        if (message == ProcessorIdle.INSTANCE) {
            removeIdleProcessor(sender());
            sender().tell(PoisonPill.getInstance(), self());
        } else {
            unhandled(message);
        }
    }

    private void removeIdleProcessor(ActorRef idleProcessor) {
        instrumentProcessors.inverse().remove(idleProcessor);
    }

    private void dispatch(Transaction tx) {
        final ActorRef processor = findOrCreateProcessorFor(tx.getInstrument());
        processor.tell(tx, self());
    }

    //...
}

There was a slight inconvenience: instrumentProcessors used to be a Map<Instrument, ActorRef>. This proved to be insufficient, since we suddenly have to remove an entry in this map by value. In other words, we need to find the key (Instrument) that maps to a given ActorRef (Processor). There are different ways to handle it (e.g. the idle Processor could send along the Instrument it handles), but instead I used BiMap<K, V> from Guava. It works because both the Instruments and the ActorRefs they point to are unique (actor-per-instrument). Having a BiMap, I could simply inverse() the map (from BiMap<Instrument, ActorRef> to BiMap<ActorRef, Instrument>) and treat the ActorRef as the key. This Akka example is little more than a "hello, world". But compared to the convoluted solution we would have had to write using concurrent queues, locks and thread pools, it's perfect. My team mates were so excited that by the end of the day they decided to rewrite their whole application using Akka.

Reference: Simplifying trading system with Akka from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog....
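For completeness, here is a minimal sketch – not shown in the article; the actor system name and the sample instrument are made up – of how the Dispatcher above might be bootstrapped and fed with transactions:

import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

public class TradingMain {

    public static void main(String[] args) {
        // One actor system for the whole application
        ActorSystem system = ActorSystem.create("trading");

        // A single Dispatcher instance receives every incoming transaction
        ActorRef dispatcher = system.actorOf(Props.create(Dispatcher.class), "dispatcher");

        // Whatever consumes the incoming feed simply tells the dispatcher about each transaction
        dispatcher.tell(new Transaction(new Instrument("ACME-BOND-2017")), ActorRef.noSender());
    }
}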

The data knowledge stack

Concurrency is not for the faint-hearted We all know concurrency programming is difficult to get it right. That’s why threading tasks are followed by extensive design and code reviewing sessions. You never assign concurrent issues to inexperienced developers. The problem space is carefully analysed, a design emerges and the solution is both documented and reviewed. That’s how threading related tasks are usually addressed. You will naturally choose a higher level abstraction, since you don’t want to get tangled up in low level details. That’s why the java.util.concurrent is usually better (unless you build a High Frequency Trading system) than hand-made producer/consumer Java 1.2 style thread-safe structures. Is database programming any different? In a database system the data is spread across various structures (SQL tables or NoSQL collections) and multiple users may select/insert/update/delete whatever they choose to. From a concurrency point of view this is a very challenging task and it’s not just the database system developer’s problem. It’s our problem as well. A typical RDBMS data layer requires you to master various technologies and your solution is only as strong as your team’s weakest spot. A recipe for success When it comes to database programming you shouldn’t ever remain untrained. Constantly learning is your best weapon, there’s no other way. For this I came up with my own data knowledge stack:You should always master the bottom layers prior to advancing to upper ones. So these are the golden rules for taming the data layer:The database manual was not only meant for Database AdministratorsIf you are doing any database related task than reading your current database manual is not optional. You should be familiar with both the SQL standard and your database specific traits. Break free from the SQL-92 mindset. Don’t let the portability fear make you reject highly effective database specific features. It’s more common to end-up with a sluggish database layer than to port an already running system to a new database solution. Fully read “Patterns of Enterprise Application Architecture“I’ll give you a great investment tip. You are 50$ away from understanding the core concepts of any available ORM tool. Martin Fowler‘s book is a must for any enterprise developer. The on-line pattern catalogue is a great teaser. Read your ORM documentationSome argue that their ORM tool is the root of all evil. Unless you spend your time and read all the available documentation, you are going to have a very hard time taming your ORM data layer. The object to relational mismatch has always been a really complex problem, yet it simplifies the CREATE/UPDATE/DELETE operations of complex object tree structures. The ORM’s optimistic locking feature is a great way of dealing with the “lost update” problem. Pick and mixJPA/Hibernate are not a substitute for SQL. You should get the best of JPA and SQL and combine them into one winning solution. Because SQL is unavoidable on any non-trivial application, it’s wise to invest some time (and maybe a license) for using a powerful querying framework. If you fear that database portability is holding you back from using proprietary database querying features than a JPA/JOOQ medley is a recipe for success.A Hibernate Master class I’ve been using Hibernate for almost a decade and I admit it wasn’t an easy journey. The StackOverflow Hibernate related questions are pouring on a daily basis. 
That’s why I decided to come up with my own Hibernate material, which I’ll share on this blog and on my GitHub account, because if you are willing to spend your time learning it, you shouldn’t be charged for your effort. For those who want an intensive and personalized Hibernate Master class training, feel free to contact me. We’ll find a way to get you trained.Reference: The data knowledge stack from our JCG partner Vlad Mihalcea at the Vlad Mihalcea’s Blog blog....
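As a small illustration of the optimistic locking feature mentioned above, here is a minimal, hypothetical JPA entity sketch; the Product class and its fields are invented for the example and are not taken from the article.

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Product {

    @Id
    private Long id;

    private int quantity;

    // The JPA provider increments this column on every update; a concurrent update
    // based on a stale version then fails with an OptimisticLockException instead
    // of silently overwriting the other transaction's changes (the "lost update").
    @Version
    private int version;

    public int getQuantity() { return quantity; }
    public void setQuantity(int quantity) { this.quantity = quantity; }
}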
apache-camel-logo

Adding WS-Security over soap using Apache Camel

WS-Security (Web Services Security) is a protocol which allows you to secure your SOAP web services. The client who makes the SOAP request has to provide a login and a password in the SOAP header. The server receives the SOAP request, checks the credentials, and accepts or rejects the request. With Apache Camel it is easy to work with SOAP web services (especially if you use Apache CXF), but dealing with WS-Security can be tricky. The idea is to create an XML template with all the required information (including login and password) and add the template to the SOAP header.

public void addSoapHeader(Exchange exchange, String soapHeader) {
    List<SoapHeader> soapHeaders = CastUtils.cast((List<?>) exchange.getIn().getHeader(Header.HEADER_LIST));
    SoapHeader newHeader;
    if (soapHeaders == null) {
        soapHeaders = new ArrayList<SoapHeader>();
    }
    try {
        newHeader = new SoapHeader(new QName("soapHeader"),
                DOMUtils.readXml(new StringReader(soapHeader)).getDocumentElement());
        newHeader.setDirection(Direction.DIRECTION_OUT);
        soapHeaders.add(newHeader);
        exchange.getIn().setHeader(Header.HEADER_LIST, soapHeaders);
    } catch (Exception e) {
        // log error
    }
}

Apache Camel uses an Exchange interface which has methods to retrieve or update headers. The soapHeader parameter is a string containing the XML template. We retrieve the current headers and add a new header named soapHeader. We convert the soapHeader parameter from a string to XML thanks to the DOMUtils class. The newHeader.setDirection(Direction.DIRECTION_OUT) instruction means that the header will be applied to a request either leaving a consumer endpoint or entering a producer endpoint (that is, it applies to a WS request message propagating through a route). Now let’s create the XML template and call the addSoapHeader method:

public void addWSSESecurityHeader(Exchange exchange, String login, String password) {
    String soapHeader = "<?xml version=\"1.0\" encoding=\"utf-8\"?>"
            + "<wsse:Security xmlns:wsse=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd\" "
            + "xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">"
            + "<wsse:UsernameToken wsu:Id=\"UsernameToken-50\">"
            + "<wsse:Username>" + login + "</wsse:Username>"
            + "<wsse:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText\">" + password + "</wsse:Password>"
            + "</wsse:UsernameToken></wsse:Security>";

    // Add the WS-Security header to the exchange
    addSoapHeader(exchange, soapHeader);
}

As we can see, we need two namespaces in our XML to handle WS-Security:

* http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd
* http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd

We can then use the interesting tags in our XML:

* wsse:UsernameToken : includes both username and password information
* wsse:Username : the username required for authentication
* wsse:Password : the password required for authentication

Next we just have to call our addSoapHeader method to add our XML into the SOAP header.
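For readability, here is the same wsse:Security template pretty-printed, roughly as it will appear in the outgoing SOAP header ("login" and "password" stand for the values passed to addWSSESecurityHeader):

<wsse:Security
    xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
    xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
  <wsse:UsernameToken wsu:Id="UsernameToken-50">
    <wsse:Username>login</wsse:Username>
    <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">password</wsse:Password>
  </wsse:UsernameToken>
</wsse:Security>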
Here’s the full code with a complete Apache Camel route:

package com.example.test;

import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

import javax.xml.namespace.QName;

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.util.CastUtils;
import org.apache.cxf.binding.soap.SoapHeader;
import org.apache.cxf.headers.Header;
import org.apache.cxf.headers.Header.Direction;
import org.apache.cxf.helpers.DOMUtils;

public class MyRoute extends RouteBuilder {

    public void addSoapHeader(Exchange exchange, String soapHeader) {
        List<SoapHeader> soapHeaders = CastUtils.cast((List<?>) exchange.getIn().getHeader(Header.HEADER_LIST));
        SoapHeader newHeader;
        if (soapHeaders == null) {
            soapHeaders = new ArrayList<SoapHeader>();
        }
        try {
            newHeader = new SoapHeader(new QName("soapHeader"),
                    DOMUtils.readXml(new StringReader(soapHeader)).getDocumentElement());
            newHeader.setDirection(Direction.DIRECTION_OUT);
            soapHeaders.add(newHeader);
            exchange.getIn().setHeader(Header.HEADER_LIST, soapHeaders);
        } catch (Exception e) {
            // log error
        }
    }

    public void addWSSESecurityHeader(Exchange exchange, String login, String password) {
        String soapHeader = "<?xml version=\"1.0\" encoding=\"utf-8\"?>"
                + "<wsse:Security xmlns:wsse=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd\" "
                + "xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">"
                + "<wsse:UsernameToken wsu:Id=\"UsernameToken-50\">"
                + "<wsse:Username>" + login + "</wsse:Username>"
                + "<wsse:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText\">" + password + "</wsse:Password>"
                + "</wsse:UsernameToken></wsse:Security>";

        // Add the WS-Security header to the exchange
        addSoapHeader(exchange, soapHeader);
    }

    @Override
    public void configure() throws Exception {
        from("endpointIn")
            .process(new Processor() {
                @Override
                public void process(Exchange exchange) throws Exception {
                    addWSSESecurityHeader(exchange, "login", "password");
                }
            })
            .to("endpointOut");
    }
}
...
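The article does not show how the route gets started. As a rough sketch, assuming the classic Camel 2.x API and keeping the placeholder endpoint URIs from the route above, one way to bootstrap it standalone is:

import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;

public class MyRouteMain {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        // Register the route that decorates outgoing requests with the WS-Security header.
        context.addRoutes(new MyRoute());
        context.start();

        // Keep the JVM alive long enough for exchanges to flow; a real application
        // would use Camel's Main class or a container lifecycle instead.
        Thread.sleep(10000);
        context.stop();
    }
}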