
Java 8 Functional Programming: Lazy Instantiation

Singletons often instantiate themselves lazily, and sometimes, if the object is heavy enough, class fields can be instantiated lazily. Generally, when going the lazy route, the getter method (or accessor) has to have a block of code that checks whether the object is instantiated yet (and, if it isn't, it needs to be instantiated) before returning it. This check is pointless once the object has been instantiated; it only serves to slow down a method that's already (usually) blocking with synchronized or a Lock. Let's look at a way to remove that code, shall we?

Disclaimer: I didn't come up with this. I got it from Functional Programming in Java by Venkat Subramaniam. I highly recommend both the book and the author. Everything I've read by Venkat was well-done and easy to learn from.

How Does It Work?

The basic idea is to:
- replace the lazily instantiated field with a Supplier of the type you need
- the Supplier instantiates the object (but doesn't return it yet)
- then it sets the field to a new Supplier that only returns the instantiated object
- return the instance

So let's see this in action. We're going to have a class called Holder that wants to lazily instantiate an object of type Heavy. This code is compiled directly from Venkat's book:

public class Holder {
    private Supplier<Heavy> heavy = () -> createAndCacheHeavy();

    public Heavy getHeavy() {
        return heavy.get();
    }

    private synchronized Heavy createAndCacheHeavy() {
        class HeavyFactory implements Supplier<Heavy> {
            private final Heavy heavyInstance = new Heavy();

            public Heavy get() {
                return heavyInstance;
            }
        }

        if (!HeavyFactory.class.isInstance(heavy)) {
            heavy = new HeavyFactory();
        }

        return heavy.get();
    }
}

Now, this code works just fine, but I find the implementation of createAndCacheHeavy to be unnecessarily confusing. The first time I saw this code, it took me quite a while to figure out what it was doing. So let's make just a few modifications to it, shall we? We'll make it so it visibly looks like it's following the steps I laid out before.

private Heavy createAndCacheHeavy() {
    Heavy instance = new Heavy();
    heavy = () -> instance;
    return instance;
}

Isn't that better? It's a heck of a lot simpler and cleaner than before, in my opinion. And it still works! Well, there's a small caveat: in order to make the code thread-safe, you need to synchronize the getHeavy() method instead of the createAndCacheHeavy() method. That change will slow down the code a little bit compared to Venkat's, since his code doesn't use synchronization once the HeavyFactory is in place. But it's still faster than the old method that required synchronization AND a conditional check every time.

So, this is some helpful code, but do you want to be typing that code in every time you want to lazily instantiate something? I didn't think so. So let's make a class that will be reusable and make our lives a lot easier. But first, just to show you how much easier it becomes to use, let me show you it being used.

Supplier<Heavy> heavy = LazilyInstantiate.using(() -> new Heavy());

That's it! Let's look at this a little closer, and dig into it to see what's going on before we make it. The declaration bit of the line is the same as it was before: a Supplier of Heavy named heavy. But then we call a static method of LazilyInstantiate, which turns out to be a static factory method that returns a LazilyInstantiate object which implements Supplier.
The argument passed into the method is a Heavy Supplier that is there so the user can supply the instantiator with the correct code to instantiate the object. So, are you curious as to how it works? Well, here's the code for LazilyInstantiate:

public class LazilyInstantiate<T> implements Supplier<T> {
    public static <T> LazilyInstantiate<T> using(Supplier<T> supplier) {
        return new LazilyInstantiate<>(supplier);
    }

    public synchronized T get() {
        return current.get();
    }

    private LazilyInstantiate(Supplier<T> supplier) {
        this.supplier = supplier;
        this.current = () -> swapper();
    }

    private final Supplier<T> supplier;
    private Supplier<T> current;

    private T swapper() {
        T obj = supplier.get();
        current = () -> obj;
        return obj;
    }
}

You may find the order of my methods and such to be a bit different than what is usual. I prefer to have public stuff first, then package-private and protected, then private. Within those chunks, I do static fields, then constructors, then static methods, then normal fields, then normal methods. In general, this seems to sort things in the order of most important to the user reading my code to least important. You are free to copy this code anywhere you want, or you can check out my functional-java library on github, which has a fully documented version of this class (func.java.lazy.LazilyInstantiate) and many other helpful functional classes. (A quick sketch of Holder rewritten on top of this class follows at the end of this post.)

Reference: Java 8 Functional Programming: Lazy Instantiation from our JCG partner Jacob Zimmerman at the Programming Ideas With Jake blog....
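To close the loop, here is a small sketch (mine, not from the article) of the original Holder rewritten on top of the reusable class; Heavy is the same placeholder type as before:

public class Holder {
    // All the supplier-swapping logic now lives in LazilyInstantiate;
    // Holder just delegates to it.
    private final Supplier<Heavy> heavy = LazilyInstantiate.using(() -> new Heavy());

    public Heavy getHeavy() {
        return heavy.get();
    }
}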

The Internet Is Pseudo-Decentralized

We mostly view the internet as something decentralized and resilient. And from a practical point of view, it almost is. The Web is decentralized – websites reside on many different servers in many different networks, and opening each site does not rely on a central server or authority. And there are all these cool peer-to-peer technologies like BitTorrent and Bitcoin that do not rely on any central server or authority. Let's take a look at these two examples more closely. And it may sound like common sense, but bear with me.

If you don't know the IP address of the server, you need to type its alias, i.e. the domain name. And then, in order to resolve that domain name, the whole DNS resolution procedure takes place. So, if all caches are empty, you must go to the root. The root is not a single server of course, but around 500. There are 13 logical root DNS servers (a lucky number for the internet), operated by 12 different companies/institutions, most of them in the US. 500 servers for the whole internet is enough, because each intermediate DNS server on the way from your machine to the root has caches, and also because the root itself does not have all the names in a database – it only has the addresses of the top level domains' name servers.

What about BitTorrent and Bitcoin? In newer, DHT-based BitTorrent clients you can even do full-text search in a decentralized way, only among your peers, without any need for a central coordinator (or tracker). In theory, even if all the cables under the Atlantic get destroyed, we will just have a fragmented distributed hash table, which will still work with all the peers inside. This is because these Bit* technologies create so-called overlay networks. They do not rely on the web, on DNS or anything else to function – each node (user) has a list of the IP addresses of its peers, so any node that is in the network has all the information it needs to perform its operations – seed files, confirm transactions in the blockchain, etc.

But there's a caveat. How do you get to join the overlay network in the first place? DNS. Each BitTorrent and Bitcoin client has a list of domain names hardcoded in it, which use round-robin DNS to configure multiple nodes to which each new node connects first. The nodes in the DNS record are already in the network, so they (sort of) provide a list of peers to the newcomer, and only then does it become a real member of the overlay network. So, even these fully-decentralized technologies rely on DNS, and in turn rely on the root name servers. (Note: if DNS lookup fails, the Bit* clients have a small hardcoded list of IP addresses of nodes that are supposed to be always up.) And DNS is required even here, because (to my knowledge) there is no way for a machine to broadcast to the entire world "hey, I have this software running, if anyone else is running it, send me your IP, so that we can form a network" (and thank goodness) (though asking your neighbouring IP range "hey, I'm running this software; if anyone else is, let me know, so that we can form a network" might work).

So, even though the internet is decentralized, practically all services on top of it that we use (the web and overlay-network based software) do rely on a central service, which fortunately is highly available, spread across multiple companies and physical sites, and rarely accessed directly. But that doesn't make it really decentralized.
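As an aside, the DNS bootstrap step described above is easy to observe yourself. The snippet below is my own illustration, not from the article; "seed.bitcoin.sipa.be" is one of the DNS seeds hardcoded into the Bitcoin reference client, and round-robin DNS returns several peer addresses for that single name:

import java.net.InetAddress;

public class DnsBootstrap {
    public static void main(String[] args) throws Exception {
        // one DNS lookup yields multiple candidate peers for joining the overlay network
        for (InetAddress peer : InetAddress.getAllByName("seed.bitcoin.sipa.be")) {
            System.out.println("candidate peer: " + peer.getHostAddress());
        }
    }
}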
It is a wonderful practical solution that is "good enough", so that we can rely on DNS as de-facto decentralized, but will it hold up in extreme circumstances? Here are three imaginary scenarios:

1. Imagine someone writes a virus that slowly and invisibly infects all devices on the network. Then the virus disables all caching functionality, and suddenly billions of requests go to the root servers. The root servers will then fail, and the internet will be broken.
2. Imagine the US becomes "unstable", the government seizes all US companies running the root servers (including ICANN) and starts blackmailing other countries to allow requests to the root servers (only 3 are outside the US).
3. Imagine you run a website, or even a distributed network, that governments really don't want you to run. If it's something that serious, for example – an alternative, global democracy that ignores countries and governments – then they can shut you down. Even if the TLD name server does resolve your domain, the root server can decide not to resolve the TLD, until it stops resolving your domain.

None of that is happening, because it doesn't make sense to go to such lengths just to generate chaos (V and The Joker are fictional characters, right?). And it's probably not happening anytime soon. So, practically, we're safe. Theoretically, and especially if we have conspiracies stuck in our heads, we may have something to worry about.

Would there be a way out of these apocalyptic scenarios? I think existing overlay networks that are big enough can be used to "rebuild" the internet even if DNS is dead. But let's not think about the details for now, as hopefully we won't need them.

Reference: The Internet Is Pseudo-Decentralized from our JCG partner Bozhidar Bozhanov at the Bozho's tech blog blog....

Stateless Spring Security Part 3: JWT + Social Authentication

This third and final part in my Stateless Spring Security series is about mixing the previous post about JWT token based authentication with spring-social-security. This post directly builds upon it and focuses mostly on the changed parts. The idea is to replace the username/password based login with "Login with Facebook" functionality based on OAuth 2, but still use the same token based authentication after that.

Login flow

Client-side

The user clicks on the "Login with Facebook" button, which is a simple link to "/auth/facebook". The SocialAuthenticationFilter notices the lack of additional query parameters and triggers a redirect leading the user of your site to Facebook. They log in with their username/password and are redirected back, again to "/auth/facebook", but this time with "?code=…&state=…" parameters specified. (If the user previously logged in at Facebook and had a cookie set, Facebook will even instantly redirect back, and no Facebook screen is shown at all to the user.) The fun part is that you can follow this in a browser's network log, as it's all done using plain HTTP 302 redirects. (The "Location" header in the HTTP response is used to tell the browser where to go next.)

Server-side

After the redirect from Facebook to "/auth/facebook?code=…&state=…", the SocialAuthenticationFilter now sees the proper parameters and will trigger two server calls to Facebook. The first is to acquire an Access Token for the logged-in user; the second is to test whether the whole process was successful, by acquiring user details using the access token. After this is all done, the user is considered to be logged in, and he can be redirected back to the root of the application using yet another 302 redirect (to "/").

Some words on Spring Social

Spring Social is a complete framework for dealing with social networks and has a scope far beyond a mere login scenario. Apart from the different social network adapters, there is also a small integration library called Spring Social Security that implements the social authentication use-cases in such a way that it integrates better with Spring Security. It comes with a SocialAuthenticationFilter that maps to "/auth"; this is what we'll use. So setting up social authentication requires configuring Spring Social itself as well as Spring Security, using the neat little Spring Social Security library.

Spring Social

Configuring it basically involves extending the SocialConfigurerAdapter. First you tell it what social networks to support:

Add facebook as provider

@Override
public void addConnectionFactories(ConnectionFactoryConfigurer cfConfig, Environment env) {
    cfConfig.addConnectionFactory(new FacebookConnectionFactory(
        env.getProperty("facebook.appKey"),
        env.getProperty("facebook.appSecret")));
}

It also needs to know how to acquire the user id for the current user:

retrieve the UserId

@Override
public UserIdSource getUserIdSource() {
    //retrieve the UserId from the UserAuthentication in security context
    return new UserAuthenticationUserIdSource();
}

Finally it needs a UsersConnectionRepository, which is basically in charge of the relation between a user and his connections to a social network. Spring Social comes with two implementations of its own (jdbc or in-memory). I chose to roll my own, as I wanted to re-use my Spring Data JPA based UserDetailsService.
Custom UsersConnectionRepository

@Override
public UsersConnectionRepository getUsersConnectionRepository(ConnectionFactoryLocator connectionFactoryLocator) {
    SimpleUsersConnectionRepository usersConnectionRepository =
        new SimpleUsersConnectionRepository(userService, connectionFactoryLocator);
    // if no local user record exists yet for a facebook's user id
    // automatically create a User and add it to the database
    usersConnectionRepository.setConnectionSignUp(autoSignUpHandler);
    return usersConnectionRepository;
}

Spring Security

As in the last blog post, configuring it basically involves extending the WebSecurityConfigurerAdapter. Apart from the usual stuff like configuring and exposing an AuthenticationManager and UserDetailsService, it now needs to configure and plug in the SocialAuthenticationFilter. This involves very little code, as the SpringSocialConfigurer does most of the work. It could be as simple as:

@Override
protected void configure(HttpSecurity http) throws Exception {
    // apply the configuration from the socialConfigurer
    // (adds the SocialAuthenticationFilter)
    http.apply(new SpringSocialConfigurer());
}

Considering I wanted to plug in the token based authentication, my own successHandler and userIdSource, I had to make some configuration changes:

@Autowired
private SocialAuthenticationSuccessHandler successHandler;

@Autowired
private StatelessAuthenticationFilter jwtFilter;

@Autowired
private UserIdSource userIdSource;

@Override
protected void configure(HttpSecurity http) throws Exception {

    // Set a custom successHandler on the SocialAuthenticationFilter (saf)
    final SpringSocialConfigurer sc = new SpringSocialConfigurer();
    sc.addObjectPostProcessor(new ObjectPostProcessor<...>() {
        @Override
        public <...> O postProcess(O saf) {
            saf.setAuthenticationSuccessHandler(successHandler);
            return saf;
        }
    });

    http....

    // add custom authentication filter for stateless JWT based authentication
    .addFilterBefore(jwtFilter, AbstractPreAuthenticatedProcessingFilter.class)

    // apply the configuration from the SocialConfigurer
    .apply(sc.userIdSource(userIdSource));
}

If you wanted to, you could also subclass the SpringSocialConfigurer and provide a more elegant setter for a custom successHandler…

Past the Boilerplate (kudos to you for making it here)

It's now time to focus on some of the more interesting bits. Right after an initial successful connection to Facebook is established, a custom ConnectionSignUp is triggered:

@Override
@Transactional
public String execute(final Connection<?> connection) {
    //add new users to the db with its default roles
    final User user = new User();
    final String firstName = connection.fetchUserProfile().getFirstName();
    user.setUsername(generateUniqueUserName(firstName));
    user.setProviderId(connection.getKey().getProviderId());
    user.setProviderUserId(connection.getKey().getProviderUserId());
    user.setAccessToken(connection.createData().getAccessToken());
    grantRoles(user);
    userRepository.save(user);
    return user.getUserId();
}

As you can see, my version simply persists the user with its connection data as a single JPA object, purposely supporting only one-to-one relations between a user and an identity on Facebook. Note that I ended up excluding the connection properties from the actual token generated from the user, just like I previously excluded the password field (which is no longer part of the User object at all):

@JsonIgnore
private String accessToken;

Going this route does mean that any call to the Facebook API needs a database query for the additional connection fields.
More on this later on. Right after the user is authenticated, the custom AuthenticationSuccessHandler is triggered:

@Override
public void onAuthenticationSuccess(HttpServletRequest request, HttpServletResponse response, Authentication auth) {

    // Lookup the complete User object from the database
    final User user = userService.loadUserByUsername(auth.getName());

    // Add UserAuthentication to the response
    final UserAuthentication ua = new UserAuthentication(user);
    tokenAuthenticationService.addAuthentication(response, ua);
    super.onAuthenticationSuccess(request, response, auth);
}

This looks a lot like the code from the previous blog post, but I had to make some changes in the TokenAuthenticationService. Because the client is loaded after a redirect, to preserve the token on the client-side until then, it must be sent to the client as a cookie:

public void addAuthentication(HttpServletResponse response, UserAuthentication authentication) {
    final User user = authentication.getDetails();
    user.setExpires(System.currentTimeMillis() + TEN_DAYS);
    final String token = tokenHandler.createTokenForUser(user);

    // Put the token into a cookie because the client can't capture response
    // headers of redirects / full page reloads.
    // (this response triggers a redirect back to "/")
    response.addCookie(createCookieForToken(token));
}

This ends up being part of the final redirect response. [Image: the final redirect back to the client after successful login]

The last and best part is of course where all the code comes together to form a pretty sweet API. Because Spring Social already takes care of creating a user specific request-scoped ConnectionRepository, a connection specific API of it can be created by adding the following bean code to the SocialConfigurerAdapter:

@Bean
@Scope(value = "request", proxyMode = ScopedProxyMode.INTERFACES)
public Facebook facebook(ConnectionRepository repo) {
    Connection<Facebook> connection = repo.findPrimaryConnection(Facebook.class);
    return connection != null ? connection.getApi() : null;
}

This user specific facebook bean can be used in a controller like so:

@Autowired
Facebook facebook;

@RequestMapping(value = "/api/facebook/details", method = RequestMethod.GET)
public FacebookProfile getSocialDetails() {
    return facebook.userOperations().getUserProfile();
}

Client-side implementation

As mentioned, the token is now passed to the client as a cookie. However, just like last time, the server-side still only accepts tokens sent in a special HTTP header. Granted, this is pretty arbitrary, and you could have it simply accept the cookie. I prefer it not to, as it prevents CSRF attacks. (Because the browser can't be instructed to automatically add the proper authentication token to a request.) So before retrieving the current user details, the init method of the front-end now first tries to move the cookie to local storage:

$scope.init = function () {
    var authCookie = $cookies['AUTH-TOKEN'];
    if (authCookie) {
        TokenStorage.store(authCookie);
        delete $cookies['AUTH-TOKEN'];
    }
    $http.get('/api/user/current').success(function (user) {
        if (user.username) {
            $rootScope.authenticated = true;
            $scope.username = user.username;
            // For display purposes only
            $scope.token = JSON.parse(atob(TokenStorage.retrieve().split('.')[0]));
        }
    });
};

The placement of the custom HTTP header is handled in the same HTTP interceptor as last time.
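For reference, the server-side counterpart of that header check is conceptually as simple as the sketch below. This is an illustrative reconstruction, not the article's code; the header name "X-AUTH-TOKEN" and the tokenHandler/UserAuthentication helpers are assumptions here:

// Illustrative sketch: extracting the JWT from the custom request header.
public Authentication getAuthentication(HttpServletRequest request) {
    final String token = request.getHeader("X-AUTH-TOKEN"); // assumed header name
    if (token == null) {
        return null; // no header means no authentication; the cookie is ignored on purpose
    }
    final User user = tokenHandler.parseUserFromToken(token);
    return user == null ? null : new UserAuthentication(user);
}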
The actual "Login with Facebook" button is just a link to trigger the whole redirect frenzy:

<a href="/auth/facebook"><button>Login with Facebook</button></a>

To check that the actual Facebook API works, I've included another button to display the user details from Facebook after login.

Final words (of advice)

It's been quite a ride to integrate my custom version of JWT with social authentication. Some parts were less than trivial, like finding a good balance between database calls and offloading state to JWT tokens. Ultimately I chose not to share Facebook's access token with the client, as it's only needed when using Facebook's API. This means that any query to Facebook requires a database call to fetch the token. In fact, it means that any REST API call to any controller that has an @Autowired Facebook service results in an eagerly fetched access token as part of the request-scoped bean creation. This is, however, easily mitigated by using a dedicated controller for Facebook calls, but it is definitely something to be aware of. If you plan on actually using this code and making Facebook API calls, make sure your JWT token expires before the Facebook token does (currently valid for 60-ish days). Better yet, implement a forced re-login when you detect a failure, as any re-login will automatically store the newly acquired Facebook token in the database. You can find a complete working example at github. Details on how to run it can be found there as well. I've included both maven and gradle build files.

Reference: Stateless Spring Security Part 3: JWT + Social Authentication from our JCG partner Robbert van Waveren at the JDriven blog....

Learning Netflix Governator – Part 1

I have been working with Netflix Governator for the last few days and got to try out a small sample using Governator, as a way to compare it with the dependency injection feature set of the Spring Framework. The following is by no means comprehensive; I will expand on this in the next series of posts. So, Governator, for the uninitiated, is an extension to Google Guice enhancing it with some Spring-like features. To quote the Governator site: classpath scanning and automatic binding, lifecycle management, configuration to field mapping, field validation and parallelized object warmup. Here I will demonstrate two features: classpath scanning and automatic binding.

Basic Dependency Injection

Consider a BlogService, depending on a BlogDao:

public class DefaultBlogService implements BlogService {
    private final BlogDao blogDao;

    public DefaultBlogService(BlogDao blogDao) {
        this.blogDao = blogDao;
    }

    @Override
    public BlogEntry get(long id) {
        return this.blogDao.findById(id);
    }
}

If I were using Spring to define the dependency between these two components, the following would be the configuration:

package sample.spring;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import sample.dao.BlogDao;
import sample.service.BlogService;

@Configuration
public class SampleConfig {

    @Bean
    public BlogDao blogDao() {
        return new DefaultBlogDao();
    }

    @Bean
    public BlogService blogService() {
        return new DefaultBlogService(blogDao());
    }
}

In Spring, the dependency configuration is specified in a class annotated with the @Configuration annotation. The methods annotated with @Bean return the components; note how the blogDao is being injected through constructor injection in the blogService method. A unit test for this configuration is the following:

package sample.spring;

import org.junit.Test;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import sample.service.BlogService;

import static org.hamcrest.MatcherAssert.*;
import static org.hamcrest.Matchers.*;

public class SampleSpringExplicitTest {

    @Test
    public void testSpringInjection() {
        AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext();
        context.register(SampleConfig.class);
        context.refresh();

        BlogService blogService = context.getBean(BlogService.class);
        assertThat(blogService.get(1l), is(notNullValue()));
        context.close();
    }
}

Note that Spring provides good support for unit testing; a better test would be the following:

package sample.spring;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import sample.service.BlogService;

import static org.hamcrest.MatcherAssert.*;
import static org.hamcrest.Matchers.*;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration
public class SampleSpringAutowiredTest {

    @Autowired
    private BlogService blogService;

    @Test
    public void testSpringInjection() {
        assertThat(blogService.get(1l), is(notNullValue()));
    }

    @Configuration
    @ComponentScan("sample.spring")
    public static class SpringConfig {}
}

This is basic dependency injection, so to specify such a dependency Governator itself is not required; Guice is sufficient. This is how the configuration would look using Guice modules:
package sample.guice;

import com.google.inject.AbstractModule;
import sample.dao.BlogDao;
import sample.service.BlogService;

public class SampleModule extends AbstractModule {

    @Override
    protected void configure() {
        bind(BlogDao.class).to(DefaultBlogDao.class);
        bind(BlogService.class).to(DefaultBlogService.class);
    }
}

and a unit test for this configuration is the following:

package sample.guice;

import com.google.inject.Guice;
import com.google.inject.Injector;
import org.junit.Test;
import sample.service.BlogService;

import static org.hamcrest.Matchers.*;
import static org.hamcrest.MatcherAssert.*;

public class SampleModuleTest {

    @Test
    public void testExampleBeanInjection() {
        Injector injector = Guice.createInjector(new SampleModule());
        BlogService blogService = injector.getInstance(BlogService.class);
        assertThat(blogService.get(1l), is(notNullValue()));
    }
}

Classpath Scanning and Autobinding

Classpath scanning is a way to detect components by looking for markers in the classpath. A sample with Spring should clarify this:

@Repository
public class DefaultBlogDao implements BlogDao {
    ....
}

@Service
public class DefaultBlogService implements BlogService {

    private final BlogDao blogDao;

    @Autowired
    public DefaultBlogService(BlogDao blogDao) {
        this.blogDao = blogDao;
    }
    ...
}

Here the annotations @Service and @Repository are used as markers to indicate that these are components, and the dependencies are specified by the @Autowired annotation on the constructor of DefaultBlogService. Given this, the configuration is now simplified; we just need to provide the package name that should be scanned for such annotated components, and this is how a full test would look:

package sample.spring;
...
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration
public class SampleSpringAutowiredTest {

    @Autowired
    private BlogService blogService;

    @Test
    public void testSpringInjection() {
        assertThat(blogService.get(1l), is(notNullValue()));
    }

    @Configuration
    @ComponentScan("sample.spring")
    public static class SpringConfig {}
}

Governator provides a similar kind of support:

@AutoBindSingleton(baseClass = BlogDao.class)
public class DefaultBlogDao implements BlogDao {
    ....
}

@AutoBindSingleton(baseClass = BlogService.class)
public class DefaultBlogService implements BlogService {
    private final BlogDao blogDao;

    @Inject
    public DefaultBlogService(BlogDao blogDao) {
        this.blogDao = blogDao;
    }
    ....
}

Here, the @AutoBindSingleton annotation is being used as a marker annotation to define the Guice binding. Given this, a test with classpath scanning is the following:

package sample.gov;

import com.google.inject.Injector;
import com.netflix.governator.guice.LifecycleInjector;
import com.netflix.governator.lifecycle.LifecycleManager;
import org.junit.Test;
import sample.service.BlogService;

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;
import static org.hamcrest.Matchers.notNullValue;

public class SampleWithGovernatorTest {

    @Test
    public void testExampleBeanInjection() throws Exception {
        Injector injector = LifecycleInjector
            .builder()
            .withModuleClass(SampleModule.class)
            .usingBasePackages("sample.gov")
            .build()
            .createInjector();

        LifecycleManager manager = injector.getInstance(LifecycleManager.class);

        manager.start();

        BlogService blogService = injector.getInstance(BlogService.class);
        assertThat(blogService.get(1l), is(notNullValue()));
    }
}

See how the package to be scanned is specified using the LifecycleInjector component of Governator; this autodetects the components and wires them together.
Just to wrap up the classpath scanning and autobinding features: Governator, like Spring, provides support for JUnit testing, and a better test would be the following:

package sample.gov;

import com.google.inject.Injector;
import com.netflix.governator.guice.LifecycleTester;
import org.junit.Rule;
import org.junit.Test;
import sample.service.BlogService;

import static org.hamcrest.MatcherAssert.*;
import static org.hamcrest.Matchers.*;

public class SampleWithGovernatorJunitSupportTest {

    @Rule
    public LifecycleTester tester = new LifecycleTester();

    @Test
    public void testExampleBeanInjection() throws Exception {
        tester.start();
        Injector injector = tester
            .builder()
            .usingBasePackages("sample.gov")
            .build()
            .createInjector();

        BlogService blogService = injector.getInstance(BlogService.class);
        assertThat(blogService.get(1l), is(notNullValue()));
    }
}

Conclusion

If you are interested in exploring this further, I have a sample in this github project; I will be expanding this project as I learn more about Governator.

Reference: Learning Netflix Governator – Part 1 from our JCG partner Biju Kunjummen at the all and sundry blog....

Transforming Collections with Decorators

The Decorator Pattern

Ever since first learning the programming design patterns, the decorator pattern has been my favorite. It seemed such a novel idea to me, and so much more interesting than the others. Don't get me wrong, most of the others blew my mind too, but none so much as the decorator pattern. To this day, it's still one of my favorites. (If you're unfamiliar with design patterns, I highly recommend Head First Design Patterns. If you just want to learn about the decorator pattern, here is an excerpt of the decorator chapter from that book.)

Personally, I believe the decorator pattern is generally underutilized. There are a couple of probable reasons for this. For one, I don't think it applies to all that many situations. For another, problems that can be solved with the decorator pattern are generally fairly difficult to spot. What makes the pattern so mind-blowing to me is the same reason it can be difficult to figure out where it's needed, that reason being that it's such an unusual idea. That is, it seems to be until you're strongly acquainted with the principle of "composition over inheritance". So many places drill inheritance into your head so much that it's really difficult for the mind to believe that composition can often be a better idea than inheritance.

Anyway, not only is the decorator pattern my favorite pattern, it's strongly used in one of my favorite new features of Java 8: the Stream API. In fact, much of what I'm going to show you largely mimics some of the behavior of the Stream API.

The Problem

Let's say you have a list of Strings, but they may or may not have leading or trailing spaces that you don't want. You'd probably do something like this to get rid of the unwanted spaces:

List<String> untrimmedStrings = aListOfStrings();
List<String> trimmedStrings = new ArrayList<>();

for(String untrimmedString : untrimmedStrings) {
    trimmedStrings.add(untrimmedString.trim());
}

//use trimmed strings...

In this case, you create a whole new list of Strings and fill it with the Strings from the first list, but trimmed. There are several problems with this. First off, it creates an entire new list right off the bat. Instead, the creation of each trimmed String could be delayed until needed, and never even be done if it isn't needed. Also, if someone wanted to add more Strings, you'd have to add them to both lists. You'd also have to make sure you trim the new Strings before putting them into the trimmed list. Lastly, this code is imperative instead of declarative. Let's look at a more declarative version of the code, then see how to use it to solve the other problems.

List<String> untrimmedStrings = aListOfStrings();
List<String> trimmedStrings = trimmed(untrimmedStrings);

//use trimmed strings...

Heck, anything could be happening in that trimmed() function! And look at that; it returns a list of Strings, just like the previous way. Fat load of good that did, right? Wrong. Yes, that function could technically just be doing the same thing we did earlier, which means all we did was make this outer code declarative. But in this example, it is intended to be a static factory method (with a static import) that creates a new Trimmed object that wraps the untrimmedStrings list. Trimmed implements the List interface, but it delegates nearly everything to the wrapped list, though often with decorated functionality. When a new String is added or removed, it's done to "both" lists by doing it to the wrapped list.
And when it adds the new String, it can add it as-is, but then it simply needs to make sure that it's trimmed on the way out. Also, since trimming is only done when pulling data from the list, we didn't have to do all the work of trimming every String right away. There's a chance that some of the Strings will never even be dealt with, thus those Strings will never be needlessly trimmed.

There are some downsides to this, though. One, if the trimmed String is pulled from the list multiple times, it ends up getting trimmed every time. This doesn't take any additional memory, but it does add a bit of time, especially if you loop over the entire list several times. Secondly, it creates the sort of side effect of the trimmed list and untrimmed list being the same list. A change to one affects the other, whether we want that or not.

I don't want to waste too much time and space in this article to show you a fully created List implementation of Trimmed (there are over 30 methods to define for List), so I'm going to tweak it so that it's just the Iterable methods that are defined. Since, a lot of the time, all you really do is iterate over collections, this will have to be relatively acceptable.

public class Trimmed implements Iterable<String> {
    public static Iterable<String> trimmed(Iterable<String> base) {
        return new Trimmed(base);
    }

    public Trimmed(Iterable<String> base) {
        this.base = base;
    }

    public Iterator<String> iterator() {
        return new TrimmedIterator(base.iterator());
    }

    private Iterable<String> base;
}

class TrimmedIterator implements Iterator<String> {
    public TrimmedIterator(Iterator<String> base) {
        this.base = base;
    }

    public boolean hasNext() {
        return base.hasNext();
    }

    public String next() {
        return base.next().trim();
    }

    public void remove() {
        throw new UnsupportedOperationException();
    }

    private Iterator<String> base;
}

How To Decorate Objects

I don't recall anyone ever mentioning this anywhere, but it's fairly important, so I want to tell you about it. There are 2 basic schools of thought for how to decorate an object. The first is when you simply create a new instance of the decorator with the decorated/wrapped object passed in. The second option is to call a method on the object to be decorated. Both options are shown here:

MyCollection untrimmedStrings = aCollectionOfStrings();

//new Decorator Instance
MyCollection trimmedStrings = new TrimmingDecorator(untrimmedStrings);

//OR

//method call on the to-be-decorated object
MyCollection trimmedStrings2 = untrimmedStrings.trimmed();

And the code of trimmed() looks like this:

public MyCollection trimmed() {
    return new TrimmingDecorator(this);
}

Either way has its pros and cons. Since each option's cons are essentially the lack of the other option's pros, I'll just list each option's pros.

New Instance Pros:
- More extensible than the method call option, since the method calls have to try to cover every possibility of decorator
- Users can see that it's the decorator pattern more easily
- Fewer methods required in the Decoratable interface

Method Call Pros:
- Hides the decorator implementation if the user has no need to know
- No explicit "new" keywords on the user end (which is generally considered bad)
- Users have an easier time finding all of the decorators, since they're all listed there on the decoratable object's interface

Java's original IO library is a good example of new instance decorating, while the Stream API in Java 8 is a good example of method call decorating.
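Both JDK examples fit in a couple of lines; this snippet is illustrative and not from the article (assuming the usual java.io and java.util.stream imports):

// new-instance decorating, as in java.io:
// BufferedInputStream decorates the FileInputStream passed to its constructor
InputStream in = new BufferedInputStream(new FileInputStream("data.bin"));

// method-call decorating, as in the Java 8 Stream API:
// each call returns a new stream decorating the previous one
Stream<String> s = Stream.of("a", "bb", "ccc")
        .map(String::toUpperCase)
        .filter(str -> str.length() > 1);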
My personal preference is to use the method call option, since it makes all the possibilities obvious to the user, but if the point is to make it so the user can extend your objects with their own decorators, too, then you should definitely go with the new instance route.Reference: Transforming Collections with Decorators from our JCG partner Jacob Zimmerman at the Programming Ideas With Jake blog....

Using Java 8 to Prevent Excessively Wide Logs

Some logs are there to be consumed by machines and kept forever. Other logs are there just to debug and to be consumed by humans. In the latter case, you often want to make sure that you don't produce too many logs, especially not too-wide logs, as many editors and other tools have problems once line lengths exceed a certain size (e.g. this Eclipse bug). String manipulation used to be a major pain in Java, with lots of tedious-to-write loops and branches, etc. No longer with Java 8! The following truncate method will truncate all lines within a string to a certain length:

public String truncate(String string) {
    return truncate(string, 80);
}

public String truncate(String string, int length) {
    return Seq.of(string.split("\n"))
              .map(s -> StringUtils.abbreviate(s, length))
              .join("\n");
}

The above example uses jOOλ 0.9.4 and Apache Commons Lang, but you can achieve the same using vanilla Java 8, of course:

public String truncate(String string) {
    return truncate(string, 80);
}

public String truncate(String string, int length) {
    return Stream.of(string.split("\n"))
                 .map(s -> s.substring(0, Math.min(s.length(), length)))
                 .collect(Collectors.joining("\n"));
}

When truncating logs to a length of 20, the above program will produce:

Input

Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

Output

Lorem ipsum dolor...
incididunt ut lab...
nostrud exercitat...
Duis aute irure d...
fugiat nulla pari...
culpa qui officia...

Happy logging!

Reference: Using Java 8 to Prevent Excessively Wide Logs from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....

MDB != JMS and vice-versa

Basics

- A Message Driven Bean (further referred to as MDB) is just another EJB, like Stateless, Stateful or a Singleton. It's specified using the @MessageDriven annotation.
- MDBs are used for asynchronous message processing.
- They are similar to Stateless EJBs, since both of them are pooled by the EJB container.
- However, they differ from Stateless EJBs in that MDBs cannot be directly accessed by a client. Only the container calls them, in response to a message sent by the client to an endpoint which the MDB is listening to.

Commonly used pattern for MDB

- MDBs are generally used along with JMS (the Java Message Service API). A minimal sketch of this pattern follows at the end of this post.
- An MDB is configured to listen to a JMS destination using @ActivationConfigProperty, implements the javax.jms.MessageListener interface and provides the business logic (message processing) in the onMessage method.
- A component sends a Message to the JMS destination (endpoint). This is not a synchronous process (as already mentioned above). The message firing method returns immediately, and the container takes care of calling the MDB configured to listen to that particular JMS destination.

MDB myth

- MDBs are not part of the JMS spec or coupled with JMS by any means – this is a misconception. MDBs are pooled beans which can process messages in an async fashion and can listen to any endpoint, including a JMS queue or destination (most generally seen). In fact, this has been the case since EJB 2.1, and it is made possible by the JCA (Java Connector Architecture) spec.

What's JCA?

- On a high level, JCA enables Java EE servers to interact with external systems, e.g. legacy enterprise information sources etc., via a standard SPI (not dealing with intricate JCA details here).
- One can use the JCA standard interfaces to build a Resource Adapter (RAR file) for a specific system.
- JCA provides contracts for two-way communication (inbound and outbound) between the Java EE container and the external system – the implementation for which needs to be present within the Resource Adapter itself.

How does JCA enable the concept of generic MDBs?

- JCA defines MDB-specific features.
- Just like in the case of a JMS based MDB, a JCA based MDB also needs to implement an interface and define activation properties (both are specific to the JCA Resource Adapter implementation).
- The external system sends a message which the Resource Adapter accepts via its implementation of the inbound JCA contract, and this message is relayed to an internal endpoint (this is again specific to the JCA adapter implementation).
- The MDB registered to this endpoint kicks in and executes the business logic on the received message.

End result

An external system sends messages to a Java EE container using a standard interface (JCA), while the JCA implementation takes care of delivering them to the appropriate endpoint, which further delivers them to the registered MDB. The thing to notice is that this is completely portable across Java EE servers, since EJB spec vendors have to support JCA based MDBs.

Further reading

- JCA specification
- JMS specification
- EJB specification

Reference: MDB != JMS and vice-versa from our JCG partner Abhishek Gupta at the Object Oriented.. blog....
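As promised above, here is a minimal sketch of the common JMS-based MDB pattern. The bean and queue names are illustrative, not from the post:

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// a pooled bean the container invokes asynchronously for each message
// arriving on the configured JMS queue
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "jms/OrderQueue"),
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
})
public class OrderProcessorMDB implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            // business logic goes here; clients never call this bean directly
            String payload = ((TextMessage) message).getText();
            System.out.println("Processing: " + payload);
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}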

Advanced Creation of Hamcrest Matchers

Intro

Last time, I went over what a Hamcrest Matcher was, how it's used, and how to make one. In this article, I will explain more advanced steps in the creation of Hamcrest Matchers. First, I'll share how to make your matchers more easily type-safe, then some techniques for stateless Matchers, then finally how to cut down on so many static imports on your test classes. I'll also give some quick tips on naming your static factory methods.

Typesafe Matchers

You may have noticed in the matches() method that we developed last time, I put in a comment that I had used the "yoda condition" to avoid a null check as well as a type check. First off, it wouldn't hurt to do a little bit of research on yoda conditions yourself (I may put out an article about it someday, but no guarantees), but the biggest thing to note here is that some sort of type check and null check is needed. This is because the matches() method takes in an Object, not the type specified in the generics argument. As is described in Hamcrest's documentation:

This method matches against Object, instead of the generic type T. This is because the caller of the Matcher does not know at runtime what the type is (because of type erasure with Java generics).

Because of this, we need to make sure of the type of the Object being passed in. Also, we should make sure there are no nulls being passed in (unless our specific Matcher is okay with that, but that's super rare), or at least make certain that a null being passed in won't cause a NullPointerException.

But there's an easier way: the TypeSafeMatcher. If you extend this class instead of the BaseMatcher class, it'll do the type checking and null checking for you, then pass the object to a matching method that only takes the generics-specified type. Defining a TypeSafeMatcher is very similar to defining a Matcher the way we did last time, with a few differences: instead of overriding matches(), you override matchesSafely(), which takes in the generic type instead of Object; and instead of overriding describeMismatch(), you override describeMismatchSafely(). It may be a surprise that there isn't a new describeTo(), but seeing as that doesn't take in anything other than the Description, there's no need for a type safe version. Otherwise, creating the TypeSafeMatcher is very much the same.

I have to mention something that I forgot last week, though. Someone who is defining their own Matchers doesn't need to override the describeMismatch() or describeMismatchSafely() methods. BaseMatcher and TypeSafeMatcher both have default implementations of those methods that simply output "was item.toString()" (or "was a itemClassName(item.toString())" if the TypeSafeMatcher gets an item of an incorrect type). These default implementations are generally good enough, but if a type being worked with doesn't have a useful implementation of toString(), it's obviously more useful to use your own mismatch message that describes what is wrong with the item. I always do, even if the class has a decent toString() implementation, since it can direct a little more quickly to the problem.

A Note About Other Extendable Matcher Classes

There are several other Matcher classes in the Hamcrest core library that are meant for users to extend from. These come in a few flavors. First off, there's CustomMatcher and CustomTypeSafeMatcher. These are designed for making one-off Matchers via anonymous classes. They can be useful, but I'd prefer to always make a proper implementation in case I ever do need it again.
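For illustration, a one-off matcher of that kind might look like this (a hedged sketch, not from the article; the vowel check is an arbitrary example):

import org.hamcrest.CustomTypeSafeMatcher;
import org.hamcrest.Matcher;

// a throwaway matcher built inline via an anonymous class;
// the constructor argument is the "Expected:" description printed on failure
Matcher<String> startsWithVowel = new CustomTypeSafeMatcher<String>("a string starting with a vowel") {
    @Override
    protected boolean matchesSafely(String item) {
        return !item.isEmpty() && "aeiouAEIOU".indexOf(item.charAt(0)) >= 0;
    }
};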
Next, there's the DiagnosingMatcher and the TypeSafeDiagnosingMatcher, which have you create the mismatch description within the matches() method. This would seem like a nice way to kill two birds with one stone, but I have several beefs with it: 1) it violates the SRP; 2) if there's a mismatch, it makes a second call to the matches() method just to fill in the mismatch description. So the first call ignores getting the description, and the second ignores the matching.

The last special Matcher that you can extend is the FeatureMatcher. This can be fairly useful, but it's complicated to understand (I'm not sure if I understand it correctly – not until I try making one of my own or reading up on how to do one). If I figure it out and gain a good understanding, I'll write another post here for you.

Stateless Matchers

Any Matcher that doesn't require anything passed into its constructor (and therefore, its static factory method) is a stateless Matcher. They have a nice little advantage over other Matchers in that you only need a single instance of it to exist at any point, which can be reused any time you need to use that Matcher. This is a really simple addition. All you need to do is create a static instance of the class and have your static factories return that instance instead of calling the constructor. The IsEmptyString Matcher that actually comes with the library does this (our example last time didn't, but that was for simplicity's sake).

Reducing the Number of Static Imports

After writing a fair few tests with Hamcrest Matchers, you'll probably notice that you have quite a few static imports at the top of your file. This can become a big fat nuisance after a while, so let's look at something to reduce this problem. This is actually almost as simple a solution as the last one. You can reduce the static imports by creating a new class that essentially does it for you. This new class has those annoying static imports, but then it defines its own static factory methods that delegate to the originals. Here's an example of combining some core Matchers into one place:

import org.hamcrest.core.IsEqual;
import org.hamcrest.core.IsNull;
import org.hamcrest.core.IsSame;
import org.hamcrest.Matcher;

public class CoreMatchers {
    public static <T> Matcher<T> equalTo(T object) {
        return IsEqual.equalTo(object);
    }

    public static Matcher<Object> notNullValue() {
        return IsNull.notNullValue();
    }

    public static <T> Matcher<T> notNullValue(Class<T> type) {
        return IsNull.notNullValue(type);
    }

    public static Matcher<Object> nullValue() {
        return IsNull.nullValue();
    }

    public static <T> Matcher<T> nullValue(Class<T> type) {
        return IsNull.nullValue(type);
    }

    public static <T> Matcher<T> sameInstance(T target) {
        return IsSame.sameInstance(target);
    }

    public static <T> Matcher<T> theInstance(T target) {
        return IsSame.theInstance(target);
    }
}

Then, to use any or all of those Matchers, you only need to do a static import of CoreMatchers.*

There is also a way to generate these combined Matcher classes, shown on the official Hamcrest tutorials. I won't go over it, since it's outside the scope of this article, and I'm not a fan of it.

Closing Tips: Naming

If you go through the official Hamcrest tutorial and/or look over the built-in Matchers, you may notice a trend for the naming of the static factory methods. The general grammar matches "assert that testObject is factoryMethod".
The grammar of the method name is generally designed to be a present tense action that can be preceded with "is". When naming your own static factory methods, you should usually follow this convention, but I actually suggest putting "is" into the name already. That way, users of your Matcher don't need to nest your method inside the is() method. If you do this, though, you will need to create the inverse function too. The reason to allow the is() method to wrap your Matcher is so you can also wrap it in the not() method to test the inverse of what you're already testing. This leads to a sentence like "assert that testObject is not factoryMethod".

If you feel that following the convention is too restrictive for your specific Matcher, just make sure you're using a present tense action test. For example, I made a matcher that checks for an exception being thrown whose static factory method is throwsA(). I just didn't like naming it throwingA() in order to work with "is". But, again, if you break the convention, you have to be certain to create an inverse static factory method; doesntThrowA(), for example.

If you're implementing your own inverse factories, the simplest way to do so is usually to wrap your positive factory with not(). So, my doesntThrowA() method would return not(throwsA()). Be careful, though: simply reversing true and false sometimes doesn't actually give the proper inverse you're going for.

Outro

Well, that's all I have for you. If there's anything else about Hamcrest Matchers you'd like me to go over, let me know in the comments. Otherwise, you can do your own research on Hamcrest Matchers on its github page. Next week, I'm going to go over how you can get your Hamcrest Matchers to check multiple things in a similar fluent way that AssertJ does their assertions.

Reference: Advanced Creation of Hamcrest Matchers from our JCG partner Jacob Zimmerman at the Programming Ideas With Jake blog....

Redesigning Hamcrest

I've done a few posts on the Hamcrest library, and I really do enjoy using it, but there are a few changes I would love to make to it. I understand most of the design decisions that they made, but I think some of them weren't really worth it.

Introducing Litecrest

Most of the changes I would make to the library help to lighten the load of Hamcrest, since I feel like there are a few things that weigh it down unnecessarily. This is why I call my changes Litecrest. It won't be an actual library; this is all just thinking aloud. I also hope that you'll learn a little about designing libraries from this.

No Descriptions

The Description interface and the StringDescription and BaseDescription classes aren't really worthwhile. They provide some nice methods for converting lists to nice Strings, but the toString() method on all of those should be sufficient. If not, one could put some protected final methods on the BaseMatcher to use for conveniently building Strings for lists. Granted, this doesn't really follow SRP that closely, so you could use something like Description to provide the convenience methods. Description, otherwise, isn't very helpful. Its very presence supposes that it's there specifically to provide an output that may not be a String in the long run. Since this is a well-used library, changing it from String to an output-agnostic type down the road would break backwards compatibility, but such a change isn't likely to be needed. Apply YAGNI, and the Description class goes right down the toilet.

No Out Parameters

The describeTo() and describeMismatch() methods should not be taking in a Description or any other type of String-appending object, especially as an out parameter (something to avoid as often as possible). Seeing as those methods don't have a return type to begin with, there's definitely no reason to use an out parameter. Looking at the problem a little closer, you'll see there's no reason for a parameter at all. I understand that they may have been trying to force the creators of matchers to not use String concatenation, but that shouldn't be. If a matcher's description was just a simple little String, there's no reason why they shouldn't be able to just return that String. Personally, I would have removed the Description parameters and given them a return type of String or CharSequence. I consider CharSequence because it gives a higher incentive to use StringBuilder, but simply returning a String is no big deal either, since they can call toString() on it. I would probably go with CharSequence, though, since I'd be using a StringBuilder in the assertion logic to put together the output, and StringBuilders can take in CharSequences too, so the only toString() that would ever have to be called is when finalizing the output.

Type-Safety

The Matcher interface takes in a generic parameter, which is meant to go with the matches() method, but said method takes in an Object instead of the generic type. The javadoc claims that this is because of type erasure, but I don't see how that's a problem. I haven't done any digging to try out whether you could switch it over to the generic type, but if I found that you actually could use the generic type, I would. This eliminates the need for the TypeSafeMatcher, which, because it also checks for null, could be replaced with a simpler NullCheckingMatcher, or we could just implement it so that the assertion will change the mismatch description to "was null" if it catches a NullPointerException.
By doing all of this, we can possibly eliminate all the other base classes that had to be doubled up just to cover the type-safe matchers and matchers that are less so (examples: CustomMatcher and CustomTypeSafeMatcher, DiagnosingMatcher and TypeSafeDiagnosingMatcher, and my doubled-up ChainableMatchers – heck, get rid of both DiagnosingMatchers; they're a poor design, calling matches() twice).

Change Some Names

I really don't like the name describeTo(). It should be describeExpected() or describeMatch(). I understand that they were following the naming convention of SelfDescribing in the JMock Constraints, but seeing as they didn't bother to finish copying the rest of the method signature, it doesn't really do any good. CustomMatchers should be called OneOffMatchers or QuickMatchers. Custom is a misleading name, making it sound like you need to extend from it in order to even make your own matchers.

More Examples in Documentation

There are a few classes in the library where I'm not sure how useful they are, because their documentation doesn't show how they're used. Condition is one of those. From the little bit of documentation, it sounds like it would be relatively useful, but since it provides no examples of use (and it's a relatively complex file, with an inner interface and two inner classes), I have no idea how to use it. It also doesn't document its public methods, so I'm not sure what they do without a lot of digging. FeatureMatcher is decently documented, but again, there are no examples. Those writing documentation for a library must keep that in mind at all times; if it's not totally obvious (often, even if it is), you should give examples of your class in use.

Remove Extraneous Classes

Some of these have already been gone over, whether directly or indirectly. Remove Description and all of its subclasses. Remove SelfDescribing, since it's really only useful if Description still exists. Remove all the TypeSafe versions of base matchers. Remove the Diagnosing matchers. I'm not sure if I should remove Condition, because I don't know how useful it is. If we keep Condition, then we end up with five of the original eleven classes in the core org.hamcrest package and two of the original four interfaces in the api org.hamcrest package.

Now let's dig into the org.hamcrest.internal package. ArrayIterator isn't useful, since arrays can already be used with a foreach loop. NullSafety seems to mimic Arrays.asList() functionality, but replaces null matchers with the IsNull matcher. I don't see how this is helpful, so I'll remove it. ReflectiveTypeFinder may end up being useful. I've only seen it used in TypeSafeMatcher and FeatureMatcher, though I'm not sure how much it's used in FeatureMatcher. I'll keep it, though. The last two deal with SelfDescribing, which we've removed, so these two go as well. That leaves only ReflectiveTypeFinder from the five classes that used to be here.

I'm not going to go into all the other matchers; for the most part, they've been added for their usefulness. There would likely have to be changes to almost all of them, due to the removal of so many of the base classes.

Lambdas!

You could expand the usefulness of the matcher idea if you applied the new functional paradigm to hamcrest as well.
Lambdas!
You could expand the usefulness of the matcher idea if you applied the new functional paradigm to Hamcrest as well. I haven’t thought of much, but for one-off matchers, you could modify the library to include a new assertThat() method that looks like this:

public static <T> void assertThat(T item, String description, Predicate<T> matcher) {
   if(!matcher.test(item)) {
      StringBuilder output = new StringBuilder();
      output.append("Expected: ")
            .append(description)
            .append("\n but: was ")
            .append(item);
      throw new AssertionError(output.toString());
   }
}

This would allow you to write assertions similar to:

assertThat("cats", "doesn't contain \"dogs\"", str -> !str.contains("dogs"));

In fact, I’ve actually added a LambdaAssert class to my ez-testing mini library, so you can use this with the original Hamcrest library.

Matcher Interface
The Matcher interface is essentially pointless, because Hamcrest wants you to extend BaseMatcher rather than implement Matcher. Why would you create an interface that you very strictly don’t want anyone to implement? Especially since the only thing BaseMatcher does for us is provide a default implementation of describeMismatch() (that, and “implement” the deprecated method that was put there to tell you to use BaseMatcher instead of Matcher). If you really, really don’t want people using the interface, then get rid of it. Personally, since I often override describeMismatch() anyway, I feel that it should be totally okay to simply implement the interface, instead of having to make the JVM load a base class that actually provides nothing for me. Plus, since we have Java 8 now, the interface could just use a default method to provide the default implementation, as sketched below. I can understand wanting to avoid this, though, since older versions of Java would not be able to utilize it. So, either just make BaseMatcher or be okay with Matcher being implemented.
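For illustration, here is a minimal sketch (assuming Java 8) of that idea, building on the interface sketched earlier; the names remain illustrative rather than Hamcrest’s actual API:

public interface Matcher<T> {
    boolean matches(T item);

    CharSequence describeExpected();

    // The default implementation BaseMatcher used to provide, now carried by
    // the interface itself; matchers wanting a better message override it.
    default CharSequence describeMismatch(T item) {
        return "was " + item;
    }
}

Outro
There are other little things I would like to change, such as forcing people to override describeMismatch() instead of providing a default, but I’m not even certain about that one, since the default would generally be effective enough. Anyway, even if you have a popular library, it doesn’t mean that it’s perfect. Always be on the lookout for refactoring you can do. Unfortunately, all of these changes would not be backwards-compatible, but sometimes it’s worth it.

Reference: Redesigning Hamcrest from our JCG partner Jacob Zimmerman at the Programming Ideas With Jake blog....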

Java Lambdas and Low Latency

Overview
The main question around the use of lambdas in Java and low latency is: do they produce garbage, and is there anything you can do about it?

Background
I am working on a library which supports different wire protocols. The idea is that you can describe the data you want to write/read, and the wire protocol determines whether it uses text with field names like JSON or YAML, text with field numbers like FIX, or binary like BSON or a binary form of YAML, with field names, field numbers, or no field metadata at all. The values can be fixed length, variable length, and/or self-describing data types. The library can handle a variety of schema changes, or, if you can determine that the schema is the same (e.g. over a TCP session), you can skip all that and just send the data. Another big idea is using lambdas to support this.

What is the problem with Lambdas
The main issue is the need to avoid significant garbage in low-latency applications. Notionally, every time you see lambda code, this is a new object. Fortunately, Java 8 has significantly improved Escape Analysis, which allows the JVM to replace new objects by unpacking them onto the stack, effectively giving you stack allocation. This feature was available in Java 7; however, it rarely eliminated objects then. Note: when you use a profiler, it tends to prevent Escape Analysis from working, so you can’t trust profilers that use code injection, as the profiler might say an object is being created when, without the profiler, no object is created. Flight Recorder does appear to mess with Escape Analysis.

Escape Analysis has always had quirks, and it appears that it still does. For example, if you have an IntConsumer or any other primitive consumer, the allocation of the lambda can be eliminated in Java 8 update 20 to update 40. The exception is boolean, where this doesn’t appear to happen. Hopefully this will be fixed in a future version. Another quirk is that the size (after inlining) of the method where the object elimination occurs matters: in relatively modest methods, escape analysis can give up.

A specific case
In my case I have a read method which looks like this:

public void readMarshallable(Wire wire) throws StreamCorruptedException {
    wire.read(Fields.I).int32(this::i)
        .read(Fields.J).int32(this::j)
        .read(Fields.K).int32(this::k)
        .read(Fields.L).int32(this::l)
        .read(Fields.M).int32(this::m)
        .read(Fields.N).int32(this::n)
        .read(Fields.O).int32(this::o)
        .read(Fields.P).int32(this::p)
        .read(Fields.Q).int32(this::q)
        .read(Fields.R).int32(this::r)
        .read(Fields.S).int32(this::s)
        .read(Fields.T).int32(this::t)
        .read(Fields.U).int32(this::u)
        .read(Fields.V).int32(this::v)
        .read(Fields.W).int32(this::w)
        .read(Fields.X).int32(this::x);
}

I am using lambdas for setting the fields so the framework can handle optional, missing, or out-of-order fields. In the optimal case, the fields are available in the order provided. In the case of a schema change, the order may be different, or the set of fields may be different. The use of lambdas allows the framework to handle in-order and out-of-order fields differently.

Using this code, I performed a test, serializing and deserializing the object 10 million times. I configured the JVM to have an Eden size of 10 MB with -Xmn14m -XX:SurvivorRatio=5. This makes Eden 5x the size of each of the two survivor spaces (a 5:1:1 split), so Eden is 5/7 of the 14 MB young generation, i.e. 10 MB. With an Eden size of 10 MB and 10 million tests, I can estimate the garbage created by counting the number of GCs printed by -verbose:gc: each GC I get corresponds to an average of one byte of garbage per test. A sketch of this estimation technique follows.
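The following is a minimal, self-contained sketch of this measurement approach, not the article’s actual harness: the workload is a placeholder, and GC counts are read via the standard GarbageCollectorMXBean rather than parsed from -verbose:gc output.

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GarbageEstimator {
    // 10 MB Eden, as configured with -Xmn14m -XX:SurvivorRatio=5.
    private static final long EDEN_BYTES = 10L << 20;
    private static final int TESTS = 10_000_000;

    public static void main(String[] args) {
        long gcsBefore = gcCount();
        for (int i = 0; i < TESTS; i++)
            doOneTest(); // stand-in for one serialize/deserialize cycle
        long gcs = gcCount() - gcsBefore;
        // Each young GC implies roughly one Eden's worth (10 MB) of allocation,
        // so at 10 million tests, one GC is about one byte per test.
        System.out.printf("GCs: %d, ~%.2f bytes/test%n",
                gcs, gcs * (double) EDEN_BYTES / TESTS);
    }

    private static long gcCount() {
        long count = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans())
            count += Math.max(0, gc.getCollectionCount()); // -1 if unavailable
        return count;
    }

    private static void doOneTest() {
        // placeholder for the real workload
    }
}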
When I varied the number of fields serialized and deserialized, I got the following result on an Intel i7-3970X (chart in the original post): for 1 to 8 fields deserialized, i.e. up to 8 lambdas in the same method, there is almost no garbage created, i.e. at most one GC. At 9 or more fields or lambdas, however, escape analysis fails, and garbage is created, increasing linearly with the number of fields. I wouldn’t want you to believe that 8 is some magic number; it is far more likely to be a limit on the size in bytes of the method, though I couldn’t find such a command-line setting. The difference occurs when the method grows to 170 bytes.

Is there anything which can be done?
The simplest “fix” turned out to be breaking the code into two methods (possibly more if needed). By deserializing half the fields in one method and half in another, the code was able to deserialize 9 to 16 fields without garbage; these are the “bytes (2)” and “ns (2)” results. By eliminating garbage, the code also runs faster on average (a sketch of such a split appears at the end of this article). Note: the time to serialize and deserialize an object with 14 x 32-bit integers was under 100 ns.

Other notes
When I used a profiler, YourKit in this case, code which produced no garbage started producing garbage as the Escape Analysis failed. I printed the method inlining and found that assert statements in some key methods prevented them from being inlined, as they made the methods larger. I fixed this by creating a subclass of my main class, with assertions on, to be created by a factory method when assertions are enabled. The default class has no assertions and no performance impact. Before I moved these assertions, I could only deserialize 7 fields without triggering garbage. When I replaced the lambdas with anonymous inner classes, I saw similar object elimination, though in most cases, if you can use a lambda, that is preferred.

Conclusion
Java 8 appears to be much smarter at removing garbage produced by very short-lived objects. This means that techniques such as passing lambdas can be an option in low-latency applications.

EDIT
I have found an option which helps in this situation, though I am not yet sure why. The default is -XX:InlineSmallCode=1000; if I change it to -XX:InlineSmallCode=5000, the “fixed” example above starts producing garbage, whereas if I reduce it to -XX:InlineSmallCode=500, even the code example I gave originally performs without producing garbage.
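Here is a sketch of how such a split might look, based on the readMarshallable() method shown above; the article doesn’t show the exact split, so the halving below is illustrative:

public void readMarshallable(Wire wire) throws StreamCorruptedException {
    readFields1(wire);
    readFields2(wire);
}

// Each method now stays small enough after inlining for Escape Analysis
// to eliminate the lambda allocations.
private void readFields1(Wire wire) {
    wire.read(Fields.I).int32(this::i)
        .read(Fields.J).int32(this::j)
        .read(Fields.K).int32(this::k)
        .read(Fields.L).int32(this::l)
        .read(Fields.M).int32(this::m)
        .read(Fields.N).int32(this::n)
        .read(Fields.O).int32(this::o)
        .read(Fields.P).int32(this::p);
}

private void readFields2(Wire wire) {
    wire.read(Fields.Q).int32(this::q)
        .read(Fields.R).int32(this::r)
        .read(Fields.S).int32(this::s)
        .read(Fields.T).int32(this::t)
        .read(Fields.U).int32(this::u)
        .read(Fields.V).int32(this::v)
        .read(Fields.W).int32(this::w)
        .read(Fields.X).int32(this::x);
}

Reference: Java Lambdas and Low Latency from our JCG partner Peter Lawrey at the Vanilla Java blog....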