SSL / TLS REST Server – Client with Spring and TomEE

When building a system, developers usually disregard the security aspects. Security has always been something very important to worry about, but it's attracting even more concern than before. Just this year we had a few cases like the Heartbleed Bug or the CelebrityGate scandal. These have nothing to do with the post, but they are examples that security really matters and we should be aware of it. With the increasing popularity of REST services it makes sense that these need to be secured in some way. A couple of weeks ago, I had to integrate my client with a REST service behind https. I had never done it before and that's the reason for this post. I have to confess that I'm no security expert myself, so please correct me if I write anything stupid.

The Setup

For this example I have used the following setup:

TomEE (or Tomcat) with SSL Configuration
Spring
Apache HTTP Components

I'm not going into many details about SSL and TLS, so please check here for additional content. Note that TLS is the new name for the evolution of SSL. Sometimes there is confusion between the two, and people often say SSL but actually use the newest version of TLS. Keep that in mind. Don't forget to follow the instructions on the following page to set up SSL for Tomcat: SSL Configuration HOW-TO. This is needed for the server to present the client with a set of credentials, a Certificate, to secure the connection between server and client.

The Code

Service

Let's create a simple Spring REST service:

RestService.java

@Controller
@RequestMapping("/")
public class RestService {
    @RequestMapping(method = RequestMethod.GET)
    @ResponseBody
    public String get() {
        return "Called the get Rest Service";
    }
}

And we also need some wiring for this to work:

RestConfig.java

@Configuration
@EnableWebMvc
@ComponentScan(basePackages = "com.radcortez.rest.ssl")
public class RestConfig {}

web.xml

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="3.1" xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                             http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd">
    <servlet>
        <servlet-name>rest</servlet-name>
        <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
        <init-param>
            <param-name>contextClass</param-name>
            <param-value>org.springframework.web.context.support.AnnotationConfigWebApplicationContext</param-value>
        </init-param>
        <init-param>
            <param-name>contextConfigLocation</param-name>
            <param-value>com.radcortez.rest.ssl</param-value>
        </init-param>
        <load-on-startup>1</load-on-startup>
    </servlet>

    <servlet-mapping>
        <servlet-name>rest</servlet-name>
        <url-pattern>/</url-pattern>
    </servlet-mapping>

    <security-constraint>
        <web-resource-collection>
            <web-resource-name>Rest Application</web-resource-name>
            <url-pattern>/*</url-pattern>
        </web-resource-collection>
        <user-data-constraint>
            <!-- Needed for our application to respond to https requests -->
            <transport-guarantee>CONFIDENTIAL</transport-guarantee>
        </user-data-constraint>
    </security-constraint>
</web-app>

Please note the elements security-constraint, user-data-constraint and <transport-guarantee>CONFIDENTIAL</transport-guarantee>. These are needed to specify that the application requires a secure connection. Check Securing Web Applications for Java applications.

Running the Service

Just deploy the application on the TomEE server using your favourite IDE and access https://localhost:8443/.
You should get the following (you might need to accept the server certificate first):

Note that the browser protocol is https and the port is 8443 (assuming that you kept the default settings from the SSL Configuration HOW-TO).

Client

Now, if you try to call this REST service with a Java client, you are most likely going to get the following message and exception (or similar):

Message: I/O error on GET request for "https://localhost:8443/":sun.security.validator.ValidatorException:

Exception: Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

This happens because the running JDK does not have a valid certificate for your server. You could import it and get rid of the problem, but let's do something more interesting. We are going to programmatically supply a trusted keystore with our server certificate. This is especially useful if:

you are running your code in multiple environments
you don't want to manually import the certificate into the JDK every time
you don't want to have to remember the certificates when you upgrade the JDK
for some odd reason you don't have access to the JDK itself to import the certificate

Let's write some code:

RestClientConfig.java

@Configuration
@PropertySource("classpath:config.properties")
public class RestClientConfig {
    @Bean
    public RestOperations restOperations(ClientHttpRequestFactory clientHttpRequestFactory) throws Exception {
        return new RestTemplate(clientHttpRequestFactory);
    }

    @Bean
    public ClientHttpRequestFactory clientHttpRequestFactory(HttpClient httpClient) {
        return new HttpComponentsClientHttpRequestFactory(httpClient);
    }

    @Bean
    public HttpClient httpClient(@Value("${keystore.file}") String file,
                                 @Value("${keystore.pass}") String password) throws Exception {
        KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
        FileInputStream instream = new FileInputStream(new File(file));
        try {
            trustStore.load(instream, password.toCharArray());
        } finally {
            instream.close();
        }

        SSLContext sslcontext =
            SSLContexts.custom().loadTrustMaterial(trustStore, new TrustSelfSignedStrategy()).build();
        SSLConnectionSocketFactory sslsf =
            new SSLConnectionSocketFactory(sslcontext, new String[]{"TLSv1.2"}, null,
                                           BROWSER_COMPATIBLE_HOSTNAME_VERIFIER);
        return HttpClients.custom().setSSLSocketFactory(sslsf).build();
    }

    @Bean
    public static PropertySourcesPlaceholderConfigurer propertySourcesPlaceholderConfigurer() {
        return new PropertySourcesPlaceholderConfigurer();
    }
}

Here we use Spring's RestOperations interface, which specifies a basic set of RESTful operations. Next we use the Apache HTTP Components SSLConnectionSocketFactory, which gives us the ability to validate the identity of the server against a list of trusted certificates. The certificate is loaded by KeyStore from the same file used on the server.

RestServiceClientIT.java

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = RestClientConfig.class)
public class RestServiceClientIT {
    @Autowired
    private RestOperations rest;

    @Test
    public void testRestRequest() throws Exception {
        ResponseEntity response = rest.getForEntity("https://localhost:8443/", String.class);
        System.out.println("response = " + response);
        System.out.println("response.getBody() = " + response.getBody());
    }
}

A simple test class.
We also need a properties file with the keystore file location and password:

config.properties

keystore.file=${user.home}/.keystore
keystore.pass=changeit

This should work fine if you used all the defaults.

Running the Test

If you now run the test, which invokes the REST service with a Java client, you should get the following output:

Response: <200 OK,Called the get Rest Service,{Server=[Apache-Coyote/1.1], Cache-Control=[private], Expires=[Thu, 01 Jan 1970 01:00:00 WET], Content-Type=, Content-Length=[27], Date=[Tue, 23 Dec 2014 01:29:20 GMT]}>

Body: Called the get Rest Service

Conclusion

That's it! You can now call your REST service with your client in a secured manner. If you prefer to add the certificate to the JDK keystore, please check this post. Stay tuned for a Java EE JAX-RS equivalent.

Resources

You can clone a full working copy from my GitHub repository: REST SSL.

Reference: SSL / TLS REST Server – Client with Spring and TomEE from our JCG partner Roberto Cortez at the Roberto Cortez Java Blog blog....
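As an aside, when debugging trust problems like the PKIX error above, it can help to see exactly which certificate chain the server presents. The following diagnostic sketch (my own addition, not from the original post) connects with a deliberately permissive trust manager just to print the chain; never use a trust-all manager for real traffic:

import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
import java.security.cert.Certificate;
import java.security.cert.X509Certificate;

public class ShowServerCert {
    public static void main(String[] args) throws Exception {
        // Accept anything - acceptable ONLY for printing the chain while debugging.
        TrustManager[] trustAll = { new X509TrustManager() {
            public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
            public void checkClientTrusted(X509Certificate[] chain, String authType) { }
            public void checkServerTrusted(X509Certificate[] chain, String authType) { }
        }};
        SSLContext context = SSLContext.getInstance("TLS");
        context.init(null, trustAll, null);
        try (SSLSocket socket = (SSLSocket) context.getSocketFactory()
                .createSocket("localhost", 8443)) {
            socket.startHandshake();
            for (Certificate cert : socket.getSession().getPeerCertificates()) {
                System.out.println(((X509Certificate) cert).getSubjectDN());
            }
        }
    }
}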

How to mock Spring bean without Springockito

I have worked with Spring for several years, but I was always frustrated with how messy XML configuration can become. As various annotations and the possibilities of Java configuration popped up, I started to enjoy programming with Spring. That is why I strongly encourage using Java configuration. In my opinion, XML configuration is suitable only when you need to visualize a Spring Integration or Spring Batch flow. Hopefully Spring Tool Suite will be able to visualize Java configurations for these frameworks also.

One of the nasty aspects of XML configuration is that it often leads to huge XML configuration files. Developers therefore often create a separate test context configuration for integration testing. But what is the purpose of integration testing when the production wiring isn't tested? Such an integration test has very little value. So I was always trying to design my production contexts in a testable fashion. I expect that when you are creating a new project / module you will avoid XML configuration as much as possible. With Java configuration you can create a Spring configuration per module / package and scan them in the main context (@Configuration is also a candidate for component scanning). This way you can naturally create islands of Spring beans. These islands can be easily tested in isolation.

But I have to admit that it's not always possible to test a production Java configuration as is. Rarely, you need to amend the behavior of, or spy on, certain beans. There is a library for this called Springockito. To be honest, I haven't used it so far, because I always try to design my Spring configuration to avoid the need for mocking. Looking at Springockito's pace of development and the number of open issues, I would be a little bit worried about introducing it into my test suite stack. The fact that the last release was done before the Spring 4 release brings up questions like "Is it possible to easily integrate it with Spring 4?". I don't know, because I didn't try it.

I prefer a pure Spring approach if I need to mock a Spring bean in an integration test. Spring provides the @Primary annotation for specifying which bean should be preferred in the case when two beans of the same type are registered. This is handy because you can override a production bean with a fake bean in an integration test. Let's explore this approach and some of its pitfalls with examples. I chose this simplistic / dummy production code structure for demonstration:

@Repository
public class AddressDao {
    public String readAddress(String userName) {
        return "3 Dark Corner";
    }
}

@Service
public class AddressService {
    private AddressDao addressDao;

    @Autowired
    public AddressService(AddressDao addressDao) {
        this.addressDao = addressDao;
    }

    public String getAddressForUser(String userName){
        return addressDao.readAddress(userName);
    }
}

@Service
public class UserService {
    private AddressService addressService;

    @Autowired
    public UserService(AddressService addressService) {
        this.addressService = addressService;
    }

    public String getUserDetails(String userName){
        String address = addressService.getAddressForUser(userName);
        return String.format("User %s, %s", userName, address);
    }
}

The AddressDao singleton bean instance is injected into AddressService. AddressService is similarly used in UserService. I have to warn you at this stage: my approach is slightly invasive to production code. To be able to fake existing production beans, we have to register fake beans in the integration test.
But these fake beans are usually in the same package sub-tree as the production beans (assuming you are using the standard Maven file structure: "src/main/java" and "src/test/java"). Since they are in the same package sub-tree, they would be scanned during integration tests. But we don't want to use all bean fakes in all integration tests; fakes could break unrelated integration tests. So we need a mechanism to tell the test to use only certain fake beans. This is done by excluding fake beans from component scanning completely. Each integration test explicitly defines which fake/s are being used (I will show this later).

Now let's take a look at the mechanism for excluding fake beans from component scanning. We define our own marker annotation:

public @interface BeanMock {
}

And exclude the @BeanMock annotation from component scanning in the main Spring configuration:

@Configuration
@ComponentScan(excludeFilters = @Filter(BeanMock.class))
@EnableAutoConfiguration
public class Application {
}

The root package of the component scan is the package of the Application class, so all the production beans above need to be in the same package or a sub-package.

We now need to create an integration test for UserService. Let's spy on the address service bean. Of course such testing doesn't make practical sense with this production code, but this is just an example. So here is our spying bean:

@Configuration
@BeanMock
public class AddressServiceSpy {
    @Bean
    @Primary
    public AddressService registerAddressServiceSpy(AddressService addressService) {
        return spy(addressService);
    }
}

The production AddressService bean is autowired from the production context, wrapped in a Mockito spy and registered as the primary bean for the AddressService type. The @Primary annotation makes sure that our fake bean will be used in the integration test instead of the production bean. The @BeanMock annotation ensures that this bean can't be picked up by Application's component scanning. Let's take a look at the integration test now:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = { Application.class, AddressServiceSpy.class })
public class UserServiceITest {
    @Autowired
    private UserService userService;

    @Autowired
    private AddressService addressService;

    @Test
    public void testGetUserDetails() {
        // GIVEN - spring context defined by Application class

        // WHEN
        String actualUserDetails = userService.getUserDetails("john");

        // THEN
        Assert.assertEquals("User john, 3 Dark Corner", actualUserDetails);
        verify(addressService, times(1)).getAddressForUser("john");
    }
}

The @SpringApplicationConfiguration annotation has two parameters. The first (Application.class) declares the Spring configuration under test. The second parameter (AddressServiceSpy.class) specifies the fake bean that will be loaded into the Spring IoC container for our testing. It's obvious that we can use as many bean fakes as needed, but you don't want to have many of them. This approach should be used rarely, and if you observe yourself using such mocking often, you probably have a serious problem with tight coupling in your application or within your development team in general. The TDD methodology should help you target this problem. Bear in mind: "Less mocking is always better!". So consider production design changes that allow for lower usage of mocks. This applies to unit testing also.

Within the integration test we can autowire this spy bean and use it for various verifications.
In this case we verified that the tested method userService.getUserDetails called the method addressService.getAddressForUser with the parameter "john".

I have one more example. In this case we won't spy on the production bean; we will mock it:

@Configuration
@BeanMock
public class AddressDaoMock {
    @Bean
    @Primary
    public AddressDao registerAddressDaoMock() {
        return mock(AddressDao.class);
    }
}

Again we override the production bean, but this time we replace it with Mockito's mock. We can then record behavior for the mock in our integration test:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = { Application.class, AddressDaoMock.class })
public class AddressServiceITest {
    @Autowired
    private AddressService addressService;

    @Autowired
    private AddressDao addressDao;

    @Test
    public void testGetAddressForUser() {
        // GIVEN
        when(addressDao.readAddress("john")).thenReturn("5 Bright Corner");

        // WHEN
        String actualAddress = addressService.getAddressForUser("john");

        // THEN
        Assert.assertEquals("5 Bright Corner", actualAddress);
    }

    @After
    public void resetMock() {
        reset(addressDao);
    }
}

We load the mocked bean via @SpringApplicationConfiguration's parameter. In the test method, we stub the addressDao.readAddress method to return the string "5 Bright Corner" when "john" is passed to it as a parameter. But bear in mind that recorded behavior can be carried over to a different integration test via the Spring context. We don't want tests affecting each other, so you can avoid future problems in your test suite by resetting mocks after each test. This is done in the method resetMock.

Source code is on GitHub.

Reference: How to mock Spring bean without Springockito from our JCG partner Lubos Krnac at the Lubos Krnac Java blog blog....
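A small detail worth making explicit (my addition; the original relies on Java's defaults): giving the marker annotation an explicit retention and target documents how it is meant to be used. Spring's component-scan filters read class metadata with ASM, so the default CLASS retention already works for the exclude filter, but RUNTIME additionally keeps the marker visible to reflection-based tooling:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marker for fake bean configurations that must never be picked up
// by the production component scan.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface BeanMock {
}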

Book Review: Mastering Lambdas: Java Programming in a Multicore World

λ-programming (lambda programming) has finally been introduced to the Java world as of version 8. It is the feature that will most change the way Java developers program, and a new 'weapon' against boilerplate code. Java 8 has mostly applied functional programming to the Collections API by introducing the new Stream API. Additionally, this new feature promises to offer us a painless step into the multicore world without the need to bother with threads, fork-joins and the like. The integration of lambda features into a traditionally object-oriented programming language has been a challenge, but Oracle seems to have done a very good job providing a semi-functional-object-oriented language, to exaggerate a bit.

A number of books have already been published on Java 8, making the life of Java developers easier. However, only a few books tackle λ-programming in Java 8 exclusively. Maurice's book is one of them. Maurice Naftalin is the author of another famous book, "Java Generics and Collections", and maintains the lambda FAQ, from which he has gained a lot of experience with the new λ-API (JSR-335). The result of this long experience is this new book on Java 8 lambdas. While other books on the topic describe the new API with simple examples, Maurice's book follows a more pragmatic way, describing best practices for using lambda expressions and streams. The book tackles difficult topics and doesn't provide simple examples that just demonstrate API usage like other books do. The author tries to introduce the reader to a new, functional way of thinking by using his experience in solving complex problems. In more detail:

In Chapter 1 Maurice lays the pillars for the reasoning about these new Java 8 features that will be analysed in the rest of his book:

from external to internal iteration
from collections to streams
from sequential to parallel

He goes into the details of explaining the reasoning behind the design decisions and the selected syntax of the above topics, convincing the reader of how naturally these changes were introduced into the language.

Chapter 2 is devoted to lambda expressions. It compares lambdas to anonymous inner classes, talks about variable capture (a.k.a. closures), and moves on to functional interfaces. The sub-chapters are as concise as they should be. He explains the difference between static, bound and unbound instance method references (do you know the difference?) and ends with (function) type checking and the rules of overload resolution (both with lambda expressions and method references). So why is e.g. String::concat unbound while str::replace is bound? According to Maurice, bound method references are so called because the method reference comes with the receiver already fixed. So str::replace is equivalent to (x,y) -> str.replace(x,y). You don't get any choice about what the receiver is; it is bound to its receiver (str). Unbound method references are undecided about the receiver until they are called. So String::concat is equivalent to (receiver,str) -> receiver.concat(str). It expects a receiver to be supplied as its first parameter.

Chapter 3 provides an introduction to streams by comparing them to pipelines. He describes how to start a stream (pipeline), how to transform it (e.g. filtering, mapping, sorting, truncating etc.) and how to end it (e.g. reduce, collect, search etc.). He touches on parallelism and debugging, and provides useful and pragmatic examples.

Chapter 4 talks about how to end streams, i.e. about reductions and collections. He also goes to the pain of explaining how to write your own collector. Everything is explained with diagrams of how things work, as well as with working examples.

Chapter 5 talks about how to create streams, i.e. sources and spliterators. Here he introduces the worked example of a recursive grep command and describes what errors one has to tackle to achieve it. Maurice, throughout his book, doesn't offer the solution on a plate. Often, in his reasoning, you will find statements like the following: "Stop reading for a moment and think about the design of this data structure" or "If you have not already worked it out, stop now to write or outline the code".

Chapter 6 tackles stream performance. He uses jmh to microbenchmark sequential and parallel streams, sorting, distinct, spliterators, collectors etc. Parallel streams aren't always faster than sequential streams; some conditions must be satisfied for them to perform better.

Chapter 7, finally, talks about static and default methods in interfaces and about how default methods allow an API to evolve while keeping backwards compatibility. He talks about where default methods are being used and compares interfaces with default methods to abstract classes. He covers inheritance, providing two easy-to-remember rules: a) "instance methods are chosen in preference to default methods" b) "if more than one competing default method is inherited by a class, the non-overridden default method is selected." This chapter contains the most complete coverage of the topic of default and static methods that I have seen.

Five reasons why you should buy this book:

It is small and concise; personally, I have more chance of finishing a book of 175 pages (and I did) than a book of 500 or 1000 pages.
The author tries to get you into thinking as a functional programmer; he doesn't offer you the solution on a plate.
It is well structured and easy to find what you want.
It is a book that you will return to again and again.
It tackles performance issues and provides useful advice on performance pitfalls and anti-patterns.

"Expensive perfumes come in small bottles", they say. Small and to the point, all in all a very useful book on the subject, one that you'll revisit again and again while programming with the new lambda and stream APIs in Java 8. Maurice's new book should be under the pillow of every Java developer who wants to learn about λ and streams in Java 8.

Book Link: Mastering Lambdas: Java Programming in a Multicore World ...
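To make the bound/unbound distinction concrete, here is a small sketch of my own (not an excerpt from the book):

import java.util.function.BiFunction;
import java.util.function.BinaryOperator;

public class MethodReferences {
    public static void main(String[] args) {
        String str = "hello";

        // Bound: the receiver (str) is fixed when the reference is created,
        // so the function only takes replace()'s two char arguments.
        BiFunction<Character, Character, String> bound = str::replace;
        System.out.println(bound.apply('l', 'L')); // heLLo

        // Unbound: the receiver is supplied as the first argument at call time.
        BinaryOperator<String> unbound = String::concat;
        System.out.println(unbound.apply("foo", "bar")); // foobar
    }
}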

Doing microservices with micro-infra-spring

Hi! We've been working at 4financeit for the last couple of months on some open source solutions for microservices. I will be publishing some articles related to microservices and our tools, and this is the first of (hopefully) many that I will write in the upcoming weeks (months?) on the Too much coding blog. This article will be an introduction to the micro-infra-spring library, showing how you can quickly set up a microservice using our tools.

Introduction

Before you start, it is crucial to remember that it's not enough to just use our tools to have a microservice. You can check out my slides about microservices and the issues that we have dealt with while adopting them at 4financeit: 4financeit microservices 12.2014 at Lodz JUG from Marcin Grzejszczak. Here you can find my video where I talk about microservices at 4finance (it's from 19.09.2014, so it's pretty outdated). Also it's worth checking out the articles of Martin Fowler about microservices, Todd Hoff's Microservices – not a free lunch! or The Strengths and Weaknesses of Microservices by Abel Avram.

Is monolith bad?

No it isn't! The most important thing to remember when starting with microservices is that they will complicate your life in terms of operations, metrics, deployment and testing. Of course they do bring plenty of benefits, but if you are unsure of what to pick – monolith or microservices – then my advice is to go the monolith way. All the benefits of microservices, like code autonomy, doing one thing well, and getting rid of package dependencies, can also be achieved in monolithic code, so try to write your applications with such approaches and your life will get simpler for sure. How to achieve that? That's complicated, but here are a couple of hints that I can give you:

try to do DDD. No, you don't have DDD just because your entities have methods. Try to use the concept of aggregate roots
try not to make dependencies between packages from different roots. If you have two different bounded contexts like com.blogspot.toomuchcoding.client and com.blogspot.toomuchcoding.loan – go via tight cohesion and low coupling – emit events, call REST endpoints, send JMS messages or talk via a strictly defined API. Do not reuse the internals of those packages – take a look at the next point that deals with encapsulation
take your high school notes and read about encapsulation again. Most of us make the mistake of thinking that if we make a field private and add an accessor to it then we have encapsulation. That's not true! I really like the example of Slawek Sobotka (article in Polish) who shows a common approach to encapsulation: human.getStomach().getBowels().getContent().add(new Sausage()) instead of human.eat(new Sausage()) (sketched in code below)
add to your IDE's class generation template that you want your new classes to be package scoped by default – what should be publicly available are interfaces and a really limited number of classes
start doing what's crucial in terms of tracking microservice requests and measuring business and technical data in your own application! Gather metrics, set up correlation ids for your messages, add service discovery if you have multiple monoliths.

I'm a hipster – I want microservices!

Let's assume that you know what you are doing, you have evaluated all the pros and cons and you want to go down the microservice way. You have a devops culture in your company and people are eager to start working on multiple codebases. How to start? Pick our tools and you won't regret it!
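Returning to the encapsulation hint above, here is a minimal sketch of the difference (the class names follow Slawek Sobotka's example; this code is illustrative and not part of micro-infra-spring):

public class Human {
    private final Stomach stomach = new Stomach();

    // Expose behaviour, not internals: callers cannot navigate
    // human.getStomach().getBowels().getContent() because there
    // are no accessors to navigate.
    public void eat(Sausage sausage) {
        stomach.digest(sausage);
    }
}

class Stomach {
    void digest(Sausage sausage) {
        // domain logic stays hidden behind Human's public behaviour
    }
}

class Sausage {
}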
Clone a repo and get to work

We have set up a working template on Github, with UI – boot-microservice-gui – and without it – boot-microservice. If you clone our repo and start working with it, you get a service that:

uses the micro-infra-spring library
is written in Groovy
uses Spring Boot
is built with Gradle (set up for 4finance – but that's really easy to change)
is JDK8 compliant
contains an example of a business scenario

What you have to do is:

check out the slides above to see our approach to microservices
remove the packages com/ofg/twitter from src/main and src/test
alter microservice.json to support your requirements
write your code!

Why should you use our repo?

you don't have to set up anything – we've already done it for you
the time required to start developing a feature is close to zero

Aren't we duplicating Spring Cloud?

In fact we're not. We're using it in our libraries ourselves (right now for property storage in a Git repository). We have some different approaches to service discovery, for instance, but in general we are extending Spring Cloud's features by:

correlation id setting (we support different libraries), which is crucial for log aggregation approaches
metrics path prepending (according to a defined pattern)
giving you Swagger API and Swagger UI
allowing you to pick properties from a repository on your drive
soon we will be able to sketch graphs of dependencies between services
we give you a Consumer Driven Contract approach to testing
our solutions are tested in production – as a company we're giving short term loans, so this is our system where we test our open source ideas

Conclusions

If you want to go down the microservice way, you have to be well aware of the issues related to that approach. If you know what you're doing, you can use our libraries and our microservice templates to have a fast start into feature development.

What's next

I'll write about different features of the micro-infra-spring library with more emphasis on configuration of specific features that are not that well known but equally cool as the rest! Also I'll write some articles on how we approached splitting the monolith, but you'll have to wait for that some time!

Reference: Doing microservices with micro-infra-spring from our JCG partner Marcin Grzejszczak at the Blog for coding addicts blog....

Gradle Goodness: Rename Ant Task Names When Importing Ant Build File

Migrating from Ant to Gradle is very easy with the importBuild method from AntBuilder. We only have to add this single line and reference our existing Ant build XML file, and all Ant tasks can then be executed as Gradle tasks. We can automatically rename the Ant tasks if we want to avoid task name collisions with Gradle task names. We use a closure argument with the importBuild method and return the new task names. The existing Ant task name is the first argument of the closure.

Let's first create a simple Ant build.xml file:

<project>
    <target name="showMessage" description="Show simple message">
        <echo message="Running Ant task 'showMessage'"/>
    </target>
    <target name="showAnotherMessage" depends="showMessage" description="Show another simple message">
        <echo message="Running Ant task 'showAnotherMessage'"/>
    </target>
</project>

The build file contains two targets, showMessage and showAnotherMessage, with a task dependency. We have the next example Gradle build file to use these Ant tasks and prefix the original Ant task names with ant-:

// Import Ant build and
// prefix all task names with
// 'ant-'.
ant.importBuild('build.xml') { antTaskName ->
    "ant-${antTaskName}".toString()
}

// Set group property for all
// Ant tasks.
tasks.matching { task ->
    task.name.startsWith('ant-')
}*.group = 'Ant'

We can run the tasks task to see if the Ant tasks are imported and renamed:

$ gradle tasks --all
...
Ant tasks
---------
ant-showAnotherMessage - Show another simple message [ant-showMessage]
ant-showMessage - Show simple message
...
$

We can execute the ant-showAnotherMessage task and we get the following output:

$ gradle ant-showAnotherMessage
:ant-showMessage
[ant:echo] Running Ant task 'showMessage'
:ant-showAnotherMessage
[ant:echo] Running Ant task 'showAnotherMessage'

BUILD SUCCESSFUL

Total time: 3.953 secs
$

Written with Gradle 2.2.1.

Reference: Gradle Goodness: Rename Ant Task Names When Importing Ant Build File from our JCG partner Hubert A. Klein Ikkink at the JDriven blog....

Grails Tutorial for Beginners – Grails Service Layer

This tutorial will discuss the importance of the service layer in Grails and how to work with it. It also explains transaction management and how to utilize it.

Introduction

Separation Of Concerns

Consider this analogy: imagine a company where employees are assigned tasks of a very different nature. For example, say there is an employee named John Doe with the following responsibilities:

Handle accounting and releasing of check payments
Take calls from customers for product support
Handle administrative tasks such as booking flights plus accommodation of executives
Manage the schedule of the CEO

As you can see, John's work is too complicated because he needs to multi-task on very different types of tasks. He needs to change his frame of mind when switching from one task to another. He is more likely to be stressed out and commit many mistakes. His errors could cost a lot of money in the long run. Another problem is that it's not easy to replace John later on, as he is too involved in a complicated setup.

Likewise in software engineering, it is not a good idea to write a class that has responsibilities of a different nature. The general consensus of experts is that a single class or source file should only be involved in one kind of task. This is called the "separation of concerns". If there is a lot going on, this will only introduce bugs and problems later, as the application will be very complicated to maintain. Although this concept is simple to state, the effect on a project is enormous!

Consider a controller in Grails. Inside a controller, we can do the following:

Handle routing logic
Invoke GORM operations to manipulate data in the database
Render text and show it to the user

However, it is not advisable to do all those things inside a controller. Grails allows a developer to do all these things together for flexibility, but it should be avoided. The real purpose of a controller is to deal with routing logic, which means:

Receive requests from users
Invoke the most appropriate business logic
Invoke the view to display the result

View logic should be taken care of inside Groovy Server Pages (GSP) files. Read this previous tutorial about GSPs if you are unfamiliar with them. Business logic should be implemented inside the service layer. Grails has default support and handling for the service layer.

Don't Repeat Yourself (DRY) Principle

Another benefit of using a service layer is that you can reuse business logic in multiple places without code duplication. Having a single copy of a particular piece of business logic will make a project shorter (in terms of lines of code) and easier to maintain. Changing the business logic will require a change in only one place. Not having to duplicate code is part of another best practice called the Don't Repeat Yourself (DRY) principle.

Create a Service

To create a service class, invoke the create-service command from the root folder of your project. For example, use the following command inside a command line or terminal:

grails create-service asia.grails.sample.Test

You can also create a service class inside the GGTS IDE. Just right click the project, select New and then Service. Provide the name and click Finish.

Below is the resulting class created. A default method is provided as an example. This is where we will write business logic.

package asia.grails.sample

class TestService {
    def serviceMethod() {
    }
}

Just add as many functions as needed that pertain to business logic and GORM operations.
Here is an example:

package asia.grails.sample

class StudentService {
    Student createStudent(String lastName, String firstName) {
        Student student = new Student()
        student.lastName = lastName
        student.firstName = firstName
        student.referenceNumber = generateReferenceNumber()
        student.save()
        return student
    }

    private String generateReferenceNumber() {
        // some specific logic for generating reference number
        return referenceNumber
    }
}

Injecting a Service

A service is automatically injected inside a controller by just defining a variable with the proper name (use a studentService variable to inject a StudentService instance). Example:

package asia.grails.sample

class MyController {
    def studentService

    def displayForm() {
    }

    def handleFormSubmit() {
        def student = studentService.createStudent(params.lastName, params.firstName)
        [student:student]
    }
}

If you are not familiar with Spring and the concept of injection, what this means is that you don't need to do anything special. Just declare the variable studentService and the Grails framework will automatically assign an instance to it. Just declare and use right away.

You can also inject a service into another service. For example:

package asia.grails.sample

class PersonService {
    Person createPerson(String lastName, String firstName) {
        Person p = new Person()
        p.lastName = lastName
        p.firstName = firstName
        p.save()
        return p
    }
}

package asia.grails.sample

class EmployeeService {
    def personService

    Employee createEmployee(String lastName, String firstName) {
        Person person = personService.createPerson(lastName, firstName)
        Employee emp = new Employee()
        emp.person = person
        emp.employmentDate = new Date()
        emp.save()
        return emp
    }
}

You can also inject a service inside BootStrap.groovy. For example:

class BootStrap {
    def studentService

    def init = { servletContext ->
        if ( Student.count() == 0 ) {
            // if no students in the database, create some test data
            studentService.createStudent("Doe", "John")
            studentService.createStudent("Smith", "Jame")
        }
    }
}

You can also inject a service inside a tag library. For example:

package asia.grails.sample

class StudentService {
    def listStudents() {
        return Student.list()
    }
}

class MyTagLib {
    StudentService studentService

    static namespace = "my"

    def renderStudents = {
        def students = studentService.listStudents()
        students.each { student ->
            out << "<div>Hi ${student.firstName} ${student.lastName}, welcome!</div>"
        }
    }
}

Transaction Management

If you are new to working with databases, transactions are a very important concept. Usually, we want a certain sequence of database changes to either all succeed or, if that is not possible, for no operation to happen at all. For example, consider this code to transfer funds between two bank accounts:

class AccountService {
    def transferFunds(long accountFromID, long accountToID, long amount) {
        Account accountFrom = Account.get(accountFromID)
        Account accountTo = Account.get(accountToID)
        accountFrom.balance = accountFrom.balance - amount
        accountTo.balance = accountTo.balance + amount
    }
}

This code deducts money from one account (accountFrom.balance = accountFrom.balance - amount), and adds money to another account (accountTo.balance = accountTo.balance + amount). Imagine if something happened (an Exception) after deducting from the source account and the destination account was not updated. Money would be lost and not accounted for. For this type of code, we want "all or nothing" behavior. This concept is also called atomicity.
For the scenario given above, transaction management is required to achieve the desired behavior. The program will start a conversation with the database where any update operations are just written to a temporary space (like scratch paper). The program later needs to tell the database whether it wants to make the changes final, or to scratch out everything it did earlier. Since Grails supports transactions, it automatically does these things for us when we declare a service to be transactional:

If all db operations are successful, reflect the changes to the database (this is also called commit)
If one db operation results in an exception, return to the original state and forget/undo all the previous operations (this is also called rollback)

Transaction Declaration

Grails supports transaction management inside services. By default, all services are transactional, so these three declarations have the same effect:

class CountryService {
}

class CountryService {
    static transactional = true
}

@Transactional
class CountryService {
}

For readability, I suggest declaring static transactional at the top of each service. Note that not all applications need transactions. Non-sensitive programs like blog software can survive without them. Transactions are usually required when dealing with sensitive data such as financial information. Using transactions introduces overhead. Here is an example of how to disable them in a service:

class CountryService {
    static transactional = false
}

How To Force A Rollback

One of the most important things to remember is what code to write to force Grails to roll back a current succession of operations. To do that, just raise a RuntimeException or a descendant of it. For example, this will roll back the operation accountFrom.balance = accountFrom.balance - amount:

class AccountService {
    def transferFunds(long accountFromID, long accountToID, long amount) {
        Account accountFrom = Account.get(accountFromID)
        Account accountTo = Account.get(accountToID)
        accountFrom.balance = accountFrom.balance - amount
        throw new RuntimeException("testing only")
        accountTo.balance = accountTo.balance + amount
    }
}

But this code will not:

class AccountService {
    def transferFunds(long accountFromID, long accountToID, long amount) {
        Account accountFrom = Account.get(accountFromID)
        Account accountTo = Account.get(accountToID)
        accountFrom.balance = accountFrom.balance - amount
        throw new Exception("testing only")
        accountTo.balance = accountTo.balance + amount
    }
}

This means the line with the subtraction will be committed, even though the line with the addition is never reached, because a plain Exception (unlike a RuntimeException) does not trigger a rollback.

Remarks

This is not yet a complete discussion of Grails' service layer. Refer to the official documentation at http://grails.org/doc/latest/guide/services.html, or read future posts on this blog.

Reference: Grails Tutorial for Beginners – Grails Service Layer from our JCG partner Jonathan Tan at the Grails cookbook blog....
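For readers new to the underlying mechanics, here is a plain-JDBC sketch (Java, not Grails; the connection URL and account table are hypothetical) of the commit/rollback cycle that Grails automates for transactional services:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class TransferFunds {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:test")) {
            con.setAutoCommit(false); // begin the transaction
            try (PreparedStatement debit = con.prepareStatement(
                     "UPDATE account SET balance = balance - ? WHERE id = ?");
                 PreparedStatement credit = con.prepareStatement(
                     "UPDATE account SET balance = balance + ? WHERE id = ?")) {
                debit.setLong(1, 100);
                debit.setLong(2, 1);
                debit.executeUpdate();
                credit.setLong(1, 100);
                credit.setLong(2, 2);
                credit.executeUpdate();
                con.commit();       // all or nothing: both updates become final
            } catch (Exception e) {
                con.rollback();     // undo the debit if the credit failed
                throw e;
            }
        }
    }
}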

How to encapsulate Spring bean

As far as I know, the Spring Framework doesn't provide any mechanism to encapsulate Spring beans other than having separate contexts. So when you have a public class registered in Spring's Inversion of Control container, it can be autowired into any Spring bean from the same context configuration. This is very powerful, but it is also very dangerous. Developers can easily couple beans together. With a lack of discipline, a team can easily shoot themselves in the foot. Unfortunately I was working on one monolithic project where the team was shooting themselves in the foot with a submachine gun. Wiring often broke layering rules. Nobody could easily follow what was dependent on what. The bean dependency graph was just crazy. This is a serious concern in bigger applications.

Luckily there is one simple way to encapsulate a Spring bean. Spring works nicely with the default access modifier at class level. So you can create a package-private bean, which can be used only within the current package. Simple and powerful. Let's take a look at an example:

package net.lkrnac.blog.spring.encapsulatebean.service;

import org.springframework.stereotype.Service;

@Service
class AddressService {
    public String getAddress(String userName){
        return "3 Dark Corner";
    }
}

This simple bean is wired into another one within the same package:

package net.lkrnac.blog.spring.encapsulatebean.service;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class UserService {
    private AddressService addressService;

    @Autowired
    public UserService(AddressService addressService) {
        this.addressService = addressService;
    }

    public String getUserDetails(String userName){
        String address = addressService.getAddress(userName);
        return String.format("User: %s, %s", userName, address);
    }
}

The main context just scans both beans:

package net.lkrnac.blog.spring.encapsulatebean;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

@Configuration
@ComponentScan
@EnableAutoConfiguration
public class Application {
}

Here is a test to prove it works fine:

package net.lkrnac.blog.spring.encapsulatebean;

import net.lkrnac.blog.spring.encapsulatebean.service.UserService;

import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.SpringApplicationConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
public class ApplicationTests {
    @Autowired
    private UserService userService;

    @Test
    public void isPackagePrivateBeanCalled(){
        //GIVEN - spring context defined by Application class

        //WHEN
        String actualUserDetails = userService.getUserDetails("john");

        //THEN
        Assert.assertEquals("User: john, 3 Dark Corner", actualUserDetails);
    }
}

I believe everybody should consider using the default access modifier for every new bean. Obviously there will need to be some public beans within each package. But not every bean.

Source code is on GitHub.

Reference: How to encapsulate Spring bean from our JCG partner Lubos Krnac at the Lubos Krnac Java blog blog....
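One natural extension of this idea (my own suggestion, not part of the original post): if another package does need to call the address logic, expose a small public interface and keep the implementation package-private, so outside code depends on the contract only.

// Addresses.java - the public contract:
package net.lkrnac.blog.spring.encapsulatebean.service;

public interface Addresses {
    String getAddress(String userName);
}

// AddressService.java - same package; the class stays package-private:
package net.lkrnac.blog.spring.encapsulatebean.service;

import org.springframework.stereotype.Service;

@Service
class AddressService implements Addresses {
    public String getAddress(String userName) {
        return "3 Dark Corner";
    }
}

Beans in other packages can then autowire the public Addresses interface while the implementing class remains hidden.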

Database Threat Models

I finally have a breather and can start working through my backlog of ideas. I start with some background that will make the motivation for subsequent posts clearer. What are the threat models for the persistence layer of an application, specifically the threats against the database itself? Remember that a 'threat' is an adverse act, whether intentional (by an attacker) or incidental (e.g., by a distracted employee). I refer to an 'attacker' below but that's for convenience – in many cases it's more likely to be a non-malicious employee making a mistake.

Attacker with access to the application

Scenario: A casual attacker with access to the web application as a visitor or user.

Vulnerability: SQL injection.
Countermeasure: use positional parameters instead of explicitly constructing SQL statements directly. This is true even if you use sanitization, either ad hoc or in a library provided by the database vendor. Parameterization isn't always possible but the exceptions should be carefully vetted. This countermeasure should always be used.

Attacker with access to the database as the application database user

Scenario: the attacker has full access to the database as the application database user, either via SQL injection or another mechanism.

Vulnerability: DDL commands – the attacker can create, alter or drop tables.
Countermeasure: the application's database user should own the data, but a separate database user should own the schema. This prevents the compromised account from modifying the tables. This countermeasure should always be used.

Vulnerability: Data corruption (prevention).
Countermeasure: this is a little-used approach that is sometimes applied to the most sensitive information. The application user has SELECT permission on the table but does not have INSERT, UPDATE or DELETE permission. A second database account with INSERT, UPDATE and DELETE permission on the table creates stored procedures to update the data. These stored procedures are configured to run with the permissions of the owner, not the caller. The application can read the data as usual but must use the stored procedures to update it. This is not as onerous as it sounds, since data is read far more often than it is modified and reads are not affected. Data insertion and modification is more complex, but in a well-written application it will be fully encapsulated by the persistence layer. A variant is to use one database user to read the data (with only SELECT permission) and a second database user to modify it (with INSERT, UPDATE and DELETE permission). The application uses the first account freely but only uses carefully vetted code with the second account. The main drawback is that modern persistence frameworks make heavy use of caching, and care needs to be taken to ensure that the caches are consistent. This countermeasure is typically only used with high-value content such as user authentication information.

Vulnerability: Data corruption (detection).
Countermeasure: the database contains shadow columns that are updated on writes and verified on reads. The updates should be performed programmatically – if this is done via triggers, all updates will appear to be valid. There are several variants. The shadow column can be:

· a simple copy – easy, but an attacker can easily spoof it.
· a cryptographic hash – slightly harder to spoof unless salted, but the hash can cover multiple columns. Vulnerable to an attacker copying values from an existing row unless the hash incorporates the row's primary key.
This should be considered the minimum acceptable approach.

· a cryptographic message digest (HMAC) – impossible to spoof since it requires a private key, but as before it should incorporate the row's primary key to prevent an attacker from copying values from an existing row. This approach is the most computationally expensive and requires care with key management, but it also provides the best assurance that the contents have not been modified.

There are also three separate locations for the shadow column:

· in the table itself – easiest, but also obvious to any attacker.
· in a separate table – much less obvious to an attacker but harder to implement.
· in a separate schema – least obvious but requires the most effort.

Vulnerability: Data corruption (detection).
Countermeasure: the database table has INSERT, UPDATE and DELETE triggers added. The functions called capture the existing and modified data (if any), timestamp, primary key, database user, and anything else of potential interest. (E.g., some databases may provide the remote IP address.) This information is written to a separate log table. This allows the organization to reconstruct a timeline of when values were modified. This could be particularly important if the attacker changed a value temporarily, performed an unauthorized action, and then restored the original value in an attempt to cover his tracks.

Vulnerability: Unauthorized disclosure (detection).
Countermeasure: this is another little-used approach that is sometimes applied to the most sensitive information. The application user does not have any permissions on the table. A second database account with SELECT permission on the table creates stored procedures to access the data. These stored procedures are configured to run with the permissions of the owner, not the caller. The application must use the stored procedures to access the data. The stored procedure records some or all access in a log table. This approach can be used to flag access to high-value data. This could be any classified data, criminal or financial records for public figures such as politicians or celebrities, etc. The application can and should also flag this access, but the application alone cannot detect direct database access.

Vulnerability: Unauthorized disclosure (detection).
Countermeasure: this is an idea that has been floating around for a long time but has gotten little attention until recently. The database is seeded with sentinel values in columns with value to the underground – information useful in identity theft, credit card fraud, etc. These values will never be accessed via the application, since they're linked to nonexistent email addresses, but an attacker has no way to determine this. (Few attackers will even know this is a possibility.) Underground forums can then be monitored for the appearance of these values. If a sentinel value appears, the organization will know that the database has been compromised. A variant is to provide the sentinels with limited value, e.g., a "stolen credit card" may have a credit limit of $100, so the details of any use will be known and criminal authorities will be able to demonstrate actual use and damages. In most cases the organization will know nothing more than the fact that a compromise occurred, but that alone should be enough to trigger action.

Attacker with full access to database backups

Scenario: the attacker has full access to database backups.

Vulnerability: Unauthorized disclosure.
Countermeasures: Backups should be encrypted as a whole (e.g., with PGP/GPG), or columns could be encrypted within the database. Both approaches introduce key management issues; e.g., you may wish to rotate PGP/GPG keys on a monthly basis to limit exposure if a key is compromised.

Vulnerability: Data corruption, if the attacker is able to modify the backup and force the database to be restored from that modified backup.
Countermeasures: Backups should be encrypted as a whole (e.g., with PGP/GPG), or columns could be encrypted within the database. In addition we can add HMAC checks on sensitive rows. As before, all approaches introduce key management issues.

Attacker with full access to the database

Scenario: An attacker with full access to the database as the database user of his choice.

Vulnerability: Everything.
Countermeasures: Limited to non-existent, depending upon the sophistication of the attacker. One noteworthy exception is database-specific extensions using Unix/Linux shared libraries or Windows DLLs. In these cases user-defined functions can access data outside of the scope of the database, e.g., cryptographic keys stored in the filesystem or (better) retrieved from a web service during initialization, but no database user can access that information unless a mechanism is explicitly provided. An attacker can replace the user-defined function, but the replacement would lose the functionality.

Reference: Database Threat Models from our JCG partner Bear Giles at the Invariant Properties blog....
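To make the HMAC shadow-column idea concrete, here is a sketch in Java (my own illustration; the column layout and key handling are assumptions – in practice the key would come from a key management service, not be hard-coded):

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class RowHmac {
    private final SecretKeySpec key;

    public RowHmac(byte[] keyBytes) {
        this.key = new SecretKeySpec(keyBytes, "HmacSHA256");
    }

    // Incorporate the primary key so a value copied from another row
    // produces a different digest and is detected on read.
    public String shadowColumn(long primaryKey, String... columns) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        mac.update(Long.toString(primaryKey).getBytes(StandardCharsets.UTF_8));
        for (String column : columns) {
            mac.update((byte) 0); // separator so ("ab","c") != ("a","bc")
            mac.update(column.getBytes(StandardCharsets.UTF_8));
        }
        return Base64.getEncoder().encodeToString(mac.doFinal());
    }
}

On every write the application stores shadowColumn(id, sensitiveValues...) next to the row; on every read it recomputes the digest and compares.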

Are You Binding Your Oracle DATEs Correctly? I Bet You Aren’t

Oracle database has its ways. In my SQL talks at conferences, I love to confuse people with a few Oracle facts about VARCHAR2 and NULL, and the answer to the quiz is, of course: isn't it horrible to make the empty string the same thing as NULL? Please, Oracle… (The quiz slides, and the one actually reasonable slide that follows them, are omitted from this excerpt.)

But the DATE type is much more subtle

So you think VARCHAR2 is weird? Well, we all know that Oracle's DATE is not really a date as in the SQL standard, or as in all the other databases, or as in java.sql.Date. Oracle's DATE type is really a TIMESTAMP(0), i.e. a timestamp with a fractional second precision of zero. Most legacy databases actually use DATE precisely for that: to store timestamps with no fractional seconds, such as:

1970-01-01 00:00:00
2000-02-20 20:00:20
1337-01-01 13:37:00

So it's always a safe bet to use java.sql.Timestamp types in Java when you're operating with Oracle DATE. But things can go very wrong when you bind such variables via JDBC, as can be seen in this Stack Overflow question. Let's assume you have a range predicate like so:

// execute_at is of type DATE and there's an index
PreparedStatement stmt = connection.prepareStatement(
    "SELECT * " +
    "FROM my_table " +
    "WHERE execute_at > ? AND execute_at < ?");

Now, naturally, we'd expect any index on execute_at to be a sensible choice to use for filtering out records from my_table, and that's also what happens when we bind java.sql.Date:

stmt.setDate(1, start);
stmt.setDate(2, end);

The execution plan is optimal:

-----------------------------------------------------
| Id  | Operation                    | Name         |
-----------------------------------------------------
|   0 | SELECT STATEMENT             |              |
|*  1 |  FILTER                      |              |
|   2 |   TABLE ACCESS BY INDEX ROWID| my_table     |
|*  3 |    INDEX RANGE SCAN          | my_index     |
-----------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:1<:2)
   3 - access("EXECUTE_AT">:1 AND "EXECUTE_AT"<:2)

But let's check out what happens if we assume execute_at to be a date with hours/minutes/seconds, i.e. an Oracle DATE. We'll be binding java.sql.Timestamp:

stmt.setTimestamp(1, start);
stmt.setTimestamp(2, end);

and the execution plan suddenly becomes very bad:

-----------------------------------------------------
| Id  | Operation                    | Name         |
-----------------------------------------------------
|   0 | SELECT STATEMENT             |              |
|*  1 |  FILTER                      |              |
|   2 |   TABLE ACCESS BY INDEX ROWID| my_table     |
|*  3 |    INDEX FULL SCAN           | my_index     |
-----------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:1<:2)
   3 - filter(INTERNAL_FUNCTION("EXECUTE_AT")>:1 AND
              INTERNAL_FUNCTION("EXECUTE_AT")<:2)

What's this INTERNAL_FUNCTION()?

INTERNAL_FUNCTION() is Oracle's way of silently converting values into other values in ways that are completely opaque. In fact, you cannot even place a function-based index on this pseudo-function to help the database choose a RANGE SCAN on it again. The following is not possible:

CREATE INDEX oracle_why_oh_why
  ON my_table(INTERNAL_FUNCTION(execute_at));

Nope. What the function really does is widen the less precise DATE type of the execute_at column to the more precise TIMESTAMP type of the bind variable. Just in case. Why?
Because with exclusive range boundaries (> and <), chances are that the fractional seconds in your Timestamp may lead to the timestamp being strictly greater than the lower bound of the range, which would include it, when the same Timestamp with no fractional seconds (i.e. an Oracle DATE) would have been excluded. Duh. But we don't care; we're only using Timestamp as a silly workaround in the first place! Now that we know this, you might think that adding a function-based index on an explicit conversion would work, but that's not the case either:

CREATE INDEX nope
  ON my_table(CAST(execute_at AS TIMESTAMP));

Perhaps, if you magically found the exact right precision of the implicitly used TIMESTAMP(n) type, it could work, but that all feels shaky, and besides, I don't want a second index on that same column!

The solution

The solution given by user APC is actually very simple (and it sucks). Again, you could bind a java.sql.Date, but that would make you lose all hour/minute/second information. No, you have to explicitly cast the bind variable to DATE in the database. Exactly!

PreparedStatement stmt = connection.prepareStatement(
    "SELECT * " +
    "FROM my_table " +
    "WHERE execute_at > CAST(? AS DATE) " +
    "AND execute_at < CAST(? AS DATE)");

You have to do that every time you bind a java.sql.Timestamp variable to an Oracle DATE value, at least when it is used in predicates.

How to implement that?

If you're using JDBC directly, you're pretty much doomed. Of course, you could run AWR reports to find the worst statements in production and fix only those, but chances are that you won't be able to fix your statements so easily and deploy them so quickly, so you might want to get it right in advance. And of course, this is production; tomorrow, another statement would suddenly pop up in your DBA's reports. If you're using JPA / Hibernate, you can only hope that they got it right, because you probably won't be able to fix those queries otherwise. If you're using jOOQ 3.5 or later, you can take advantage of jOOQ's new custom type binding feature, which works out of the box with Oracle and transparently renders that CAST(? AS DATE) for you, only on those columns that are really relevant.

Other databases

If you think that this is an Oracle issue, think again. Oracle is actually very lenient and nice to use when it comes to bind variables. Oracle can infer a lot of types for your bind variables, such that casting is almost never necessary. With other databases, that's a different story. Read our article about RDBMS bind variable casting madness for more information.

Reference: Are You Binding Your Oracle DATEs Correctly? I Bet You Aren't from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....
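To see concretely why falling back to java.sql.Date is not an acceptable workaround, here is a tiny sketch (my own, not from the article) of the information it drops:

import java.sql.Date;
import java.sql.Timestamp;

public class DateVsTimestamp {
    public static void main(String[] args) {
        Timestamp ts = Timestamp.valueOf("2014-12-23 01:29:20");

        // java.sql.Date models a date without time of day; binding it
        // via setDate() sends only the date part to the database.
        Date d = new Date(ts.getTime());
        System.out.println(ts); // 2014-12-23 01:29:20.0
        System.out.println(d);  // 2014-12-23
    }
}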
hazelcast-logo

The Beginner’s Guide to Hazelcast Part 5

This is a continuation of a series of posts I have written about Hazelcast. I highly suggest you read the other ones: Part 1, Part 2, Part 3 and Part 4.

Things That Make One Go "Huh?"

This post will have no Hazelcast-specific code in it. Let me repeat that. This post will have no Hazelcast-specific code in it. That is because the fine folks at Hazelcast produced a product that implements different standards, which allows for a choice of clients. One of those standards that Hazelcast implements is memcached.

What about JCache? JCache (JSR 107) is just for Java. Memcached protocol clients have been implemented in several languages, so one is not nailed down to one language. Implementing the memcached protocol was a smart move in my opinion because it makes Hazelcast more than a "Java thing."

Why Use Hazelcast?

Excellent question! If one can use any memcached server, why use Hazelcast? Well, to tell you the truth, unless one is sharing a database between several servers, one may not even need caching! If one does need a caching solution, here is why I would choose Hazelcast:

Automatic, real-time backups – I have not read of one Hazelcast datatype that is not backed up at least once. Just stand up two instances, one off machine from the other, to get the full benefit.

Security – If the servers that need to cache are across different networks, the firewall rules can be easier with Hazelcast. Let's say n servers need to cache data, with n/2 of them on the 192.168.1.x network and the other n/2 on the 10.10.1.x network. By setting one Hazelcast instance on either network, all n machines can share a cache: the 192.168.1.x machines talk to their Hazelcast node, the 10.10.1.x machines talk to theirs, and the two instances, configured to talk just to each other, do the rest of the work. That makes the firewall rule writer's job easier, because a rule only has to be made for two servers rather than n machines.

Example

I never like to show just a "ho hum" kind of example, so I am going to show how a Java client can share data with a Python client.

Setup

I am using Java 1.7 and Python 3.4. Unfortunately, neither language has memcached support out of the box, so I went looking for already written clients.

Java

I found Spymemcached for Java. I will be just skimming the surface of its abilities. It can be grabbed from Maven.
Here is the pom.xml file for the project:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.darylmathison</groupId>
    <artifactId>Memcache</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging>
    <properties>
        <maven.compiler.source>1.7</maven.compiler.source>
        <maven.compiler.target>1.7</maven.compiler.target>
    </properties>
    <build>
        <plugins>
            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.3.2</version>
                <configuration>
                    <showDeprecation>true</showDeprecation>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>exec-maven-plugin</artifactId>
                <version>1.3.2</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>java</goal>
                        </goals>
                    </execution>
                </executions>
                <configuration>
                    <mainClass>com.darylmathison.memcache.Main</mainClass>
                </configuration>
            </plugin>
        </plugins>
    </build>
    <dependencies>
        <dependency>
            <groupId>net.spy</groupId>
            <artifactId>spymemcached</artifactId>
            <version>2.11.5</version>
        </dependency>
    </dependencies>
</project>

Python

Next, I found python3-memcached for Python. It uses the classic setup.py procedure to install.

Server

Not much of a cache if the server is missing. One can download Hazelcast at hazelcast.org/download, extract the contents, cd into the bin directory and run the server.bat or server script according to one's OS. As setting up servers go, that is the easiest one I have ever done.

Situation

The "expensive" operation that I am trying to make cheaper is Fibonacci numbers. Because Python and Java can both understand Unicode, the values are stored as Unicode strings. The key is a Unicode string of the number of the sequence, or the number of rounds it takes to get there.

Code

Java

package com.darylmathison.memcache;

import java.io.IOException;
import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

/**
 * @author Daryl
 */
public class Main {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        try {
            // Hazelcast answers the memcached protocol on its member port
            MemcachedClient client = new MemcachedClient(
                    new InetSocketAddress("localhost", 5701));
            for (int i = 2; i < 20; i++) {
                System.out.println("value of round " + i + " is "
                        + fibonacci(i, client));
            }
            client.shutdown();
        } catch (IOException ioe) {
            ioe.printStackTrace();
        }
    }

    private static long fibonacci(int rounds, MemcachedClient client) {
        String cached = (String) client.get(String.valueOf(rounds));
        if (cached != null) {
            System.out.print("cached ");
            return Long.parseLong(cached);
        }
        long[] lastTwo = new long[] {1, 1};
        for (int i = 0; i < rounds; i++) {
            long last = lastTwo[1];
            lastTwo[1] = lastTwo[0] + lastTwo[1];
            lastTwo[0] = last;
        }
        // cache the computed value for 360 seconds
        client.set(String.valueOf(rounds), 360, String.valueOf(lastTwo[1]));
        return lastTwo[1];
    }
}

Python

Here is the Python client. As a pythonian, I tried to be as pythonic as possible.
import memcache

client = memcache.Client(['localhost:5701'])

def fibonacci(round):
    # rolling window: the last two values plus the current sum
    f = [1, 1, 1]
    for i in range(round):
        f[-1] = sum(f[:2])
        f[0], f[1] = f[1], f[2]
    return f[2]

def retrievefib(round):
    fib = client.get(str(round))
    if not fib:
        fib = fibonacci(round)
        client.set(str(round), str(fib))
    else:
        print("cached")
    return fib

def main():
    store = [x for x in range(20) if x % 2 == 0]
    for i in store:
        retrievefib(i)
    for i in range(20):
        print(retrievefib(i))

if __name__ == "__main__":
    main()

Conclusion

Well, here is an example of Hazelcast as the powerhouse behind the scenes. This is a place where I think it shines the most. One doesn't have to create whole new crafty, distributed applications to take advantage of Hazelcast. All one has to do is use known practices and let Hazelcast do the hard work. The source for this post can be found here for the Java code and here for the Python code.

References

http://en.wikipedia.org/wiki/Fibonacci_number
https://code.google.com/p/spymemcached/
https://pypi.python.org/pypi/python3-memcached/1.51

Reference: The Beginner's Guide to Hazelcast Part 5 from our JCG partner Daryl Mathison at the Daryl Mathison's Java Blog blog....
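Since the whole point is that any memcached protocol client will do, you can even poke the shared cache by hand. What follows is a hedged sketch of a telnet session, assuming the Hazelcast member is listening on localhost:5701 as in the code above; the set/get lines are standard memcached text protocol, and 144 is the value both clients compute for round 10, so the exact responses may differ on your setup:

$ telnet localhost 5701
set 10 0 360 3
144
STORED
get 10
VALUE 10 0 3
144
END
quit

Anything that speaks this little text protocol, whatever the language, sees the same entries the Java and Python clients wrote.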