
What's New Here?


Unit test life?

You cannot program without testing. You write unit tests first and then you write your code. (Well, I know you don't, but let's just focus on best practice.) When there is an error in the code, first you write a new unit test that demonstrates the bug and then you fix it. Once the unit test passes, the same bug should never ever happen again without being immediately signaled by the unit test. Later the integration and user tests come. They test the application, and in case there is some error the developer fixes the code. If possible and feasible we create new unit tests to cover the case that was not discovered earlier, so that the same bug will not slip through to the more expensive integration and user tests. In other cases unit testing is not possible, or would just be too cumbersome and not worth paying the cost, when the bug is strongly related to integration or user experience. This is a working practice that was developed for software creation during the last twenty or so years.

Real life is, however, not that simple. There may be no time to create the new unit tests after a bug was discovered during integration or user tests. You fix the bug, test the functionality of the application manually, and omit the new unit test (or non-unit but still automated test) because creating it is too expensive, would take too long, and the project constraints are tight. The bug you fix is a serious, high-level show-stopper, and these are already the last few days of the testing period. So you focus on doing the right thing: get the job done, fix the bug.

Have you started to fix the bug we reported yesterday?
No. We had no time to deal with it. We had to attend to higher level issues.
Higher level? What are you talking about? No fix can be more important than this!
Then why did you report it as a cosmetic in the first place?
Cosmetic????? COSMIC!!!!

Sometimes even drop-down lists do not prevent erroneous user input.

Later, over a calm weekend perhaps, you start to think about the case. How come a serious bug is only discovered at the end of the testing period? Isn't there a bug in the testing process? Unless the bug was introduced during the recent bugfixes, in which case there can also be some issue with how the developers fix bugs, there is a bug in the testing process. The tests cover all significant functionality of the application, and the test cases that assert the correct behavior are ordered by severity. The test cases that are supposed to discover severe issues should be executed sooner, and less important, cosmetic issues should be tested later. If this was not the case: it is a bug. How do you fix this bug? Move the test case that was executed late to its proper position. It will not fix the bug manifestation that has already happened, the very same way a bug fix does not remedy the money lost through program malfunction. The fix will just prevent causing more damage. During the next release period (well, yes, think about good old waterfall) the regression testing will discover the same bug sooner. But why don't we create a unit test for the testing process? Why don't we have unit tests for corporate processes? How could we generalize the idea and practice of unit testing to all aspects of life?

Reference: Unit test life? from our JCG partner Peter Verhas at the Java Deep blog....
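To keep the "write a test that demonstrates the bug, then fix it" idea concrete, here is a minimal JUnit 4 sketch; it is not from the article, and the date-formatting bug it pins down is an invented example:

import static org.junit.Assert.assertEquals;

import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.GregorianCalendar;

import org.junit.Test;

// Hypothetical regression test: written while the bug was still present it failed,
// and after the fix it keeps failing the build if the bug ever comes back.
public class DateFormattingRegressionTest {

    @Test
    public void yearEndDatesUseCalendarYearNotWeekYear() {
        // Invented bug report: dates around New Year were printed with the week-based
        // year pattern ("YYYY"), so 2014-12-29 could show up as 2015-12-29.
        // The fix switched the pattern to the calendar year ("yyyy").
        SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd");
        Calendar lastMondayOf2014 = new GregorianCalendar(2014, Calendar.DECEMBER, 29);

        assertEquals("2014-12-29", format.format(lastMondayOf2014.getTime()));
    }
}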

DevOps Is The New Agile

In The Structure of Scientific Revolutions, Thomas Kuhn argues that science is not a steady accumulation of facts and theories, but rather a sequence of stable periods, interrupted by revolutions. During such revolutions, the dominant paradigm breaks down under the accumulated weight of anomalies it can't explain, until a new paradigm emerges that can. We've seen similar paradigm shifts in the field of information technology. For hardware, we're now at the Third Platform. For software, we've had several generations of programming languages and we've seen different programming paradigms, with reactive programming gaining popularity lately.

The Rise of Agile

We've seen a revolution in software development methodology as well, where the old Waterfall paradigm was replaced by Agile. The anomalies in this case were summarized as the software crisis, as documented by the Chaos Report. The Agile Manifesto was clearly a revolutionary pamphlet:

We are uncovering better ways of developing software by doing it and helping others do it.

It was written in 2001 and originally signed by 17 people. It's interesting to see what software development methods they were involved with:

- Adaptive Software Development: Jim Highsmith
- Crystal: Alistair Cockburn
- Dynamic Systems Development Method: Arie van Bennekum
- eXtreme Programming: Kent Beck, Ward Cunningham, Martin Fowler, James Grenning, Ron Jeffries, Robert Martin
- Feature-Driven Development: Jon Kern
- Object-Oriented Analysis: Stephen Mellor
- Scrum: Mike Beedle, Ken Schwaber, Jeff Sutherland
- Andrew Hunt, Brian Marick, and Dave Thomas were not associated with a specific method

Only two of the seven methods were represented by more than one person: eXtreme Programming (XP) and Scrum. Coincidentally, these are the only ones we still hear about today.

Agile Becomes Synonymous with Scrum

Scrum is the clear winner in terms of market share, to the point where many people don't know the difference between Agile and Scrum. I think there are at least two reasons for that: naming and ease of adoption. Decision makers in environments where nobody ever gets fired for buying IBM are usually not looking for something that is "extreme". And "programming" is for, well, other people. On the other hand, Scrum is a term borrowed from sports, and we all know how executives love using sport metaphors. [BTW, the term "extreme" in XP comes from the idea of turning the dials of some useful practices up to 10. For example, if code reviews are good, let's do them all the time (pair programming). But Continuous Integration is not nearly as extreme as Continuous Delivery, and iterations (time-boxed pushes) are mild compared to pull systems like Kanban. XP isn't so extreme after all.] Scrum is easy to get started with: you can certifiably master it in two days. Part of this is that Scrum has fewer mandated practices than XP. That's also a danger: Scrum doesn't prescribe any technical practices, even though technical practices are important. The technical practices support the management practices and are the foundation for a culture of technical excellence. The software craftsmanship movement can be seen as a reaction to this lack of attention to the technical side. For me, paying attention to obviously important technical practices is simply being a good software professional.

The (Water)Fall Of Scrum

The jury is still out on whether management-only Scrum is going to win completely, or whether the software craftsmanship movement can bring technical excellence back into the picture. This may be more important than it seems at first. Since Scrum focuses only on management issues, developers may largely keep doing what they were doing in their Waterfall days. This ScrumFall seems to have become the norm in enterprises. No wonder that many Scrum projects don't produce the expected benefits. The late majority and laggards may take that opportunity to completely revert back to the old ways, and the Agile Revolution may fail. In fact, several people have already proclaimed that Agile is dead and are talking about a post-Agile world. Some think that software craftsmanship should be the new paradigm, but I'm not buying that. Software craftsmanship is all about the code, and too many people simply don't care enough about code. Beautiful code that never gets deployed, for example, is worthless.

Beyond Agile with DevOps

Speaking of deploying software, the DevOps movement may be a more likely candidate to take over the baton from Agile. It's based on Lean principles, just like Agile was. Actually, DevOps is a continuation of Agile outside the development team. I've even seen the term agile DevOps. So what makes me think DevOps won't share the same fate as Agile? First, DevOps looks at the whole software delivery value stream, whereas Agile confined itself to software development. This means DevOps can't remain in the developer's corner; for DevOps to work, it has to have support from way higher up the corporate food chain. And executive support is a prerequisite for real, lasting change. Second, the DevOps movement has from the beginning placed a great deal of emphasis on culture, which is where I think Agile failed most. We'll have to see whether DevOps can really do better, but at least the topic is on the agenda. Third, DevOps puts a lot of emphasis on metrics, which makes it easier to track its success and helps to break down silos. Fourth, the Third Platform virtually requires DevOps, because Systems of Engagement call for much more rapid software delivery than Systems of Record. Fifth, with the number of security breaches spiraling out of control, the ability to quickly deploy fixes becomes the number one security measure. The combination of DevOps and Security is referred to as Rugged DevOps or DevOpsSec. What do you think? Will DevOps succeed where Agile failed? Please leave a comment.

Reference: DevOps Is The New Agile from our JCG partner Remon Sinnema at the Secure Software Development blog....

Grails Spring Security Core Plugin – Registering Callback Closures

I was searching for a way to hook business logic after a successful user login while using the Spring Security Core plugin. The simplest way to do this is to register callback closures. It lets you hook your custom code inside Config.groovy after certain Spring Security events. If you are unfamiliar with this plugin, visit the previous tutorial on the Spring Security Core Plugin.

Registering Callback Closures

When you want to hook business logic on certain Spring Security Core events, you can edit grails-app/conf/Config.groovy and add something like the following:

grails.plugins.springsecurity.useSecurityEventListener = true

grails.plugins.springsecurity.onInteractiveAuthenticationSuccessEvent = { e, appCtx ->
    // add custom code here
}

grails.plugins.springsecurity.onAbstractAuthenticationFailureEvent = { e, appCtx ->
    // add custom code here
}

grails.plugins.springsecurity.onAuthenticationSuccessEvent = { e, appCtx ->
    // add custom code here
}

grails.plugins.springsecurity.onAuthenticationSwitchUserEvent = { e, appCtx ->
    // add custom code here
}

grails.plugins.springsecurity.onAuthorizationEvent = { e, appCtx ->
    // add custom code here
}

When using this, it is important to set grails.plugins.springsecurity.useSecurityEventListener to true, as shown in the first line above. As per the documentation in the plugin docs:

When a user authenticates, Spring Security initially fires an AuthenticationSuccessEvent (onAuthenticationSuccessEvent callback). This event fires before the Authentication is registered in the SecurityContextHolder, which means that the springSecurityService methods that access the logged-in user will not work. Later in the processing a second event is fired, an InteractiveAuthenticationSuccessEvent (onInteractiveAuthenticationSuccessEvent callback), and when this happens the SecurityContextHolder will have the Authentication.

This means that if you don't need to retrieve the logged-in user, you can hook to onAuthenticationSuccessEvent. Otherwise, hook with onInteractiveAuthenticationSuccessEvent.

Example

This example will log all successful logins to a database table.

Model

I assume that the model for the user is SecUser as discussed here. The logs will be saved in the SecUserLog data model.

class SecUserLog {
    String username
    String action

    static constraints = {
        username blank: false, nullable: false
        action blank: false, nullable: false
    }
}

Service

I created a service to wrap all business logic around the SecUserLog domain. I used a service to show later how to access it inside Config.groovy.

class SecUserLogService {
    static transactional = true

    def springSecurityService

    def addUserLog(String action) {
        def user = springSecurityService.currentUser
        SecUserLog log = new SecUserLog(username: user.username, action: action)
        log.save()
    }
}

Note that the springSecurityService bean is provided by the Spring Security Core plugin.

Config.groovy entries

This is the code I added in Config.groovy:

grails.plugins.springsecurity.useSecurityEventListener = true

grails.plugins.springsecurity.onInteractiveAuthenticationSuccessEvent = { e, appCtx ->
    def secUserLogService = appCtx.getBean('secUserLogService')
    secUserLogService.addUserLog("login")
}

The appCtx object is the application context, which can be used to look up Spring-managed beans – and that includes the services. After that, you can use the service as you please.

Reference: Grails Spring Security Core Plugin – Registering Callback Closures from our JCG partner Jonathan Tan at the Grails cookbook blog....

Spring RestTemplate with a linked resource

Spring Data REST is an awesome project that provides mechanisms to expose the resources underlying a Spring Data based repository as REST resources.

Exposing a service with a linked resource

Consider two simple JPA based entities, Course and Teacher:

@Entity
@Table(name = "teachers")
public class Teacher {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "id")
    private Long id;

    @Size(min = 2, max = 50)
    @Column(name = "name")
    private String name;

    @Column(name = "department")
    @Size(min = 2, max = 50)
    private String department;
    ...
}

@Entity
@Table(name = "courses")
public class Course {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    @Column(name = "id")
    private Long id;

    @Size(min = 1, max = 10)
    @Column(name = "coursecode")
    private String courseCode;

    @Size(min = 1, max = 50)
    @Column(name = "coursename")
    private String courseName;

    @ManyToOne
    @JoinColumn(name = "teacher_id")
    private Teacher teacher;
    ....
}

Essentially, the relation is a many-to-one from Course to Teacher. Now, all it takes to expose these entities as REST resources is adding a @RepositoryRestResource annotation on their JPA based Spring Data repositories this way, first for the "Teacher" resource:

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;

import univ.domain.Teacher;

@RepositoryRestResource
public interface TeacherRepo extends JpaRepository<Teacher, Long> {
}

and for exposing the Course resource:

@RepositoryRestResource
public interface CourseRepo extends JpaRepository<Course, Long> {
}

With this done, and assuming a few teachers and a few courses are already in the datastore, a GET on courses would yield a response of the following type:

{
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/api/courses{?page,size,sort}",
      "templated" : true
    }
  },
  "_embedded" : {
    "courses" : [ {
      "courseCode" : "Course1",
      "courseName" : "Course Name 1",
      "version" : 0,
      "_links" : {
        "self" : {
          "href" : "http://localhost:8080/api/courses/1"
        },
        "teacher" : {
          "href" : "http://localhost:8080/api/courses/1/teacher"
        }
      }
    }, {
      "courseCode" : "Course2",
      "courseName" : "Course Name 2",
      "version" : 0,
      "_links" : {
        "self" : {
          "href" : "http://localhost:8080/api/courses/2"
        },
        "teacher" : {
          "href" : "http://localhost:8080/api/courses/2/teacher"
        }
      }
    } ]
  },
  "page" : {
    "size" : 20,
    "totalElements" : 2,
    "totalPages" : 1,
    "number" : 0
  }
}

and a specific course looks like this:

{
  "courseCode" : "Course1",
  "courseName" : "Course Name 1",
  "version" : 0,
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/api/courses/1"
    },
    "teacher" : {
      "href" : "http://localhost:8080/api/courses/1/teacher"
    }
  }
}

If you are wondering what "_links" and "_embedded" are – Spring Data REST uses Hypertext Application Language (or HAL for short) to represent the links, say the one between a course and a teacher.

HAL Based REST service – Using RestTemplate

Given this HAL based REST service, the question that I had in my mind was how to write a client to this service. I am sure there are better ways of doing this, but what follows worked for me and I welcome any cleaner ways of writing the client.

First, I modified the RestTemplate to register a custom JSON converter that understands HAL based links:

public RestTemplate getRestTemplateWithHalMessageConverter() {
    RestTemplate restTemplate = new RestTemplate();
    List<HttpMessageConverter<?>> existingConverters = restTemplate.getMessageConverters();
    List<HttpMessageConverter<?>> newConverters = new ArrayList<>();
    newConverters.add(getHalMessageConverter());
    newConverters.addAll(existingConverters);
    restTemplate.setMessageConverters(newConverters);
    return restTemplate;
}

private HttpMessageConverter getHalMessageConverter() {
    ObjectMapper objectMapper = new ObjectMapper();
    objectMapper.registerModule(new Jackson2HalModule());
    MappingJackson2HttpMessageConverter halConverter =
            new TypeConstrainedMappingJackson2HttpMessageConverter(ResourceSupport.class);
    halConverter.setSupportedMediaTypes(Arrays.asList(HAL_JSON));
    halConverter.setObjectMapper(objectMapper);
    return halConverter;
}

The Jackson2HalModule is provided by the Spring HATEOAS project and understands the HAL representation.

Given this shiny new RestTemplate, first let us create a Teacher entity:

Teacher teacher1 = new Teacher();
teacher1.setName("Teacher 1");
teacher1.setDepartment("Department 1");
URI teacher1Uri =
        testRestTemplate.postForLocation("http://localhost:8080/api/teachers", teacher1);

Note that when the entity is created, the response is an HTTP status code of 201 with the Location header pointing to the URI of the newly created resource. Spring RestTemplate provides a neat way of posting and getting hold of this Location header through an API. So now we have a teacher1Uri representing the newly created teacher.

Given this teacher URI, let us now retrieve the teacher. The raw JSON for the teacher resource looks like the following:

{
  "name" : "Teacher 1",
  "department" : "Department 1",
  "version" : 0,
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/api/teachers/1"
    }
  }
}

and to retrieve this using RestTemplate:

ResponseEntity<Resource<Teacher>> teacherResponseEntity =
        testRestTemplate.exchange("http://localhost:8080/api/teachers/1", HttpMethod.GET, null,
                new ParameterizedTypeReference<Resource<Teacher>>() {});

Resource<Teacher> teacherResource = teacherResponseEntity.getBody();

Link teacherLink = teacherResource.getLink("self");
String teacherUri = teacherLink.getHref();

Teacher teacher = teacherResource.getContent();

Jackson2HalModule is the one which helps unpack the links this cleanly and get hold of the Teacher entity itself. I have previously explained ParameterizedTypeReference here.

Now, to a more tricky part: creating a Course. Creating a course is tricky as it has a relation to the Teacher, and representing this relation using HAL is not that straightforward. A raw POST to create the course would look like this:

{
  "courseCode" : "Course1",
  "courseName" : "Course Name 1",
  "version" : 0,
  "teacher" : "http://localhost:8080/api/teachers/1"
}

Note how the reference to the teacher is a URI – this is how HAL represents an embedded reference, specifically for POST'ed content. So now to get this form through RestTemplate.

First, to create a Course:

Course course1 = new Course();
course1.setCourseCode("Course1");
course1.setCourseName("Course Name 1");

At this point, it is easier to handle providing the teacher link by dealing with a JSON tree representation and adding in the teacher link as the teacher URI:

ObjectMapper objectMapper = getObjectMapperWithHalModule();
ObjectNode jsonNodeCourse1 = (ObjectNode) objectMapper.valueToTree(course1);
jsonNodeCourse1.put("teacher", teacher1Uri.getPath());

and posting this should create the course with the linked teacher:

URI course1Uri = testRestTemplate.postForLocation(coursesUri, jsonNodeCourse1);

and to retrieve this newly created Course:

ResponseEntity<Resource<Course>> courseResponseEntity =
        testRestTemplate.exchange(course1Uri, HttpMethod.GET, null,
                new ParameterizedTypeReference<Resource<Course>>() {});

Resource<Course> courseResource = courseResponseEntity.getBody();
Link teacherLinkThroughCourse = courseResource.getLink("teacher");

This concludes how to use the RestTemplate to create and retrieve a linked resource; alternate ideas are welcome. If you are interested in exploring this further, the entire sample is available at this github repo – and the test is here.

Reference: Spring RestTemplate with a linked resource from our JCG partner Biju Kunjummen at the all and sundry blog....
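As a small follow-up to the snippets above, the "teacher" link carried by the retrieved course can be dereferenced with the same HAL-aware RestTemplate. This sketch is not part of the original post; it simply reuses the testRestTemplate and teacherLinkThroughCourse variables from the article:

// Follow the course's "teacher" link to load the associated teacher resource.
ResponseEntity<Resource<Teacher>> linkedTeacherResponse =
        testRestTemplate.exchange(teacherLinkThroughCourse.getHref(), HttpMethod.GET, null,
                new ParameterizedTypeReference<Resource<Teacher>>() {});

Teacher linkedTeacher = linkedTeacherResponse.getBody().getContent();
// linkedTeacher.getName() should now be "Teacher 1"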

Spark: Write to CSV file

A couple of weeks ago I wrote about how I'd been using Spark to explore a City of Chicago crime data set, and having worked out how many of each crime had been committed I wanted to write that to a CSV file. Spark provides a saveAsTextFile function which allows us to save RDDs, so I refactored my code into the following format to allow me to use that:

import java.io.File

import au.com.bytecode.opencsv.CSVParser
import org.apache.hadoop.fs.FileUtil
import org.apache.spark.rdd.RDD
import org.apache.spark.SparkContext._

def dropHeader(data: RDD[String]): RDD[String] = {
  data.mapPartitionsWithIndex((idx, lines) => {
    if (idx == 0) {
      lines.drop(1)
    }
    lines
  })
}

// https://data.cityofchicago.org/Public-Safety/Crimes-2001-to-present/ijzp-q8t2
val crimeFile = "/Users/markneedham/Downloads/Crimes_-_2001_to_present.csv"

val crimeData = sc.textFile(crimeFile).cache()
val withoutHeader: RDD[String] = dropHeader(crimeData)

val file = "/tmp/primaryTypes.csv"
FileUtil.fullyDelete(new File(file))

val partitions: RDD[(String, Int)] = withoutHeader.mapPartitions(lines => {
  val parser = new CSVParser(',')
  lines.map(line => {
    val columns = parser.parseLine(line)
    (columns(5), 1)
  })
})

val counts = partitions.
  reduceByKey {case (x,y) => x + y}.
  sortBy {case (key, value) => -value}.
  map { case (key, value) => Array(key, value).mkString(",") }

counts.saveAsTextFile(file)

If we run that code from the Spark shell we end up with a folder called /tmp/primaryTypes.csv containing multiple part files:

$ ls -lah /tmp/primaryTypes.csv/
total 496
drwxr-xr-x  66 markneedham  wheel   2.2K 30 Nov 07:17 .
drwxrwxrwt  80 root         wheel   2.7K 30 Nov 07:16 ..
-rw-r--r--   1 markneedham  wheel     8B 30 Nov 07:16 ._SUCCESS.crc
-rw-r--r--   1 markneedham  wheel    12B 30 Nov 07:16 .part-00000.crc
-rw-r--r--   1 markneedham  wheel    12B 30 Nov 07:16 .part-00001.crc
-rw-r--r--   1 markneedham  wheel    12B 30 Nov 07:16 .part-00002.crc
-rw-r--r--   1 markneedham  wheel    12B 30 Nov 07:16 .part-00003.crc
...
-rwxrwxrwx   1 markneedham  wheel     0B 30 Nov 07:16 _SUCCESS
-rwxrwxrwx   1 markneedham  wheel    28B 30 Nov 07:16 part-00000
-rwxrwxrwx   1 markneedham  wheel    17B 30 Nov 07:16 part-00001
-rwxrwxrwx   1 markneedham  wheel    23B 30 Nov 07:16 part-00002
-rwxrwxrwx   1 markneedham  wheel    16B 30 Nov 07:16 part-00003
...

If we look at some of those part files we can see that it has written the crime types and counts as expected:

$ cat /tmp/primaryTypes.csv/part-00000
THEFT,859197
BATTERY,757530

$ cat /tmp/primaryTypes.csv/part-00003
BURGLARY,257310

This is fine if we're going to pass those CSV files into another Hadoop based job, but I actually want a single CSV file, so it's not quite what I want. One way to achieve this is to force everything to be calculated on one partition, which means we only get one part file generated:

val counts = partitions.repartition(1).
  reduceByKey {case (x,y) => x + y}.
  sortBy {case (key, value) => -value}.
  map { case (key, value) => Array(key, value).mkString(",") }

counts.saveAsTextFile(file)

part-00000 now looks like this:

$ cat !$
cat /tmp/primaryTypes.csv/part-00000
THEFT,859197
BATTERY,757530
NARCOTICS,489528
CRIMINAL DAMAGE,488209
BURGLARY,257310
OTHER OFFENSE,253964
ASSAULT,247386
MOTOR VEHICLE THEFT,197404
ROBBERY,157706
DECEPTIVE PRACTICE,137538
CRIMINAL TRESPASS,124974
PROSTITUTION,47245
WEAPONS VIOLATION,40361
PUBLIC PEACE VIOLATION,31585
OFFENSE INVOLVING CHILDREN,26524
CRIM SEXUAL ASSAULT,14788
SEX OFFENSE,14283
GAMBLING,10632
LIQUOR LAW VIOLATION,8847
ARSON,6443
INTERFERE WITH PUBLIC OFFICER,5178
HOMICIDE,4846
KIDNAPPING,3585
INTERFERENCE WITH PUBLIC OFFICER,3147
INTIMIDATION,2471
STALKING,1985
OFFENSES INVOLVING CHILDREN,355
OBSCENITY,219
PUBLIC INDECENCY,86
OTHER NARCOTIC VIOLATION,80
NON-CRIMINAL,12
RITUALISM,12
OTHER OFFENSE ,6
NON - CRIMINAL,2
NON-CRIMINAL (SUBJECT SPECIFIED),2

This works, but it's quite a bit slower than when we were doing the aggregation across partitions, so it's not ideal. Instead, what we can do is make use of one of Hadoop's merge functions which squashes part files together into a single file. First we import Hadoop into our SBT file:

libraryDependencies += "org.apache.hadoop" % "hadoop-hdfs" % "2.5.2"

Now let's bring our merge function into the Spark shell:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs._

def merge(srcPath: String, dstPath: String): Unit = {
  val hadoopConfig = new Configuration()
  val hdfs = FileSystem.get(hadoopConfig)
  FileUtil.copyMerge(hdfs, new Path(srcPath), hdfs, new Path(dstPath), false, hadoopConfig, null)
}

And now let's make use of it:

val file = "/tmp/primaryTypes.csv"
FileUtil.fullyDelete(new File(file))

val destinationFile = "/tmp/singlePrimaryTypes.csv"
FileUtil.fullyDelete(new File(destinationFile))

val counts = partitions.
  reduceByKey {case (x,y) => x + y}.
  sortBy {case (key, value) => -value}.
  map { case (key, value) => Array(key, value).mkString(",") }

counts.saveAsTextFile(file)

merge(file, destinationFile)

And now we've got the best of both worlds:

$ cat /tmp/singlePrimaryTypes.csv
THEFT,859197
BATTERY,757530
NARCOTICS,489528
CRIMINAL DAMAGE,488209
BURGLARY,257310
OTHER OFFENSE,253964
ASSAULT,247386
MOTOR VEHICLE THEFT,197404
ROBBERY,157706
DECEPTIVE PRACTICE,137538
CRIMINAL TRESPASS,124974
PROSTITUTION,47245
WEAPONS VIOLATION,40361
PUBLIC PEACE VIOLATION,31585
OFFENSE INVOLVING CHILDREN,26524
CRIM SEXUAL ASSAULT,14788
SEX OFFENSE,14283
GAMBLING,10632
LIQUOR LAW VIOLATION,8847
ARSON,6443
INTERFERE WITH PUBLIC OFFICER,5178
HOMICIDE,4846
KIDNAPPING,3585
INTERFERENCE WITH PUBLIC OFFICER,3147
INTIMIDATION,2471
STALKING,1985
OFFENSES INVOLVING CHILDREN,355
OBSCENITY,219
PUBLIC INDECENCY,86
OTHER NARCOTIC VIOLATION,80
RITUALISM,12
NON-CRIMINAL,12
OTHER OFFENSE ,6
NON - CRIMINAL,2
NON-CRIMINAL (SUBJECT SPECIFIED),2

The full code is available as a gist if you want to play around with it.

Reference: Spark: Write to CSV file from our JCG partner Mark Needham at the Mark Needham Blog blog....

Spring request-level memoization

Introduction

Memoization is a method-level caching technique for speeding up consecutive invocations. This post will demonstrate how you can achieve request-level repeatable reads for any data source, using Spring AOP only.

Spring Caching

Spring offers a very useful caching abstraction, allowing you to decouple the application logic from the caching implementation details. Spring Caching uses an application-level scope, so for request-only memoization we need to take a DIY approach.

Request-level Caching

A request-level cache entry life-cycle is always bound to the current request scope. Such a cache is very similar to the Hibernate Persistence Context, which offers session-level repeatable reads. Repeatable reads are mandatory for preventing lost updates, even for NoSQL solutions.

Step-by-step implementation

First we are going to define a memoization marker annotation:

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface Memoize {
}

This annotation is going to explicitly mark all methods that need to be memoized. To distinguish different method invocations, we are going to encapsulate the method call info into the following object type:

public class InvocationContext {

    public static final String TEMPLATE = "%s.%s(%s)";

    private final Class targetClass;
    private final String targetMethod;
    private final Object[] args;

    public InvocationContext(Class targetClass, String targetMethod, Object[] args) {
        this.targetClass = targetClass;
        this.targetMethod = targetMethod;
        this.args = args;
    }

    public Class getTargetClass() {
        return targetClass;
    }

    public String getTargetMethod() {
        return targetMethod;
    }

    public Object[] getArgs() {
        return args;
    }

    @Override
    public boolean equals(Object that) {
        return EqualsBuilder.reflectionEquals(this, that);
    }

    @Override
    public int hashCode() {
        return HashCodeBuilder.reflectionHashCode(this);
    }

    @Override
    public String toString() {
        return String.format(TEMPLATE, targetClass.getName(), targetMethod, Arrays.toString(args));
    }
}

Few know about the awesomeness of Spring request/session bean scopes. Because we require a request-level memoization scope, we can simplify our design with a Spring request scope that hides the actual HttpSession resolving logic:

@Component
@Scope(proxyMode = ScopedProxyMode.TARGET_CLASS, value = "request")
public class RequestScopeCache {

    public static final Object NONE = new Object();

    private final Map<InvocationContext, Object> cache = new HashMap<InvocationContext, Object>();

    public Object get(InvocationContext invocationContext) {
        return cache.containsKey(invocationContext) ? cache.get(invocationContext) : NONE;
    }

    public void put(InvocationContext methodInvocation, Object result) {
        cache.put(methodInvocation, result);
    }
}

Since a mere annotation means nothing without a runtime processing engine, we must therefore define a Spring aspect implementing the actual memoization logic:

@Aspect
public class MemoizerAspect {

    @Autowired
    private RequestScopeCache requestScopeCache;

    @Around("@annotation(com.vladmihalcea.cache.Memoize)")
    public Object memoize(ProceedingJoinPoint pjp) throws Throwable {
        InvocationContext invocationContext = new InvocationContext(
            pjp.getSignature().getDeclaringType(),
            pjp.getSignature().getName(),
            pjp.getArgs()
        );
        Object result = requestScopeCache.get(invocationContext);
        if (RequestScopeCache.NONE == result) {
            result = pjp.proceed();
            LOGGER.info("Memoizing result {}, for method invocation: {}", result, invocationContext);
            requestScopeCache.put(invocationContext, result);
        } else {
            LOGGER.info("Using memoized result: {}, for method invocation: {}", result, invocationContext);
        }
        return result;
    }
}

Testing time

Let's put all this to a test. For simplicity's sake, we are going to emulate the request-level scope memoization requirements with a Fibonacci number calculator:

@Component
public class FibonacciServiceImpl implements FibonacciService {

    @Autowired
    private ApplicationContext applicationContext;

    private FibonacciService fibonacciService;

    @PostConstruct
    private void init() {
        fibonacciService = applicationContext.getBean(FibonacciService.class);
    }

    @Memoize
    public int compute(int i) {
        LOGGER.info("Calculate fibonacci for number {}", i);
        if (i == 0 || i == 1)
            return i;
        return fibonacciService.compute(i - 2) + fibonacciService.compute(i - 1);
    }
}

If we are to calculate the 10th Fibonacci number, we'll get the following result:

Calculate fibonacci for number 10
Calculate fibonacci for number 8
Calculate fibonacci for number 6
Calculate fibonacci for number 4
Calculate fibonacci for number 2
Calculate fibonacci for number 0
Memoizing result 0, for method invocation: com.vladmihalcea.cache.FibonacciService.compute([0])
Calculate fibonacci for number 1
Memoizing result 1, for method invocation: com.vladmihalcea.cache.FibonacciService.compute([1])
Memoizing result 1, for method invocation: com.vladmihalcea.cache.FibonacciService.compute([2])
Calculate fibonacci for number 3
Using memoized result: 1, for method invocation: com.vladmihalcea.cache.FibonacciService.compute([1])
Using memoized result: 1, for method invocation: com.vladmihalcea.cache.FibonacciService.compute([2])
Memoizing result 2, for method invocation: com.vladmihalcea.cache.FibonacciService.compute([3])
Memoizing result 3, for method invocation: com.vladmihalcea.cache.FibonacciService.compute([4])
Calculate fibonacci for number 5
Using memoized result: 2, for method invocation: com.vladmihalcea.cache.FibonacciService.compute([3])
Using memoized result: 3, for method invocation: com.vladmihalcea.cache.FibonacciService.compute([4])
Memoizing result 5, for method invocation: com.vladmihalcea.cache.FibonacciService.compute([5])
Memoizing result 8, for method invocation: com.vladmihalcea.cache.FibonacciService.compute([6])
Calculate fibonacci for number 7
Using memoized result: 5, for method invocation: com.vladmihalcea.cache.FibonacciService.compute([5])
Using memoized result: 8, for method invocation: com.vladmihalcea.cache.FibonacciService.compute([6])
Memoizing result 13, for method invocation: com.vladmihalcea.cache.FibonacciService.compute([7])
Memoizing result 21, for method invocation: com.vladmihalcea.cache.FibonacciService.compute([8])
Calculate fibonacci for number 9
Using memoized result: 13, for method invocation: com.vladmihalcea.cache.FibonacciService.compute([7])
Using memoized result: 21, for method invocation: com.vladmihalcea.cache.FibonacciService.compute([8])
Memoizing result 34, for method invocation: com.vladmihalcea.cache.FibonacciService.compute([9])
Memoizing result 55, for method invocation: com.vladmihalcea.cache.FibonacciService.compute([10])

Conclusion

Memoization is a cross-cutting concern, and Spring AOP allows you to decouple the caching details from the actual application logic code. Code available on GitHub.

Reference: Spring request-level memoization from our JCG partner Vlad Mihalcea at the Vlad Mihalcea's Blog blog....
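One piece the post leaves implicit is how the aspect gets wired up. Below is a minimal sketch of one possible Java configuration, not the post's actual setup: it assumes the classes above live in the com.vladmihalcea.cache package, that spring-aop and aspectjweaver are on the classpath, and that the request scope is available (in a plain servlet container that also means registering a RequestContextListener).

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.EnableAspectJAutoProxy;

@Configuration
@EnableAspectJAutoProxy                      // turns @Aspect beans into runtime proxies
@ComponentScan("com.vladmihalcea.cache")     // picks up RequestScopeCache and FibonacciServiceImpl
public class MemoizationConfig {

    // MemoizerAspect is not annotated with @Component in the post, so expose it as a bean here;
    // its @Autowired RequestScopeCache field is injected by the container.
    @Bean
    public MemoizerAspect memoizerAspect() {
        return new MemoizerAspect();
    }
}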

Spring Data JPA Tutorial: Introduction

Creating repositories that use the Java Persistence API is a cumbersome process that takes a lot of time and requires a lot of boilerplate code. We can eliminate some of that boilerplate by following these steps:

- Create an abstract base repository class that provides CRUD operations for entities.
- Create the concrete repository class that extends the abstract base repository class.

The problem with this approach is that we still have to write the code that creates our database queries and invokes them. To make matters worse, we have to do this every time we want to create a new database query. This is a waste of time. What would you say if I told you that we can create JPA repositories without writing any boilerplate code? The odds are that you might not believe me, but Spring Data JPA helps us do just that. The website of the Spring Data JPA project states that:

Implementing a data access layer of an application has been cumbersome for quite a while. Too much boilerplate code has to be written to execute simple queries as well as perform pagination, and auditing. Spring Data JPA aims to significantly improve the implementation of data access layers by reducing the effort to the amount that's actually needed. As a developer you write your repository interfaces, including custom finder methods, and Spring will provide the implementation automatically.

This blog post provides an introduction to Spring Data JPA. We will learn what Spring Data JPA really is and take a quick look at the Spring Data repository interfaces. Let's get started.

What Is Spring Data JPA?

Spring Data JPA is not a JPA provider. It is a library/framework that adds an extra layer of abstraction on top of our JPA provider. If we decide to use Spring Data JPA, the repository layer of our application contains three layers, described in the following:

- Spring Data JPA provides support for creating JPA repositories by extending the Spring Data repository interfaces.
- Spring Data Commons provides the infrastructure that is shared by the datastore-specific Spring Data projects.
- The JPA provider implements the Java Persistence API.

The following figure illustrates the structure of our repository layer.

Additional reading: Spring Data JPA versus JPA: What's the difference?

At first it seems that Spring Data JPA makes our application more complicated, and in a way that is true. It does add an additional layer to our repository layer, but at the same time it frees us from writing any boilerplate code. That sounds like a good tradeoff. Right?

Introduction to Spring Data Repositories

The power of Spring Data JPA lies in the repository abstraction that is provided by the Spring Data Commons project and extended by the datastore-specific sub-projects. We can use Spring Data JPA without paying any attention to the actual implementation of the repository abstraction, but we have to be familiar with the Spring Data repository interfaces. These interfaces are described in the following.

First, the Spring Data Commons project provides the following interfaces:

- The Repository<T, ID extends Serializable> interface is a marker interface that has two purposes: it captures the type of the managed entity and the type of the entity's id, and it helps the Spring container to discover the "concrete" repository interfaces during classpath scanning.
- The CrudRepository<T, ID extends Serializable> interface provides CRUD operations for the managed entity.
- The PagingAndSortingRepository<T, ID extends Serializable> interface declares the methods that are used to sort and paginate entities that are retrieved from the database.
- The QueryDslPredicateExecutor<T> interface is not a "repository interface". It declares the methods that are used to retrieve entities from the database by using QueryDsl Predicate objects.

Second, the Spring Data JPA project provides the following interfaces:

- The JpaRepository<T, ID extends Serializable> interface is a JPA specific repository interface that combines the methods declared by the common repository interfaces behind a single interface.
- The JpaSpecificationExecutor<T> interface is not a "repository interface". It declares the methods that are used to retrieve entities from the database by using Specification<T> objects that use the JPA criteria API.

The repository hierarchy is illustrated in a figure in the original post.

That is nice, but how can we use them? That is a fair question. The next parts of this tutorial will answer that question, but essentially we have to follow these steps (a small sketch follows at the end of this post):

- Create a repository interface and extend one of the repository interfaces provided by Spring Data.
- Add custom query methods to the created repository interface (if we need them, that is).
- Inject the repository interface into another component and use the implementation that is provided automatically by Spring.

Let's move on and summarize what we learned from this blog post.

Summary

This blog post has taught us two things:

- Spring Data JPA is not a JPA provider. It simply "hides" the Java Persistence API (and the JPA provider) behind its repository abstraction.
- Spring Data provides multiple repository interfaces that are used for different purposes.

The next part of this tutorial describes how we can get the required dependencies. If you want to know more about Spring Data JPA, you should read my Spring Data JPA Tutorial.

Reference: Spring Data JPA Tutorial: Introduction from our JCG partner Petri Kainulainen at the Petri Kainulainen blog....
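To make the three steps above concrete, here is a minimal sketch of what such a repository and its usage might look like. The Todo entity, the findByTitle query method, and the service are invented for illustration and are not part of the tutorial itself:

import java.util.List;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Service;

// A hypothetical entity managed by the repository.
@Entity
class Todo {
    @Id
    @GeneratedValue
    private Long id;
    private String title;

    protected Todo() {}                     // required by JPA

    Todo(String title) { this.title = title; }

    String getTitle() { return title; }
}

// Step 1: extend a Spring Data repository interface; no implementation class is written.
// Step 2: add a custom query method; Spring Data derives the query from the method name.
interface TodoRepository extends JpaRepository<Todo, Long> {
    List<Todo> findByTitle(String title);
}

// Step 3: inject the repository and use the implementation that Spring provides at runtime.
@Service
class TodoService {

    private final TodoRepository repository;

    @Autowired
    TodoService(TodoRepository repository) {
        this.repository = repository;
    }

    List<Todo> findByTitle(String title) {
        return repository.findByTitle(title);
    }
}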

Can MicroServices Architecture Solve All Your Problems?

IT is one field where you can find new things coming every day. These days the developer community websites are flooded with MicroServices and Docker related stuff. Among them, the idea of MicroServices is very exciting and encourages a better way of building software systems. But as with any architectural style, there are pros and cons to every approach. Before discussing the good and bad sides of the MicroServices approach, first let me say what I understood about MicroServices:

MicroServices architecture encourages building small, focused subsystems which can be integrated into the whole system, preferably using the REST protocol.

Now let's discuss various aspects of MicroServices architecture.

The dream of every software architect/developer: First of all, the idea of MicroServices is not new at all. From the very beginning our elders suggested writing classes focused on a single responsibility and writing methods that do one particular thing and do it well. Also, we were encouraged to build separate modules which perform some functionally related tasks. Then we bundle all these separate modules together and build an application, delegating the appropriate tasks to the respective modules. This is what we have tried to do for many years. But the idea of MicroServices took this approach to the next level, where you can deploy each module as an individual deployable unit and each service can communicate with any other service based on some agreed protocol (preferably REST, another trendy cool thing!).

So what are the advantages of this MicroServices architecture? There are plenty:

- You will have many small services with manageable codebases which are easy to read and understand.
- You can confidently refactor or rewrite an entire service because there won't be any impact on other services.
- Each microservice can be deployed independently, so that adding new features or upgrading any existing software/hardware platform won't affect other services.
- You can easily adopt the next cool technology. If one of the microservices is a very critical service and performance is the highest priority, then we can write that particular service using Scala in order to leverage your multi-core hardware support.
- If you are a service provider company, you can sell each service separately, possibly making better money compared to selling the whole monolithic product.
- And the most important factor: the term MicroService is cool!

What is the other side of MicroServices architecture?

As with any approach, MicroServices also has some downsides and associated costs.

"Great power comes with great responsibility." – Uncle Ben

Let us see what the challenges are in implementing a system using MicroServices architecture.

The idea of MicroServices is very simple but very complex to implement in reality. In a monolithic system, the communication between various subsystems is mostly direct object communication. But in a MicroServices based system, in order to communicate with other services you may use REST services, which means additional HTTP call overhead and its inherent issues like network latency, possible communication failures, etc. So we need to consider various aspects while implementing inter-service communication logic, such as retry, fail-over and service-down scenarios (a small retry sketch follows at the end of this post).

How good is your DevOps infrastructure? In order to go with MicroServices architecture, the organization should have a good DevOps team to properly maintain the dozens of MicroService applications. Does your organization have a DevOps culture? Or does your organization have the problem of a blame game between Devs and Ops? If your organization doesn't have a good DevOps culture and software/hardware resources, then MicroServices architecture will be much more difficult to adopt.

Are we fixing the actual problem at all? Now many people are saying MicroServices architecture is better than monolithic architecture. But is monolithic architecture the actual reason why many projects are failing? Will MicroServices architecture save the projects from failing? I guess NO. Think about the reasons for your previously failed projects. Did those projects fail because of technology issues or people issues? I have never seen a project fail because of a wrong technology selection or a wrong architectural approach. But I have seen many projects fail just because of problems with people. I feel there are more severe issues than architecture issues causing projects to fail, such as:

- Having developers without sufficient skills
- Having developers who don't want to learn anything new
- Having developers who don't have the courage to say "NO, we can't do that in that time"
- Having architects who abandoned coding years ago
- Having architects who think they know everything and don't need to listen to their developers' pain
- Having managers who just blame the developers for not meeting the imposed deadlines, without ever asking the developers for time-lines

These are the real problems that are actually causing project failures. Now do you think just moving to MicroServices architecture saves IT without fixing these problems? Continuously innovating new ways of doing things is awesome and is required to move ahead. At the same time, assuming "the next cool methodology/technology will fix all the problems" is also wrong.

So those of you who are just about to jump on the MicroServices boat: THINK. FIX THE REAL PROBLEMS FIRST. You can't fill a bottle which has a hole at its bottom.

Reference: Can MicroServices Architecture Solve All Your Problems? from our JCG partner Siva Reddy at the My Experiments on Technology blog....
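As promised above, here is a rough sketch of what "consider retry when calling another service" can mean in plain Spring/Java. It is not from the original article: the service URL, the retry count and the fallback value are arbitrary illustration choices, and production code would normally reach for a dedicated library (circuit breakers, backoff policies) rather than a hand-rolled loop.

import org.springframework.web.client.RestClientException;
import org.springframework.web.client.RestTemplate;

public class CatalogClient {

    private static final int MAX_ATTEMPTS = 3;   // arbitrary retry budget for the example

    private final RestTemplate restTemplate = new RestTemplate();

    // Calls a (hypothetical) catalog microservice and retries on transient failures.
    public String fetchProductName(long productId) {
        RestClientException lastFailure = null;
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                return restTemplate.getForObject(
                        "http://catalog-service/products/{id}/name", String.class, productId);
            } catch (RestClientException e) {
                lastFailure = e;   // network latency, connection refused, 5xx, ...
            }
        }
        // Fallback when the service stays down: degrade gracefully instead of failing the request.
        return "unknown product (" + lastFailure.getMessage() + ")";
    }
}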

What I look for in frameworks

In every project the discussion comes up over and over again: should we use framework X? Or Y? Or no framework at all? Even when you limit yourself to the frameworks for web development in the Java space, the choices are so plentiful that nobody can know them all. So I need a quick way to identify which frameworks sound promising to me and which I keep for weekend projects.

Stay away from the new kid on the block. While it might be fun to play with the coolest, newest thing, I work on projects that have a life cycle of 10-30 years. I wouldn't want to support an application using some library that was cool between March and July in 1996. Therefore I try not to put that kind of burden on others.

Do one thing and do it well. A bad example for this is Hibernate/JPA. It does (or tries to) provide all of the following:

- mapping between a relational and an object-oriented model
- caching
- change detection
- a query DSL

It is kind of OK for a framework or library to provide multiple services, if you can decide for each service separately whether you want to use it or not. But if it controls too many aspects of your project, the chance that it doesn't do anything well gets huge. And you won't be able to exchange it easily, because now you have to replace half a dozen libraries at once.

Method calls are cool. Annotations are OK. Byte code manipulation is scary. Code generation is a reason to run for the hills. In that list, only method calls can be abstracted over properly. All the other stuff tends to get in your way. Annotations are kind of harmless, but it is easy to get into situations where you have more annotations than actual code. Byte code manipulation starts to put some serious constraints on what you can do in your code. And code generation additionally slows down your build process.

Keep your fingers off my domain model. The domain model is really the important part of an application. I can change the persistence or the UI of an application, but if I have to rework the domain model, everything changes and I'm essentially rewriting the application. Also, I need all the flexibility the programming language of choice offers to design the domain model. I don't want to get restricted by some stupid framework that requires default constructors or getters and setters for all fields (a small sketch of the difference follows at the end of this post).

Can we handle it? There are many things that sound really awesome, but they require such a different style of coding that many developers will have a hard time tackling it. And just because I think I can handle it doesn't necessarily mean I actually can. So better stay simple and old-fashioned.

Reference: What I look for in frameworks from our JCG partner Jens Schauder at the Schauderhaft blog....
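To illustrate the domain-model point, here is a small, invented example of the difference between a class shaped by framework requirements and one that is free to enforce its own invariants. The Transfer/TransferDto names are made up purely for illustration:

import java.math.BigDecimal;

// What a restrictive framework often forces on you: default constructor,
// mutable fields, getters and setters everywhere - the invariants live somewhere else.
class TransferDto {
    private String targetAccount;
    private BigDecimal amount;

    public TransferDto() {}

    public String getTargetAccount() { return targetAccount; }
    public void setTargetAccount(String targetAccount) { this.targetAccount = targetAccount; }
    public BigDecimal getAmount() { return amount; }
    public void setAmount(BigDecimal amount) { this.amount = amount; }
}

// What the domain model can look like when the language is not restricted:
// immutable, and impossible to construct in an invalid state.
final class Transfer {
    private final String targetAccount;
    private final BigDecimal amount;

    Transfer(String targetAccount, BigDecimal amount) {
        if (amount.signum() <= 0) {
            throw new IllegalArgumentException("transfer amount must be positive");
        }
        this.targetAccount = targetAccount;
        this.amount = amount;
    }

    String targetAccount() { return targetAccount; }
    BigDecimal amount()    { return amount; }
}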

Spark: Write to CSV file with header using saveAsFile

In my last blog post I showed how to write to a single CSV file using Spark and Hadoop, and the next thing I wanted to do was add a header row to the resulting file. Hadoop's FileUtil#copyMerge function does take a String parameter, but it adds this text to the end of each partition file, which isn't quite what we want. However, if we copy that function into our own FileUtil class we can restructure it to do what we want:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.io.IOUtils;

import java.io.IOException;

public class MyFileUtil {

    public static boolean copyMergeWithHeader(FileSystem srcFS, Path srcDir, FileSystem dstFS,
                                              Path dstFile, boolean deleteSource,
                                              Configuration conf, String header) throws IOException {
        dstFile = checkDest(srcDir.getName(), dstFS, dstFile, false);
        if (!srcFS.getFileStatus(srcDir).isDir()) {
            return false;
        } else {
            FSDataOutputStream out = dstFS.create(dstFile);
            if (header != null) {
                out.write((header + "\n").getBytes("UTF-8"));
            }

            try {
                FileStatus[] contents = srcFS.listStatus(srcDir);

                for (int i = 0; i < contents.length; ++i) {
                    if (!contents[i].isDir()) {
                        FSDataInputStream in = srcFS.open(contents[i].getPath());

                        try {
                            IOUtils.copyBytes(in, out, conf, false);
                        } finally {
                            in.close();
                        }
                    }
                }
            } finally {
                out.close();
            }

            return deleteSource ? srcFS.delete(srcDir, true) : true;
        }
    }

    private static Path checkDest(String srcName, FileSystem dstFS, Path dst, boolean overwrite) throws IOException {
        if (dstFS.exists(dst)) {
            FileStatus sdst = dstFS.getFileStatus(dst);
            if (sdst.isDir()) {
                if (null == srcName) {
                    throw new IOException("Target " + dst + " is a directory");
                }

                return checkDest((String) null, dstFS, new Path(dst, srcName), overwrite);
            }

            if (!overwrite) {
                throw new IOException("Target " + dst + " already exists");
            }
        }
        return dst;
    }
}

We can then update our merge function to call this instead:

def merge(srcPath: String, dstPath: String, header: String): Unit = {
  val hadoopConfig = new Configuration()
  val hdfs = FileSystem.get(hadoopConfig)
  MyFileUtil.copyMergeWithHeader(hdfs, new Path(srcPath), hdfs, new Path(dstPath), false, hadoopConfig, header)
}

We call merge from our code like this:

merge(file, destinationFile, "type,count")

I wasn't sure how to import my Java based class into the Spark shell, so I compiled the code into a JAR and submitted it as a job instead:

$ sbt package
[info] Loading global plugins from /Users/markneedham/.sbt/0.13/plugins
[info] Loading project definition from /Users/markneedham/projects/spark-play/playground/project
[info] Set current project to playground (in build file:/Users/markneedham/projects/spark-play/playground/)
[info] Compiling 3 Scala sources to /Users/markneedham/projects/spark-play/playground/target/scala-2.10/classes...
[info] Packaging /Users/markneedham/projects/spark-play/playground/target/scala-2.10/playground_2.10-1.0.jar ...
[info] Done packaging.
[success] Total time: 8 s, completed 30-Nov-2014 08:12:26

$ time ./bin/spark-submit --class "WriteToCsvWithHeader" --master local[4] /path/to/playground/target/scala-2.10/playground_2.10-1.0.jar
Spark assembly has been built with Hive, including Datanucleus jars on classpath
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
...
14/11/30 08:16:15 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
14/11/30 08:16:15 INFO SparkContext: Job finished: saveAsTextFile at WriteToCsvWithHeader.scala:49, took 0.589036 s

real    0m13.061s
user    0m38.977s
sys     0m3.393s

And if we look at our destination file:

$ cat /tmp/singlePrimaryTypes.csv
type,count
THEFT,859197
BATTERY,757530
NARCOTICS,489528
CRIMINAL DAMAGE,488209
BURGLARY,257310
OTHER OFFENSE,253964
ASSAULT,247386
MOTOR VEHICLE THEFT,197404
ROBBERY,157706
DECEPTIVE PRACTICE,137538
CRIMINAL TRESPASS,124974
PROSTITUTION,47245
WEAPONS VIOLATION,40361
PUBLIC PEACE VIOLATION,31585
OFFENSE INVOLVING CHILDREN,26524
CRIM SEXUAL ASSAULT,14788
SEX OFFENSE,14283
GAMBLING,10632
LIQUOR LAW VIOLATION,8847
ARSON,6443
INTERFERE WITH PUBLIC OFFICER,5178
HOMICIDE,4846
KIDNAPPING,3585
INTERFERENCE WITH PUBLIC OFFICER,3147
INTIMIDATION,2471
STALKING,1985
OFFENSES INVOLVING CHILDREN,355
OBSCENITY,219
PUBLIC INDECENCY,86
OTHER NARCOTIC VIOLATION,80
RITUALISM,12
NON-CRIMINAL,12
OTHER OFFENSE ,6
NON - CRIMINAL,2
NON-CRIMINAL (SUBJECT SPECIFIED),2

Happy days! The code is available as a gist if you want to see all the details.

Reference: Spark: Write to CSV file with header using saveAsFile from our JCG partner Mark Needham at the Mark Needham Blog blog....