
AngularJS: Introducing modules, controllers, services

In my previous post, AngularJS Tutorial: Getting Started with AngularJS, we saw how to set up an application using Spring Boot, AngularJS and WebJars. That was a quick-start tutorial, though, and it didn't explain much about AngularJS modules, controllers and services; it was also a single-screen (only one route) application. In this part-2 tutorial we will take a look at what Angular modules, controllers and services are and how to configure and use them. We will also look into how to use ngRoute to build a multi-screen application.

If we look at the code we developed in the previous post, especially controllers.js, we clubbed the client-side controller logic and the business logic (of course we don't have any biz logic here!) together in our controllers, which is not good. As Java developers we are used to having a dozen layers, and we love making things complex and then complaining that Java is complex. But here in AngularJS things look simpler, so let's make them a little bit complex. I am just kidding!

Even if you put all your logic in a single place, as we did in controllers.js, it will work, and that is acceptable for simple applications. But if you are going to develop a large enterprise application (who said enterprise applications should be large... hmm... ok... continue...) then things quickly become messy. And believe me, working with a messy, large JavaScript codebase is a lot more painful than a messy, large Java codebase. So it is a good idea to separate the business logic from the controller logic. In AngularJS we can organize application logic into modules and make them work together using dependency injection.

Let's see how to create a module in AngularJS:

    var myModule = angular.module('moduleName', ['dependency1', 'dependency2']);

We create a module with the angular.module() function, passing the module name and a list of dependencies, if there are any. Once we define a module, we can get a handle to it as follows:

    var myModule = angular.module('moduleName');

Observe that there is no second argument here, which means we are getting a reference to an already defined module. If you include the second argument, which is an array, you are defining a new module.

Once we define a module we can create controllers on it as follows:

    module.controller('ControllerName', ['dependency1', 'dependency2',
        function(dependency1, dependency2) {
            //logic
    }]);

For example, let's see how to create a TodoController:

    var myApp = angular.module('myApp', ['ngRoute']);

    myApp.controller('TodoController', ['$scope', '$http', function($scope, $http) {
        //logic
    }]);

Here we are creating TodoController and providing $scope and $http as dependencies, which are built-in AngularJS services. We can also create the same controller as follows:

    myApp.controller('TodoController', function($scope, $http) {
        //logic
    });

Observe that we are directly passing a function as the second argument, instead of an array that lists the dependency names followed by a function taking those same dependencies as arguments; it works exactly the same as the array-based declaration. So why do more typing when both do the same thing? AngularJS injects dependencies by name: when you declare $http as a dependency, AngularJS looks for a registered service with the name '$http'. But the majority of real-world applications use JavaScript minification tools to reduce download size, and those tools may rename your variables to short variable names.
For example:

    myApp.controller('TodoController', function($scope, $http) {
        //logic
    });

might be minified into:

    myApp.controller('TodoController', function($s, $h) {
        //logic
    });

AngularJS would then try to look for registered services named $s and $h instead of $scope and $http, and would eventually fail. To overcome this issue we declare the names of the services as string literals in the array and use the same names as function arguments. Even after the minifier shortens the function argument names, the string literals remain the same, so AngularJS picks the right services to inject. That means you can write the controller as follows:

    myApp.controller('TodoController', ['$scope', '$http', function($s, $h) {
        //here $s represents the $scope service and $h represents the $http service
    }]);

So always prefer the array-based dependency approach.

Ok, now we know how to create controllers. Let's see how we can add some functionality to them:

    myApp.controller('TodoController', ['$scope', '$http', function($scope, $http) {
        var todoCtrl = this;
        todoCtrl.todos = [];
        todoCtrl.loadTodos = function() {
            $http.get('/todos.json').success(function(data) {
                todoCtrl.todos = data;
            }).error(function() {
                alert('Error in loading Todos');
            });
        };
        todoCtrl.loadTodos();
    }]);

Here in our TodoController we define a todos variable, which initially holds an empty array, and a loadTodos() function, which loads todos from a RESTful service using $http.get() and, once the response is received, assigns the result to our todos variable. Simple and straightforward.

Why can't we directly assign the response of $http.get() to our todos variable, like todoCtrl.todos = $http.get('/todos.json');? Because $http.get('/todos.json') returns a promise, not the actual response data, so you have to get the data in the success handler function. Note also that if you want to perform any logic after receiving data from $http.get(), you should put that logic inside the success handler function only. For example, if you are deleting a Todo item and then reloading the todos, you should NOT do it as follows:

    $http.delete('/todos.json/1').success(function(data) {
        //hurray, deleted
    }).error(function() {
        alert('Error in deleting Todo');
    });
    todoCtrl.loadTodos();

You might assume that after the delete is done it will call loadTodos() and the deleted Todo item won't show up, but it doesn't work like that: the calls are asynchronous, so loadTodos() may run before the delete has completed on the server. You should do it as follows:

    $http.delete('/todos.json/1').success(function(data) {
        //hurray, deleted
        todoCtrl.loadTodos();
    }).error(function() {
        alert('Error in deleting Todo');
    });
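The success()/error() helpers used above are AngularJS conveniences; $http calls also return a standard promise, so the same sequencing can be expressed with then(). A minimal sketch, reusing the URLs and the todoCtrl shape from the example above:

    $http.delete('/todos.json/1')
        .then(function(response) {
            // the delete is confirmed by the server, now it is safe to reload
            todoCtrl.loadTodos();
        }, function(response) {
            alert('Error in deleting Todo');
        });

Chaining with then() also makes it easy to sequence further steps, since each then() call returns a new promise.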
Let's move on to creating AngularJS services. Creating a service is similar to creating a controller, but AngularJS provides three ways to do it: using module.factory(), using module.service(), and using module.provider().

Using module.factory()

We can create a service using module.factory() as follows:

    angular.module('myApp')
        .factory('UserService', ['$http', function($http) {
            var service = {
                user: {},
                login: function(email, pwd) {
                    $http.get('/auth', { username: email, password: pwd })
                        .success(function(data) {
                            service.user = data;
                        });
                },
                register: function(newuser) {
                    return $http.post('/users', newuser);
                }
            };
            return service;
        }]);

Using module.service()

We can create a service using module.service() as follows:

    angular.module('myApp')
        .service('UserService', ['$http', function($http) {
            var service = this;
            this.user = {};
            this.login = function(email, pwd) {
                $http.get('/auth', { username: email, password: pwd })
                    .success(function(data) {
                        service.user = data;
                    });
            };
            this.register = function(newuser) {
                return $http.post('/users', newuser);
            };
        }]);

Using module.provider()

We can create a service using module.provider() as follows; a provider is a constructor function whose $get factory builds the actual service instance:

    angular.module('myApp')
        .provider('UserService', function() {
            this.$get = ['$http', function($http) {
                var service = {
                    user: {},
                    login: function(email, pwd) {
                        $http.get('/auth', { username: email, password: pwd })
                            .success(function(data) {
                                service.user = data;
                            });
                    },
                    register: function(newuser) {
                        return $http.post('/users', newuser);
                    }
                };
                return service;
            }];
        });

You can find good documentation on which method is appropriate in which scenario at http://www.ng-newsletter.com/advent2013/#!/day/1.

Let us create a TodoService in our services.js file as follows:

    var myApp = angular.module('myApp');

    myApp.factory('TodoService', function($http) {
        return {
            loadTodos: function() {
                return $http.get('todos');
            },
            createTodo: function(todo) {
                return $http.post('todos', todo);
            },
            deleteTodo: function(id) {
                return $http.delete('todos/' + id);
            }
        };
    });

Now inject our TodoService into our TodoController as follows:

    myApp.controller('TodoController', ['$scope', 'TodoService', function($scope, TodoService) {
        $scope.newTodo = {};

        $scope.loadTodos = function() {
            TodoService.loadTodos()
                .success(function(data, status, headers, config) {
                    $scope.todos = data;
                })
                .error(function(data, status, headers, config) {
                    alert('Error loading Todos');
                });
        };

        $scope.addTodo = function() {
            TodoService.createTodo($scope.newTodo)
                .success(function(data, status, headers, config) {
                    $scope.newTodo = {};
                    $scope.loadTodos();
                })
                .error(function(data, status, headers, config) {
                    alert('Error saving Todo');
                });
        };

        $scope.deleteTodo = function(todo) {
            TodoService.deleteTodo(todo.id)
                .success(function(data, status, headers, config) {
                    $scope.loadTodos();
                })
                .error(function(data, status, headers, config) {
                    alert('Error deleting Todo');
                });
        };

        $scope.loadTodos();
    }]);

Now we have separated the controller logic and the business logic using AngularJS controllers and services, and made them work together using dependency injection.

At the beginning of the post I said we would develop a multi-screen application demonstrating ngRoute functionality. In addition to Todos, let us add a PhoneBook feature to our application, where we can maintain a list of contacts. First, let us build the back-end functionality for the PhoneBook REST services: create the Person JPA entity, its Spring Data JPA repository, and a controller.
@Entity
public class Person implements Serializable {

    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Integer id;
    private String email;
    private String password;
    private String firstname;
    private String lastname;
    @Temporal(TemporalType.DATE)
    private Date dob;

    //setters and getters
}

public interface PersonRepository extends JpaRepository<Person, Integer> {
}

@RestController
@RequestMapping("/contacts")
public class ContactController {

    @Autowired
    private PersonRepository personRepository;

    @RequestMapping("")
    public List<Person> persons() {
        return personRepository.findAll();
    }
}

Now let us create the AngularJS service and controller for contacts. Observe that this time we use the module.service() approach:

    myApp.service('ContactService', ['$http', function($http) {
        this.getContacts = function() {
            var promise = $http.get('contacts')
                .then(function(response) {
                    return response.data;
                }, function(response) {
                    alert('error');
                });
            return promise;
        };
    }]);

    myApp.controller('ContactController', ['$scope', 'ContactService', function($scope, ContactService) {
        ContactService.getContacts().then(function(data) {
            $scope.contacts = data;
        });
    }]);

Now we need to configure our application routes in the app.js file:

    var myApp = angular.module('myApp', ['ngRoute']);

    myApp.config(['$routeProvider', '$locationProvider', function($routeProvider, $locationProvider) {
        $routeProvider
            .when('/home', {
                templateUrl: 'templates/home.html',
                controller: 'HomeController'
            })
            .when('/contacts', {
                templateUrl: 'templates/contacts.html',
                controller: 'ContactController'
            })
            .when('/todos', {
                templateUrl: 'templates/todos.html',
                controller: 'TodoController'
            })
            .otherwise({
                redirectTo: '/home'
            });
    }]);

Here we have configured our application routes on $routeProvider inside the myApp.config() function. When the URL matches one of the routes, the corresponding template content is rendered in the <div ng-view></div> element of index.html. If the URL doesn't match any of the configured routes, it is redirected to '/home', as specified in the otherwise() configuration. Our templates/home.html won't have anything for now, and the templates/todos.html file is the same as home.html from the previous post. The new templates/contacts.html just has a table listing the contacts:

    <table class="table table-striped table-bordered table-hover">
        <thead>
            <tr>
                <th>Name</th>
                <th>Email</th>
            </tr>
        </thead>
        <tbody>
            <tr ng-repeat="contact in contacts">
                <td>{{contact.firstname + ' ' + (contact.lastname || '')}}</td>
                <td>{{contact.email}}</td>
            </tr>
        </tbody>
    </table>

Now let us create navigation links to the Todos and Contacts pages in the <body> of our index.html page:

    <div class="container">
        <div class="row">
            <div class="col-md-3 sidebar">
                <div class="list-group">
                    <a href="#home" class="list-group-item">
                        <i class="fa fa-home fa-lg"></i> Home
                    </a>
                    <a href="#contacts" class="list-group-item">
                        <i class="fa fa-user fa-lg"></i> Contacts
                    </a>
                    <a href="#todos" class="list-group-item">
                        <i class="fa fa-indent fa-lg"></i> ToDos
                    </a>
                </div>
            </div>
            <div class="col-md-9 col-md-offset-3">
                <div ng-view></div>
            </div>
        </div>
    </div>

By now we have a multi-screen application, and we have seen how to use modules, controllers and services. You can find the code for this article at https://github.com/sivaprasadreddy/angularjs-samples/tree/master/angularjs-series/angularjs-part2. Our next article will be on how to use $resource instead of $http to consume REST services.
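As a small teaser, a Todo service built on $resource might look like the sketch below. This assumes the ngResource module is available and registered as a dependency; it is not code from this article's repository:

    var myApp = angular.module('myApp', ['ngRoute', 'ngResource']);

    myApp.factory('Todo', ['$resource', function($resource) {
        // $resource wraps a URL template with ready-made CRUD actions:
        // query(), get(), save() and delete()
        return $resource('todos/:id', { id: '@id' });
    }]);

With that in place, a controller could call Todo.query() to list todos, Todo.save(newTodo) to create one, and Todo.delete({ id: todo.id }) to remove one, without writing any $http plumbing itself.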
We will also update our application to use the more powerful ui-router module instead of ngRoute. Stay tuned!

Reference: AngularJS: Introducing modules, controllers, services from our JCG partner Siva Reddy at the My Experiments on Technology blog.

Spring Batch Tutorial with Spring Boot and Java Configuration

I've been working on migrating some batch jobs for Podcastpedia.org to Spring Batch. Before, these jobs were developed in my own kind of way, and I thought it was high time to use a more "standardized" approach. Because I had never used Spring with Java configuration before, I thought this was a good opportunity to learn about it by configuring the Spring Batch jobs in Java. And since I am all into trying new things with Spring, why not also throw Spring Boot into the boat...

Note: Before you begin with this tutorial I recommend you first read Spring's Getting started – Creating a Batch Service, because the structure and the code presented here build on that original.

1. What I'll build

So, as mentioned, in this post I will present Spring Batch in the context of configuring it and developing some batch jobs with it for Podcastpedia.org. Here's a short description of the two jobs that are currently part of the Podcastpedia-batch project:

- addNewPodcastJob – reads podcast metadata (feed url, identifier, categories etc.) from a flat file, transforms the data (parses and prepares episodes to be inserted, with the Apache HTTP client) and, in the last step, inserts it into the Podcastpedia database and informs the submitter via email about it
- notifyEmailSubscribersJob – people can subscribe to their favorite podcasts on Podcastpedia.org via email. For those who did, it checks on a regular basis (DAILY, WEEKLY, MONTHLY) whether new episodes are available, and if they are, the subscribers are informed about them via email; it reads from the database, expands the read data via JPA, re-groups it and notifies the subscribers via email

Source code: The source code for this tutorial is available on GitHub – Podcastpedia-batch.

Note: Before you start I also highly recommend you read the Domain Language of Batch, so that terms like "Jobs", "Steps" or "ItemReaders" don't sound strange to you.

2. What you'll need

- A favorite text editor or IDE
- JDK 1.7 or later
- Maven 3.0+

3. Set up the project

The project is built with Maven. It uses Spring Boot, which makes it easy to create stand-alone, Spring-based applications that you can "just run". You can learn more about Spring Boot by visiting the project's website.
3.1. Maven build file

Because it uses Spring Boot, the project has spring-boot-starter-parent as its parent, plus a couple of other Spring Boot starters that pull in the libraries required in the project.

pom.xml of the podcastpedia-batch project:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.podcastpedia.batch</groupId>
    <artifactId>podcastpedia-batch</artifactId>
    <version>0.1.0</version>

    <properties>
        <spring.boot.version>1.1.6.RELEASE</spring.boot.version>
        <java.version>1.7</java.version>
    </properties>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.1.6.RELEASE</version>
    </parent>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-batch</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpclient</artifactId>
            <version>4.3.5</version>
        </dependency>
        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpcore</artifactId>
            <version>4.3.2</version>
        </dependency>
        <!-- velocity -->
        <dependency>
            <groupId>org.apache.velocity</groupId>
            <artifactId>velocity</artifactId>
            <version>1.7</version>
        </dependency>
        <dependency>
            <groupId>org.apache.velocity</groupId>
            <artifactId>velocity-tools</artifactId>
            <version>2.0</version>
            <exclusions>
                <exclusion>
                    <groupId>org.apache.struts</groupId>
                    <artifactId>struts-core</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <!-- Project rome rss, atom -->
        <dependency>
            <groupId>rome</groupId>
            <artifactId>rome</artifactId>
            <version>1.0</version>
        </dependency>
        <!-- optional fetcher -->
        <dependency>
            <groupId>rome</groupId>
            <artifactId>rome-fetcher</artifactId>
            <version>1.0</version>
        </dependency>
        <dependency>
            <groupId>org.jdom</groupId>
            <artifactId>jdom</artifactId>
            <version>1.1</version>
        </dependency>
        <!-- PID 1 -->
        <dependency>
            <groupId>xerces</groupId>
            <artifactId>xercesImpl</artifactId>
            <version>2.9.1</version>
        </dependency>
        <!-- MySQL JDBC connector -->
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.31</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-freemarker</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-remote-shell</artifactId>
            <exclusions>
                <exclusion>
                    <groupId>javax.mail</groupId>
                    <artifactId>mail</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>javax.mail</groupId>
            <artifactId>mail</artifactId>
            <version>1.4.7</version>
        </dependency>
        <dependency>
            <groupId>javax.inject</groupId>
            <artifactId>javax.inject</artifactId>
            <version>1</version>
        </dependency>
        <dependency>
            <groupId>org.twitter4j</groupId>
            <artifactId>twitter4j-core</artifactId>
            <version>[4.0,)</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
            </plugin>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>

Note: One big advantage of using spring-boot-starter-parent as the project's parent is that you only have to upgrade the version of the parent and it will pull in the "latest" libraries for you. When I started the project, Spring Boot was at version 1.1.3.RELEASE; by the time I finished writing this post it was already at 1.1.6.RELEASE.

3.2. Project directory structure

I structured the project in the following way:

└── src
    └── main
        └── java
            └── org
                └── podcastpedia
                    └── batch
                        ├── common
                        └── jobs
                            ├── addpodcast
                            └── notifysubscribers

Note:

- the org.podcastpedia.batch.jobs package contains sub-packages with classes specific to particular jobs
- the org.podcastpedia.batch.common package contains classes used by all the jobs, for example the JPA entities that both of the current jobs require

4. Create a batch Job configuration

I will start by presenting the Java configuration class for the first batch job:

Batch Job configuration:

package org.podcastpedia.batch.jobs.addpodcast;

import org.podcastpedia.batch.common.configuration.DatabaseAccessConfiguration;
import org.podcastpedia.batch.common.listeners.LogProcessListener;
import org.podcastpedia.batch.common.listeners.ProtocolListener;
import org.podcastpedia.batch.jobs.addpodcast.model.SuggestedPodcast;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.file.FlatFileItemReader;
import org.springframework.batch.item.file.LineMapper;
import org.springframework.batch.item.file.mapping.BeanWrapperFieldSetMapper;
import org.springframework.batch.item.file.mapping.DefaultLineMapper;
import org.springframework.batch.item.file.transform.DelimitedLineTokenizer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;
import org.springframework.core.io.ClassPathResource;

import com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException;

@Configuration
@EnableBatchProcessing
@Import({DatabaseAccessConfiguration.class, ServicesConfiguration.class})
public class AddPodcastJobConfiguration {

    @Autowired
    private JobBuilderFactory jobs;

    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    // tag::jobstep[]
    @Bean
    public Job addNewPodcastJob() {
        return jobs.get("addNewPodcastJob")
            .listener(protocolListener())
            .start(step())
            .build();
    }

    @Bean
    public Step step() {
        return stepBuilderFactory.get("step")
            .<SuggestedPodcast, SuggestedPodcast>chunk(1) //important to be one in this case to commit after every line read
            .reader(reader())
            .processor(processor())
            .writer(writer())
            .listener(logProcessListener())
            .faultTolerant()
            .skipLimit(10) //default is set to 0
            .skip(MySQLIntegrityConstraintViolationException.class)
            .build();
    }
    // end::jobstep[]

    // tag::readerwriterprocessor[]
    @Bean
    public ItemReader<SuggestedPodcast> reader() {
        FlatFileItemReader<SuggestedPodcast> reader = new FlatFileItemReader<SuggestedPodcast>();
        reader.setLinesToSkip(1); //first line is title definition
        reader.setResource(new ClassPathResource("suggested-podcasts.in"));
        reader.setLineMapper(lineMapper());
        return reader;
    }

    @Bean
    public LineMapper<SuggestedPodcast> lineMapper() {
        DefaultLineMapper<SuggestedPodcast> lineMapper = new DefaultLineMapper<SuggestedPodcast>();

        DelimitedLineTokenizer lineTokenizer = new DelimitedLineTokenizer();
        lineTokenizer.setDelimiter(";");
        lineTokenizer.setStrict(false);
        lineTokenizer.setNames(new String[]{"FEED_URL", "IDENTIFIER_ON_PODCASTPEDIA", "CATEGORIES", "LANGUAGE", "MEDIA_TYPE", "UPDATE_FREQUENCY", "KEYWORDS", "FB_PAGE", "TWITTER_PAGE", "GPLUS_PAGE", "NAME_SUBMITTER", "EMAIL_SUBMITTER"});

        BeanWrapperFieldSetMapper<SuggestedPodcast> fieldSetMapper = new BeanWrapperFieldSetMapper<SuggestedPodcast>();
        fieldSetMapper.setTargetType(SuggestedPodcast.class);

        lineMapper.setLineTokenizer(lineTokenizer);
        lineMapper.setFieldSetMapper(suggestedPodcastFieldSetMapper());
        return lineMapper;
    }

    @Bean
    public SuggestedPodcastFieldSetMapper suggestedPodcastFieldSetMapper() {
        return new SuggestedPodcastFieldSetMapper();
    }

    /** configure the processor related stuff */
    @Bean
    public ItemProcessor<SuggestedPodcast, SuggestedPodcast> processor() {
        return new SuggestedPodcastItemProcessor();
    }

    @Bean
    public ItemWriter<SuggestedPodcast> writer() {
        return new Writer();
    }
    // end::readerwriterprocessor[]

    @Bean
    public ProtocolListener protocolListener() {
        return new ProtocolListener();
    }

    @Bean
    public LogProcessListener logProcessListener() {
        return new LogProcessListener();
    }
}

The @EnableBatchProcessing annotation adds many critical beans that support jobs, saving us configuration work. For example, you will also be able to @Autowired some useful beans into your context:

- a JobRepository (bean name "jobRepository")
- a JobLauncher (bean name "jobLauncher")
- a JobRegistry (bean name "jobRegistry")
- a PlatformTransactionManager (bean name "transactionManager")
- a JobBuilderFactory (bean name "jobBuilders"), a convenience that saves you from having to inject the job repository into every job, as in the examples above
- a StepBuilderFactory (bean name "stepBuilders"), a convenience that saves you from having to inject the job repository and transaction manager into every step

The first part focuses on the actual job configuration:

Batch Job and Step configuration:

@Bean
public Job addNewPodcastJob() {
    return jobs.get("addNewPodcastJob")
        .listener(protocolListener())
        .start(step())
        .build();
}

@Bean
public Step step() {
    return stepBuilderFactory.get("step")
        .<SuggestedPodcast, SuggestedPodcast>chunk(1) //important to be one in this case to commit after every line read
        .reader(reader())
        .processor(processor())
        .writer(writer())
        .listener(logProcessListener())
        .faultTolerant()
        .skipLimit(10) //default is set to 0
        .skip(MySQLIntegrityConstraintViolationException.class)
        .build();
}

The first method defines a job and the second one defines a single step. As you've read in The Domain Language of Batch, jobs are built from steps, where each step can involve a reader, a processor, and a writer. In the step definition, you define how much data to write at a time (in our case, one record at a time). Next you specify the reader, processor and writer.
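The ProtocolListener and LogProcessListener wired into the job and step above live in the project's common listeners package and are not listed in this post. Purely as an illustration of the shape such a class takes, here is a hypothetical job-level listener implementing Spring Batch's JobExecutionListener interface; this is not the actual Podcastpedia implementation:

package org.podcastpedia.batch.common.listeners;

import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobExecutionListener;

public class ProtocolListener implements JobExecutionListener {

    @Override
    public void beforeJob(JobExecution jobExecution) {
        // called once, before the first step runs
    }

    @Override
    public void afterJob(JobExecution jobExecution) {
        // called after the job finishes; a typical use is logging a
        // "protocol" of the run: status, start/end times, step counts
        System.out.println("Job " + jobExecution.getJobInstance().getJobName()
            + " finished with status " + jobExecution.getStatus());
    }
}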
5. Spring Batch processing units

Most batch processing can be described as reading data, doing some transformation on it and then writing the result out. This mirrors the Extract, Transform, Load (ETL) process, if you are familiar with it. Spring Batch provides three key interfaces to help perform bulk reading and writing: ItemReader, ItemProcessor and ItemWriter.

5.1. Readers

ItemReader is an abstraction providing the means to retrieve data from many different types of input – flat files, XML files, databases, JMS etc. – one item at a time. See Appendix A. List of ItemReaders and ItemWriters for a complete list of the available item readers. In the Podcastpedia batch jobs I use the following specialized ItemReaders:

5.1.1. FlatFileItemReader

The FlatFileItemReader, as the name implies, reads lines of data from a flat file, where each line typically describes a record with fields defined by fixed positions in the file or delimited by some special character (e.g. a comma). This type of ItemReader is used in the first batch job, addNewPodcastJob. The input file, named suggested-podcasts.in, resides on the classpath (src/main/resources) and looks something like the following:

Input file for the FlatFileItemReader:

FEED_URL; IDENTIFIER_ON_PODCASTPEDIA; CATEGORIES; LANGUAGE; MEDIA_TYPE; UPDATE_FREQUENCY; KEYWORDS; FB_PAGE; TWITTER_PAGE; GPLUS_PAGE; NAME_SUBMITTER; EMAIL_SUBMITTER
http://www.5minutebiographies.com/feed/; 5minutebiographies; people_society, history; en; Audio; WEEKLY; biography, biographies, short biography, short biographies, 5 minute biographies, five minute biographies, 5 minute biography, five minute biography; https://www.facebook.com/5minutebiographies; https://twitter.com/5MinuteBios; ; Adrian Matei; adrianmatei@gmail.com
http://notanotherpodcast.libsyn.com/rss; NotAnotherPodcast; entertainment; en; Audio; WEEKLY; Comedy, Sports, Cinema, Movies, Pop Culture, Food, Games; https://www.facebook.com/notanotherpodcastusa; https://twitter.com/NAPodcastUSA; https://plus.google.com/u/0/103089891373760354121/posts; Adrian Matei; adrianmatei@gmail.com

As you can see, the first line defines the names of the "columns", and the following lines contain the actual data (delimited by ";"), which needs translating to domain objects relevant in the context. Let's see now how to configure the FlatFileItemReader:

FlatFileItemReader example:

@Bean
public ItemReader<SuggestedPodcast> reader() {
    FlatFileItemReader<SuggestedPodcast> reader = new FlatFileItemReader<SuggestedPodcast>();
    reader.setLinesToSkip(1); //first line is title definition
    reader.setResource(new ClassPathResource("suggested-podcasts.in"));
    reader.setLineMapper(lineMapper());
    return reader;
}

You can specify, among other things, the input resource, the number of lines to skip, and a line mapper.
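The SuggestedPodcast item type itself is not listed in the post. Based on the fields mapped later in SuggestedPodcastFieldSetMapper, a plausible sketch of it would be the following (the field names are inferred, not copied from the repository):

public class SuggestedPodcast {

    private String categories;
    private String email; // email of the submitter
    private String name;  // name of the submitter
    private String tags;
    private Podcast podcast; // the entity that will later be persisted

    public String getCategories() { return categories; }
    public void setCategories(String categories) { this.categories = categories; }
    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getTags() { return tags; }
    public void setTags(String tags) { this.tags = tags; }
    public Podcast getPodcast() { return podcast; }
    public void setPodcast(Podcast podcast) { this.podcast = podcast; }
}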
5.1.1.1. LineMapper

The LineMapper is an interface for mapping lines (strings) to domain objects, typically used to map lines read from a file to domain objects on a per-line basis. For the Podcastpedia job I used the DefaultLineMapper, which is a two-phase implementation: tokenization of the line into a FieldSet, followed by mapping the FieldSet to an item:

LineMapper default implementation example:

@Bean
public LineMapper<SuggestedPodcast> lineMapper() {
    DefaultLineMapper<SuggestedPodcast> lineMapper = new DefaultLineMapper<SuggestedPodcast>();

    DelimitedLineTokenizer lineTokenizer = new DelimitedLineTokenizer();
    lineTokenizer.setDelimiter(";");
    lineTokenizer.setStrict(false);
    lineTokenizer.setNames(new String[]{"FEED_URL", "IDENTIFIER_ON_PODCASTPEDIA", "CATEGORIES", "LANGUAGE", "MEDIA_TYPE", "UPDATE_FREQUENCY", "KEYWORDS", "FB_PAGE", "TWITTER_PAGE", "GPLUS_PAGE", "NAME_SUBMITTER", "EMAIL_SUBMITTER"});

    BeanWrapperFieldSetMapper<SuggestedPodcast> fieldSetMapper = new BeanWrapperFieldSetMapper<SuggestedPodcast>();
    fieldSetMapper.setTargetType(SuggestedPodcast.class);

    lineMapper.setLineTokenizer(lineTokenizer);
    lineMapper.setFieldSetMapper(suggestedPodcastFieldSetMapper());
    return lineMapper;
}

- the DelimitedLineTokenizer splits the input String on the ";" delimiter
- if you set the strict flag to false, lines with fewer tokens are tolerated and padded with empty columns, and lines with more tokens are simply truncated
- the column names from the first line are set with lineTokenizer.setNames(...), and the field set mapper is set with lineMapper.setFieldSetMapper(...)

Note: The FieldSet is an "interface used by flat file input sources to encapsulate concerns of converting an array of Strings to Java native types. A bit like the role played by ResultSet in JDBC, clients will know the name or position of strongly typed fields that they want to extract."

5.1.1.2. FieldSetMapper

The FieldSetMapper is an interface that is used to map data obtained from a FieldSet into an object. Here's my implementation, which maps the FieldSet to the SuggestedPodcast domain object that will be passed on to the processor:

FieldSetMapper implementation:

public class SuggestedPodcastFieldSetMapper implements FieldSetMapper<SuggestedPodcast> {

    @Override
    public SuggestedPodcast mapFieldSet(FieldSet fieldSet) throws BindException {
        SuggestedPodcast suggestedPodcast = new SuggestedPodcast();
        suggestedPodcast.setCategories(fieldSet.readString("CATEGORIES"));
        suggestedPodcast.setEmail(fieldSet.readString("EMAIL_SUBMITTER"));
        suggestedPodcast.setName(fieldSet.readString("NAME_SUBMITTER"));
        suggestedPodcast.setTags(fieldSet.readString("KEYWORDS"));

        //some of the attributes we can map directly into the Podcast entity that we'll insert later into the database
        Podcast podcast = new Podcast();
        podcast.setUrl(fieldSet.readString("FEED_URL"));
        podcast.setIdentifier(fieldSet.readString("IDENTIFIER_ON_PODCASTPEDIA"));
        podcast.setLanguageCode(LanguageCode.valueOf(fieldSet.readString("LANGUAGE")));
        podcast.setMediaType(MediaType.valueOf(fieldSet.readString("MEDIA_TYPE")));
        podcast.setUpdateFrequency(UpdateFrequency.valueOf(fieldSet.readString("UPDATE_FREQUENCY")));
        podcast.setFbPage(fieldSet.readString("FB_PAGE"));
        podcast.setTwitterPage(fieldSet.readString("TWITTER_PAGE"));
        podcast.setGplusPage(fieldSet.readString("GPLUS_PAGE"));
        suggestedPodcast.setPodcast(podcast);

        return suggestedPodcast;
    }
}

5.1.2. JdbcCursorItemReader

In the second job, notifyEmailSubscribersJob, the reader only reads email subscribers from a single database table; further on, in the processor, a more detailed read (via JPA) is executed to retrieve all the new episodes of the podcasts the user subscribed to. This is a common pattern employed in the batch world.
Follow this link for more Common Batch Patterns. For the initial read I chose the JdbcCursorItemReader, a simple reader implementation that opens a JDBC cursor and continually retrieves the next row in the ResultSet:

JdbcCursorItemReader example:

@Bean
public ItemReader<User> notifySubscribersReader() {
    JdbcCursorItemReader<User> reader = new JdbcCursorItemReader<User>();
    String sql = "select * from users where is_email_subscriber is not null";
    reader.setSql(sql);
    reader.setDataSource(dataSource);
    reader.setRowMapper(rowMapper());
    return reader;
}

Note that I had to set the SQL, the data source to read from, and a RowMapper.

5.1.2.1. RowMapper

The RowMapper is an interface used by JdbcTemplate for mapping rows of a ResultSet on a per-row basis. My implementation of this interface, UserRowMapper, performs the actual work of mapping each row to a result object, without my having to worry about exception handling:

RowMapper implementation:

public class UserRowMapper implements RowMapper<User> {

    @Override
    public User mapRow(ResultSet rs, int rowNum) throws SQLException {
        User user = new User();
        user.setEmail(rs.getString("email"));
        return user;
    }
}
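When the mapping is a straight column-to-property name match like this one, a hand-rolled RowMapper can also be swapped for Spring's stock BeanPropertyRowMapper. A sketch, assuming the users table exposes an email column matching the User.email property:

import org.springframework.jdbc.core.BeanPropertyRowMapper;

// instead of reader.setRowMapper(rowMapper());
reader.setRowMapper(new BeanPropertyRowMapper<User>(User.class));

This trades a little reflection overhead for less code; the explicit UserRowMapper above remains the better choice when the mapping is not a plain name match.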
5.2. Writers

ItemWriter is an abstraction that represents the output of a Step, one batch or chunk of items at a time. Generally, an item writer has no knowledge of the input it will receive next, only the item passed in its current invocation. The writers for the two jobs presented here are quite simple: they just use external services to send email notifications and post tweets on Podcastpedia's account. Here is the implementation of the ItemWriter for the first job, addNewPodcastJob:

Writer implementation of ItemWriter:

package org.podcastpedia.batch.jobs.addpodcast;

import java.util.Date;
import java.util.List;

import javax.inject.Inject;
import javax.persistence.EntityManager;

import org.podcastpedia.batch.common.entities.Podcast;
import org.podcastpedia.batch.jobs.addpodcast.model.SuggestedPodcast;
import org.podcastpedia.batch.jobs.addpodcast.service.EmailNotificationService;
import org.podcastpedia.batch.jobs.addpodcast.service.SocialMediaService;
import org.springframework.batch.item.ItemWriter;
import org.springframework.beans.factory.annotation.Autowired;

public class Writer implements ItemWriter<SuggestedPodcast> {

    @Autowired
    private EntityManager entityManager;

    @Inject
    private EmailNotificationService emailNotificationService;

    @Inject
    private SocialMediaService socialMediaService;

    @Override
    public void write(List<? extends SuggestedPodcast> items) throws Exception {
        if (items.get(0) != null) {
            SuggestedPodcast suggestedPodcast = items.get(0);

            //first insert the data in the database
            Podcast podcast = suggestedPodcast.getPodcast();
            podcast.setInsertionDate(new Date());
            entityManager.persist(podcast);
            entityManager.flush();

            //notify the submitter about the insertion and post a tweet about it
            String url = buildUrlOnPodcastpedia(podcast);
            emailNotificationService.sendPodcastAdditionConfirmation(
                suggestedPodcast.getName(), suggestedPodcast.getEmail(), url);
            if (podcast.getTwitterPage() != null) {
                socialMediaService.postOnTwitterAboutNewPodcast(podcast, url);
            }
        }
    }

    private String buildUrlOnPodcastpedia(Podcast podcast) {
        StringBuffer urlOnPodcastpedia = new StringBuffer("http://www.podcastpedia.org");
        if (podcast.getIdentifier() != null) {
            urlOnPodcastpedia.append("/" + podcast.getIdentifier());
        } else {
            urlOnPodcastpedia.append("/podcasts/");
            urlOnPodcastpedia.append(String.valueOf(podcast.getPodcastId()));
            urlOnPodcastpedia.append("/" + podcast.getTitleInUrl());
        }
        String url = urlOnPodcastpedia.toString();
        return url;
    }
}

As you can see there's nothing special here, except that the write method has to be overridden: this is where the injected external services EmailNotificationService and SocialMediaService are used to inform the podcast submitter via email about the addition to the podcast directory and, if a Twitter page was submitted, to post a tweet on Podcastpedia's wall. You can find detailed explanations of how to send email via Velocity and how to post on Twitter from Java in the following posts:

- How to compose html emails in Java with Spring and Velocity
- How to post to Twitter from Java with Twitter4J in 10 minutes

5.3. Processors

ItemProcessor is an abstraction that represents the business processing of an item. While the ItemReader reads one item and the ItemWriter writes them, the ItemProcessor provides access to transform or apply other business processing. When writing your own processors you have to implement the ItemProcessor<I,O> interface with its only method, O process(I item) throws Exception, returning a potentially modified or new item for continued processing. If the returned result is null, it is assumed that processing of the item should not continue.
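To make that null-filtering contract concrete, here is a minimal, hypothetical processor (not taken from the Podcastpedia code) that drops invalid items:

public class ValidPodcastOnlyProcessor implements ItemProcessor<SuggestedPodcast, SuggestedPodcast> {

    @Override
    public SuggestedPodcast process(SuggestedPodcast item) throws Exception {
        // returning null filters the item out: it is silently skipped
        // and never reaches the writer
        if (item.getPodcast() == null || item.getPodcast().getUrl() == null) {
            return null;
        }
        return item;
    }
}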
The processor of the first job requires a bit more logic, because I have to set the etag and last-modified header attributes, the feed attributes, and the episodes, categories and keywords of the podcast:

ItemProcessor implementation for the job addNewPodcast:

public class SuggestedPodcastItemProcessor implements ItemProcessor<SuggestedPodcast, SuggestedPodcast> {

    private static final int TIMEOUT = 10;

    @Autowired
    ReadDao readDao;

    @Autowired
    PodcastAndEpisodeAttributesService podcastAndEpisodeAttributesService;

    @Autowired
    private PoolingHttpClientConnectionManager poolingHttpClientConnectionManager;

    @Autowired
    private SyndFeedService syndFeedService;

    /**
     * Method used to build the categories, tags and episodes of the podcast
     */
    @Override
    public SuggestedPodcast process(SuggestedPodcast item) throws Exception {
        if (isPodcastAlreadyInTheDirectory(item.getPodcast().getUrl())) {
            return null;
        }

        String[] categories = item.getCategories().trim().split("\\s*,\\s*");
        item.getPodcast().setAvailability(org.apache.http.HttpStatus.SC_OK);

        //set etag and last modified attributes for the podcast
        setHeaderFieldAttributes(item.getPodcast());

        //set the other attributes of the podcast from the feed
        podcastAndEpisodeAttributesService.setPodcastFeedAttributes(item.getPodcast());

        //set the categories
        List<Category> categoriesByNames = readDao.findCategoriesByNames(categories);
        item.getPodcast().setCategories(categoriesByNames);

        //set the tags
        setTagsForPodcast(item);

        //build the episodes
        setEpisodesForPodcast(item.getPodcast());

        return item;
    }
    ......
}

The processor of the second job uses the "Driving Query" approach: I expand the data retrieved by the reader with another "JPA read", and I group the items into podcasts with episodes so that they look nice in the emails I send out to subscribers:

ItemProcessor implementation of the second job – notifySubscribers:

@Scope("step")
public class NotifySubscribersItemProcessor implements ItemProcessor<User, User> {

    @Autowired
    EntityManager em;

    @Value("#{jobParameters[updateFrequency]}")
    String updateFrequency;

    @Override
    public User process(User item) throws Exception {
        String sqlInnerJoinEpisodes = "select e from User u JOIN u.podcasts p JOIN p.episodes e WHERE u.email=?1 AND p.updateFrequency=?2 AND"
            + " e.isNew IS NOT NULL AND e.availability=200 ORDER BY e.podcast.podcastId ASC, e.publicationDate ASC";
        TypedQuery<Episode> queryInnerJoinepisodes = em.createQuery(sqlInnerJoinEpisodes, Episode.class);
        queryInnerJoinepisodes.setParameter(1, item.getEmail());
        queryInnerJoinepisodes.setParameter(2, UpdateFrequency.valueOf(updateFrequency));

        List<Episode> newEpisodes = queryInnerJoinepisodes.getResultList();
        return regroupPodcastsWithEpisodes(item, newEpisodes);
    }
    .......
}

Note: If you'd like to find out more about how to use the Apache HTTP client to get the etag and last-modified headers, have a look at my post – How to use the new Apache Http Client to make a HEAD request.
6. Execute the batch application

Batch processing can be embedded in web applications and WAR files, but I chose the simpler approach of creating a standalone application that can be started from the Java main() method:

Batch processing Java main() method:

package org.podcastpedia.batch;

//imports ...;

@ComponentScan
@EnableAutoConfiguration
public class Application {

    private static final String NEW_EPISODES_NOTIFICATION_JOB = "newEpisodesNotificationJob";
    private static final String ADD_NEW_PODCAST_JOB = "addNewPodcastJob";

    public static void main(String[] args) throws BeansException,
            JobExecutionAlreadyRunningException, JobRestartException,
            JobInstanceAlreadyCompleteException, JobParametersInvalidException,
            InterruptedException {

        Log log = LogFactory.getLog(Application.class);

        SpringApplication app = new SpringApplication(Application.class);
        app.setWebEnvironment(false);
        ConfigurableApplicationContext ctx = app.run(args);
        JobLauncher jobLauncher = ctx.getBean(JobLauncher.class);

        if (ADD_NEW_PODCAST_JOB.equals(args[0])) {
            //addNewPodcastJob
            Job addNewPodcastJob = ctx.getBean(ADD_NEW_PODCAST_JOB, Job.class);
            JobParameters jobParameters = new JobParametersBuilder()
                .addDate("date", new Date())
                .toJobParameters();

            JobExecution jobExecution = jobLauncher.run(addNewPodcastJob, jobParameters);

            BatchStatus batchStatus = jobExecution.getStatus();
            while (batchStatus.isRunning()) {
                log.info("*********** Still running.... **************");
                Thread.sleep(1000);
            }
            log.info(String.format("*********** Exit status: %s", jobExecution.getExitStatus().getExitCode()));

            JobInstance jobInstance = jobExecution.getJobInstance();
            log.info(String.format("********* Name of the job %s", jobInstance.getJobName()));
            log.info(String.format("*********** job instance Id: %d", jobInstance.getId()));

            System.exit(0);
        } else if (NEW_EPISODES_NOTIFICATION_JOB.equals(args[0])) {
            JobParameters jobParameters = new JobParametersBuilder()
                .addDate("date", new Date())
                .addString("updateFrequency", args[1])
                .toJobParameters();

            jobLauncher.run(ctx.getBean(NEW_EPISODES_NOTIFICATION_JOB, Job.class), jobParameters);
        } else {
            throw new IllegalArgumentException("Please provide a valid Job name as first application parameter");
        }

        System.exit(0);
    }
}

The best explanation of the SpringApplication, @ComponentScan and @EnableAutoConfiguration magic comes from the source – Getting Started – Creating a Batch Service:

"The main() method defers to the SpringApplication helper class, providing Application.class as an argument to its run() method. This tells Spring to read the annotation metadata from Application and to manage it as a component in the Spring application context. The @ComponentScan annotation tells Spring to search recursively through the org.podcastpedia.batch package and its children for classes marked directly or indirectly with Spring's @Component annotation. This directive ensures that Spring finds and registers BatchConfiguration, because it is marked with @Configuration, which in turn is a kind of @Component annotation. The @EnableAutoConfiguration annotation switches on reasonable default behaviors based on the content of your classpath. For example, it looks for any class that implements the CommandLineRunner interface and invokes its run() method."

Execution construction steps:

- the JobLauncher, a simple interface for controlling jobs, is retrieved from the ApplicationContext; remember this is automatically made available via the @EnableBatchProcessing annotation
- based on the first parameter of the application (args[0]), the corresponding Job is retrieved from the ApplicationContext
- then the JobParameters are prepared, using the current date – .addDate("date", new Date()) – so that the job executions are always unique
- once everything is in place, the job can be executed: JobExecution jobExecution = jobLauncher.run(addNewPodcastJob, jobParameters);
- the returned jobExecution gives access to the BatchStatus, the exit code, and the job name and id

Note: I highly recommend you read and understand the Meta-Data Schema for Spring Batch. It will also help you better understand the Spring Batch domain objects.

6.1. Running the application on dev and prod environments

To be able to run the Spring Batch / Spring Boot application on different environments I make use of the Spring profiles capability. By default, the application runs with development data (database). If I want the job to use the production database instead, I have to do the following:

- provide the environment argument -Dspring.profiles.active=prod
- have the production database properties configured in an application-prod.properties file on the classpath, right beside the default application.properties file

Summary

In this tutorial we've learned how to configure a Spring Batch project with Spring Boot and Java configuration, how to use some of the most common readers in batch processing, how to configure some simple jobs, and how to start Spring Batch jobs from a main method.

Reference: Spring Batch Tutorial with Spring Boot and Java Configuration from our JCG partner Adrian Matei at the Codingpedia.org blog.

Hibernate bytecode enhancement

Introduction

Now that you know the basics of Hibernate dirty checking, we can dig into enhanced dirty checking mechanisms. While the default graph-traversal algorithm may be sufficient for most use cases, there are times when you need an optimized dirty checking algorithm, and instrumentation is much more convenient than building your own custom strategy.

Using Ant Hibernate Tools

Traditionally, the Hibernate Tools have been focused on Ant and Eclipse. Bytecode instrumentation has been possible since Hibernate 3, but it required an Ant task to run the CGLIB or Javassist bytecode enhancement routines. Maven supports running Ant tasks through the maven-antrun-plugin:

<build>
    <plugins>
        <plugin>
            <artifactId>maven-antrun-plugin</artifactId>
            <executions>
                <execution>
                    <id>Instrument domain classes</id>
                    <configuration>
                        <tasks>
                            <taskdef name="instrument"
                                classname="org.hibernate.tool.instrument.javassist.InstrumentTask">
                                <classpath>
                                    <path refid="maven.dependency.classpath"/>
                                    <path refid="maven.plugin.classpath"/>
                                </classpath>
                            </taskdef>
                            <instrument verbose="true">
                                <fileset dir="${project.build.outputDirectory}">
                                    <include name="**/flushing/*.class"/>
                                </fileset>
                            </instrument>
                        </tasks>
                    </configuration>
                    <phase>process-classes</phase>
                    <goals>
                        <goal>run</goal>
                    </goals>
                </execution>
            </executions>
            <dependencies>
                <dependency>
                    <groupId>org.hibernate</groupId>
                    <artifactId>hibernate-core</artifactId>
                    <version>${hibernate.version}</version>
                </dependency>
                <dependency>
                    <groupId>org.javassist</groupId>
                    <artifactId>javassist</artifactId>
                    <version>${javassist.version}</version>
                </dependency>
            </dependencies>
        </plugin>
    </plugins>
</build>

So for the following entity source class:

@Entity
public class EnhancedOrderLine {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private Long number;
    private String orderedBy;
    private Date orderedOn;

    public Long getId() { return id; }
    public Long getNumber() { return number; }
    public void setNumber(Long number) { this.number = number; }
    public String getOrderedBy() { return orderedBy; }
    public void setOrderedBy(String orderedBy) { this.orderedBy = orderedBy; }
    public Date getOrderedOn() { return orderedOn; }
    public void setOrderedOn(Date orderedOn) { this.orderedOn = orderedOn; }
}

during build time the following class is generated (decompiled output):

@Entity
public class EnhancedOrderLine implements FieldHandled {

    @Id
    @GeneratedValue(strategy=GenerationType.AUTO)
    private Long id;
    private Long number;
    private String orderedBy;
    private Date orderedOn;
    private transient FieldHandler $JAVASSIST_READ_WRITE_HANDLER;

    public Long getId() { return $javassist_read_id(); }
    public Long getNumber() { return $javassist_read_number(); }
    public void setNumber(Long number) { $javassist_write_number(number); }
    public String getOrderedBy() { return $javassist_read_orderedBy(); }
    public void setOrderedBy(String orderedBy) { $javassist_write_orderedBy(orderedBy); }
    public Date getOrderedOn() { return $javassist_read_orderedOn(); }
    public void setOrderedOn(Date orderedOn) { $javassist_write_orderedOn(orderedOn); }

    public FieldHandler getFieldHandler() { return this.$JAVASSIST_READ_WRITE_HANDLER; }
    public void setFieldHandler(FieldHandler paramFieldHandler) { this.$JAVASSIST_READ_WRITE_HANDLER = paramFieldHandler; }

    public Long $javassist_read_id() {
        if (getFieldHandler() == null)
            return this.id;
    }

    public void $javassist_write_id(Long paramLong) {
        if (getFieldHandler() == null) {
            this.id = paramLong;
            return;
        }
        this.id = ((Long) getFieldHandler().writeObject(this, "id", this.id, paramLong));
    }
    public Long $javassist_read_number() {
        if (getFieldHandler() == null)
            return this.number;
    }

    public void $javassist_write_number(Long paramLong) {
        if (getFieldHandler() == null) {
            this.number = paramLong;
            return;
        }
        this.number = ((Long) getFieldHandler().writeObject(this, "number", this.number, paramLong));
    }

    public String $javassist_read_orderedBy() {
        if (getFieldHandler() == null)
            return this.orderedBy;
    }

    public void $javassist_write_orderedBy(String paramString) {
        if (getFieldHandler() == null) {
            this.orderedBy = paramString;
            return;
        }
        this.orderedBy = ((String) getFieldHandler().writeObject(this, "orderedBy", this.orderedBy, paramString));
    }

    public Date $javassist_read_orderedOn() {
        if (getFieldHandler() == null)
            return this.orderedOn;
    }

    public void $javassist_write_orderedOn(Date paramDate) {
        if (getFieldHandler() == null) {
            this.orderedOn = paramDate;
            return;
        }
        this.orderedOn = ((Date) getFieldHandler().writeObject(this, "orderedOn", this.orderedOn, paramDate));
    }
}

Although the org.hibernate.bytecode.instrumentation.spi.AbstractFieldInterceptor manages to intercept dirty fields, this information is never really consulted during dirtiness tracking. The InstrumentTask bytecode enhancement can only tell whether an entity is dirty; it lacks support for indicating which properties have been modified, which makes the InstrumentTask more suitable for the "no-proxy" LAZY fetching strategy.

hibernate-enhance-maven-plugin

Hibernate 4.2.8 added support for a dedicated Maven bytecode enhancement plugin. The Maven bytecode enhancement plugin is easy to configure:

<build>
    <plugins>
        <plugin>
            <groupId>org.hibernate.orm.tooling</groupId>
            <artifactId>hibernate-enhance-maven-plugin</artifactId>
            <executions>
                <execution>
                    <phase>compile</phase>
                    <goals>
                        <goal>enhance</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

During the project build, the following class is generated (again, decompiled output):

@Entity
public class EnhancedOrderLine implements ManagedEntity, PersistentAttributeInterceptable, SelfDirtinessTracker {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
    private Long number;
    private String orderedBy;
    private Date orderedOn;

    @Transient
    private transient PersistentAttributeInterceptor $$_hibernate_attributeInterceptor;
    @Transient
    private transient Set $$_hibernate_tracker;
    @Transient
    private transient CollectionTracker $$_hibernate_collectionTracker;
    @Transient
    private transient EntityEntry $$_hibernate_entityEntryHolder;
    @Transient
    private transient ManagedEntity $$_hibernate_previousManagedEntity;
    @Transient
    private transient ManagedEntity $$_hibernate_nextManagedEntity;

    public Long getId() { return $$_hibernate_read_id(); }
    public Long getNumber() { return $$_hibernate_read_number(); }
    public void setNumber(Long number) { $$_hibernate_write_number(number); }
    public String getOrderedBy() { return $$_hibernate_read_orderedBy(); }
    public void setOrderedBy(String orderedBy) { $$_hibernate_write_orderedBy(orderedBy); }
    public Date getOrderedOn() { return $$_hibernate_read_orderedOn(); }
    public void setOrderedOn(Date orderedOn) { $$_hibernate_write_orderedOn(orderedOn); }

    public PersistentAttributeInterceptor $$_hibernate_getInterceptor() {
        return this.$$_hibernate_attributeInterceptor;
    }

    public void $$_hibernate_setInterceptor(PersistentAttributeInterceptor paramPersistentAttributeInterceptor) {
        this.$$_hibernate_attributeInterceptor = paramPersistentAttributeInterceptor;
    }

    public void $$_hibernate_trackChange(String paramString) {
        if (this.$$_hibernate_tracker == null)
            this.$$_hibernate_tracker = new HashSet();
        if (!this.$$_hibernate_tracker.contains(paramString))
            this.$$_hibernate_tracker.add(paramString);
    }
    private boolean $$_hibernate_areCollectionFieldsDirty() {
        return ($$_hibernate_getInterceptor() != null) && (this.$$_hibernate_collectionTracker != null);
    }

    private void $$_hibernate_getCollectionFieldDirtyNames(Set paramSet) {
        if (this.$$_hibernate_collectionTracker == null)
            return;
    }

    public boolean $$_hibernate_hasDirtyAttributes() {
        return ((this.$$_hibernate_tracker == null) || (this.$$_hibernate_tracker.isEmpty())) && ($$_hibernate_areCollectionFieldsDirty());
    }

    private void $$_hibernate_clearDirtyCollectionNames() {
        if (this.$$_hibernate_collectionTracker == null)
            this.$$_hibernate_collectionTracker = new CollectionTracker();
    }

    public void $$_hibernate_clearDirtyAttributes() {
        if (this.$$_hibernate_tracker != null)
            this.$$_hibernate_tracker.clear();
        $$_hibernate_clearDirtyCollectionNames();
    }

    public Set<String> $$_hibernate_getDirtyAttributes() {
        if (this.$$_hibernate_tracker == null)
            this.$$_hibernate_tracker = new HashSet();
        $$_hibernate_getCollectionFieldDirtyNames(this.$$_hibernate_tracker);
        return this.$$_hibernate_tracker;
    }

    private Long $$_hibernate_read_id() {
        if ($$_hibernate_getInterceptor() != null)
            this.id = ((Long) $$_hibernate_getInterceptor().readObject(this, "id", this.id));
        return this.id;
    }

    private void $$_hibernate_write_id(Long paramLong) {
        if (($$_hibernate_getInterceptor() == null) || ((this.id == null) || (this.id.equals(paramLong))))
            break label39;
        $$_hibernate_trackChange("id");
        label39: Long localLong = paramLong;
        if ($$_hibernate_getInterceptor() != null)
            localLong = (Long) $$_hibernate_getInterceptor().writeObject(this, "id", this.id, paramLong);
        this.id = localLong;
    }

    private Long $$_hibernate_read_number() {
        if ($$_hibernate_getInterceptor() != null)
            this.number = ((Long) $$_hibernate_getInterceptor().readObject(this, "number", this.number));
        return this.number;
    }

    private void $$_hibernate_write_number(Long paramLong) {
        if (($$_hibernate_getInterceptor() == null) || ((this.number == null) || (this.number.equals(paramLong))))
            break label39;
        $$_hibernate_trackChange("number");
        label39: Long localLong = paramLong;
        if ($$_hibernate_getInterceptor() != null)
            localLong = (Long) $$_hibernate_getInterceptor().writeObject(this, "number", this.number, paramLong);
        this.number = localLong;
    }

    private String $$_hibernate_read_orderedBy() {
        if ($$_hibernate_getInterceptor() != null)
            this.orderedBy = ((String) $$_hibernate_getInterceptor().readObject(this, "orderedBy", this.orderedBy));
        return this.orderedBy;
    }

    private void $$_hibernate_write_orderedBy(String paramString) {
        if (($$_hibernate_getInterceptor() == null) || ((this.orderedBy == null) || (this.orderedBy.equals(paramString))))
            break label39;
        $$_hibernate_trackChange("orderedBy");
        label39: String str = paramString;
        if ($$_hibernate_getInterceptor() != null)
            str = (String) $$_hibernate_getInterceptor().writeObject(this, "orderedBy", this.orderedBy, paramString);
        this.orderedBy = str;
    }

    private Date $$_hibernate_read_orderedOn() {
        if ($$_hibernate_getInterceptor() != null)
            this.orderedOn = ((Date) $$_hibernate_getInterceptor().readObject(this, "orderedOn", this.orderedOn));
        return this.orderedOn;
    }

    private void $$_hibernate_write_orderedOn(Date paramDate) {
        if (($$_hibernate_getInterceptor() == null) || ((this.orderedOn == null) || (this.orderedOn.equals(paramDate))))
            break label39;
        $$_hibernate_trackChange("orderedOn");
        label39: Date localDate = paramDate;
        if ($$_hibernate_getInterceptor() != null)
            localDate = (Date) $$_hibernate_getInterceptor().writeObject(this, "orderedOn", this.orderedOn, paramDate);
        this.orderedOn = localDate;
    }
    public Object $$_hibernate_getEntityInstance() { return this; }

    public EntityEntry $$_hibernate_getEntityEntry() { return this.$$_hibernate_entityEntryHolder; }

    public void $$_hibernate_setEntityEntry(EntityEntry paramEntityEntry) { this.$$_hibernate_entityEntryHolder = paramEntityEntry; }

    public ManagedEntity $$_hibernate_getPreviousManagedEntity() { return this.$$_hibernate_previousManagedEntity; }

    public void $$_hibernate_setPreviousManagedEntity(ManagedEntity paramManagedEntity) { this.$$_hibernate_previousManagedEntity = paramManagedEntity; }

    public ManagedEntity $$_hibernate_getNextManagedEntity() { return this.$$_hibernate_nextManagedEntity; }

    public void $$_hibernate_setNextManagedEntity(ManagedEntity paramManagedEntity) { this.$$_hibernate_nextManagedEntity = paramManagedEntity; }
}

It's easy to see that the new bytecode enhancement logic is different from the one generated by the previous InstrumentTask. Like the custom dirty checking mechanism, the new bytecode enhancement version records which properties have changed, not just a simple dirty boolean flag. The enhancement logic marks dirty fields as they change. This approach is much more efficient than having to compare all current property values against the load-time snapshot data.

Are we there yet?

Even though the entity class bytecode is enhanced, with Hibernate 4.3.6 there are still missing puzzle pieces. For instance, when calling setNumber(Long number) the following intercepting method gets executed:

private void $$_hibernate_write_number(Long paramLong) {
    if (($$_hibernate_getInterceptor() == null) || ((this.number == null) || (this.number.equals(paramLong))))
        break label39;
    $$_hibernate_trackChange("number");
    label39: Long localLong = paramLong;
    if ($$_hibernate_getInterceptor() != null)
        localLong = (Long) $$_hibernate_getInterceptor().writeObject(this, "number", this.number, paramLong);
    this.number = localLong;
}

In my examples, $$_hibernate_getInterceptor() is always null, which bypasses the $$_hibernate_trackChange("number") call. Because of this, no dirty property is recorded, forcing Hibernate to fall back to the default deep-comparison dirty checking algorithm. So, even though Hibernate has made considerable progress in this particular area, the dirty checking enhancement still requires additional work before it becomes readily available.
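Since the enhanced class implements SelfDirtinessTracker, one way to check whether change tracking actually fires in a given setup is to interrogate an entity instance directly. A small sketch, using only the methods visible in the generated class listed above (the entity lookup is illustrative):

EnhancedOrderLine line = entityManager.find(EnhancedOrderLine.class, 1L);
line.setNumber(42L);

// if the interceptor was wired in, "number" should show up here;
// an empty set means Hibernate will fall back to the
// deep-comparison dirty checking described above
Set<String> dirty = ((SelfDirtinessTracker) line).$$_hibernate_getDirtyAttributes();
System.out.println(dirty);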

Agile Myth #6: “Agile Means No Upfront Design”

This is my 7th post in my 13-part series, "Agile Myths and Misconceptions". It's based on the talk I gave at the first PSIA Softech Philippine Software Engineering Conference. I am striving to correct 12 common misconceptions about Agile Software Development.

First of all, let me correct the notion that Agile has little to no concern about design. Many, if not most, of the signatories of the Agile Manifesto are thought leaders in design. The two people who called the seminal "Lightweight Process Summit" in the Snowbird resort in 2001, where the term "Agile Software Development" was coined and the Agile Manifesto was written, were Martin Fowler and Robert Martin. Martin Fowler is synonymous with Design Patterns. Robert Martin wrote one of the first books with "Agile" in the title – the book "Agile Software Development". This is the book where he outlined the now famous "SOLID" principles, and most of the rest of the book dealt with Design Patterns, Refactoring, and Test-Driven Development.

So now let's discuss upfront design. Agile teams have been told that "Big Design Upfront" is bad, so some interpret that to mean that "No Design Upfront" must be good. The truth is somewhere in the middle – "Minimal Design Upfront", supported by Spikes.

What's Wrong with Big Upfront Design?

The main problem with "Big Upfront Design" is that after all the time and energy spent in creating a design, we almost always find out that much of the design is wrong only when the team starts coding, or even worse, when the team starts performance testing towards the end of the project! For the Java developers, do you remember the dark days of EJB2? All of us leapt like lemmings to adopt EJB into our projects, since it was the Sun Microsystems standard designed to make systems "scalable". What we ended up with was project after project that could only support a fraction of the users it was meant to support. I know of one payroll project that was supposed to support thousands of users, but when it was tested with just ten users the system crawled. The whole project was canceled, after two years of development and huge losses for both client and vendor. How about Healthcare.gov? A lot of its problems were blamed on the use of a very new columnar database that promised performance and scalability, but caused more problems than it solved. And when I myself was a developer, I remember staring at a UML diagram that my boss was forcing me to implement, but that was impossible to implement in code!

The main problem with Big Upfront Design is that very little of it is validated. And by the time we find out that a design decision is wrong, so much code has already been written that changing the design becomes expensive, wasteful, and risky.

Agile Design is Incremental & Evidence-Based

So how is design done in Agile? First is the idea of "Just Enough Design" – the team makes just enough design decisions to get going with the project. However, as with all decisions in Agile, decisions need to be empirical or evidence-based. Design decisions therefore need to be validated before a large amount of code is invested in the design. One of the best ways to validate a design decision is through a Spike Solution. The team writes one or more small prototypes that implement the design, often taking actual use cases or features from the project. If performance is a concern of the design, the team may subject the Spike Solution to performance tests.
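To illustrate how cheap such validation can be, here is a minimal sketch (my example, not from the original talk) of a crude timing harness for a Spike Solution. SpikeOrderRepository and InMemorySpike are hypothetical stand-ins for whatever slice of the design the spike implements:

import java.util.concurrent.TimeUnit;

// A minimal, hand-rolled timing harness for a Spike Solution.
public class SpikeTimer {

    public static void main(String[] args) {
        SpikeOrderRepository repository = new InMemorySpike(); // swap in the real spike
        int iterations = 10_000;

        // Warm up so JIT compilation doesn't skew the numbers.
        for (int i = 0; i < 1_000; i++) {
            repository.findOrder(i);
        }

        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            repository.findOrder(i);
        }
        long elapsed = System.nanoTime() - start;

        System.out.printf("%d lookups in %d ms%n",
            iterations, TimeUnit.NANOSECONDS.toMillis(elapsed));
    }

    // Hypothetical slice of the design the spike implements.
    interface SpikeOrderRepository {
        Object findOrder(long id);
    }

    // Trivial stand-in so the harness compiles and runs.
    static class InMemorySpike implements SpikeOrderRepository {
        @Override
        public Object findOrder(long id) {
            return Long.valueOf(id);
        }
    }
}

Crude numbers from a harness like this are still evidence gathered by the team itself, which is the whole point of a Spike.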
Other kinds of tests may also be applied, depending on what concerns the design is supposed to address (security, integrity, scalability, ease-of-use, etc.). So early on, the team finds out if a design approach is easy or hard to use, performs as expected, or is otherwise relevant to the problems the design is meant to solve.

After some initial design decisions have been made, the rest of the design is done incrementally, with every Sprint. This is the concept of "Emergent Design". With every Sprint, the team continues to do just enough design to implement the near-term work. Improvements or corrections to the design are discovered along the way and implemented, often through Refactoring.

A Note on UML Tools

While I'm advocating design, I'm not advocating going out and getting some UML software. I've tried a lot of tools in my years in software – UML tools start out handy for small designs, but as designs get more complex and the need to collaborate on designs increases, the UML tools tend to hold the team back rather than help it move forward. Arguably the best design tool for a team is a whiteboard. It radiates information to the entire team, it allows for impromptu discussions and collaboration, and its limited space prevents you from overcomplicating the design, so you get on with implementing the code. Don't waste time detailing your design in some UML tool. Scribble just enough on a whiteboard for the team to get going. Finish your design in the code itself.

Which Parts of the Design Are Upfront?

So what specifically are the parts of the design done upfront? For all the projects I've observed, there are at least three aspects of design where some upfront work is done even before the first Sprint begins. These are Domain Model, Architecture, and User Interface. It's pretty much impossible to get a team to work together efficiently unless at least some design in each of these three aspects has been decided on beforehand. Again, only just enough design for the team to get started is done. The rest of the design emerges with each Sprint.

Domain Model

The business logic of a system is the most important part, since it's the very reason why the system exists. It's also the part of the system that usually changes most often. A lot of teams just cram their business logic into procedural routines called "Transaction Scripts". This is fine for simple systems, but for anything moderately complex, it results in a lot of messy, convoluted, duplicated, hard-to-understand code. And since business logic changes a lot, this kind of confusing code can be a source of bugs. In addition, code that's difficult to understand slows the team down. It's therefore important that the business logic is written in a way that's organized, readable, and easy and safe to change. The recommended way to achieve this is through what's called a "Rich Domain Model", meaning the entities of a particular business domain are modeled as classes, and their interactions with one another are coded as method calls to one another. Designing domain models is a lengthy topic, and I probably lost some of you already, so let me just point you to a great starting point – Craig Larman's "Applying UML & Patterns", which is a step-by-step guide to analyzing a business domain to design a domain model. Supplement that with Len Silverston's "The Data Model Resource Book" series, a catalog of industry-tested data models that serve as starting points for your domain model designs.
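To make the Transaction Script vs. Rich Domain Model contrast concrete, here is a small sketch (my illustration, not from the original post; Order and LineItem are hypothetical): the business rule lives inside the entity instead of being scattered across procedural routines.

import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

// Rich Domain Model sketch: the rule "an order's total is the sum of
// its line items, minus its discount, never below zero" lives inside
// the entity instead of in a procedural Transaction Script.
public class Order {

    private final List<LineItem> items = new ArrayList<>();
    private BigDecimal discount = BigDecimal.ZERO;

    public void add(LineItem item) {
        items.add(item);
    }

    public void applyDiscount(BigDecimal amount) {
        if (amount.signum() < 0) {
            throw new IllegalArgumentException("discount must not be negative");
        }
        this.discount = amount;
    }

    // Behavior, not just data: callers ask the Order for its total
    // instead of computing it from exposed fields.
    public BigDecimal total() {
        BigDecimal sum = BigDecimal.ZERO;
        for (LineItem item : items) {
            sum = sum.add(item.price());
        }
        return sum.subtract(discount).max(BigDecimal.ZERO);
    }

    public static class LineItem {
        private final String description;
        private final BigDecimal price;

        public LineItem(String description, BigDecimal price) {
            this.description = description;
            this.price = price;
        }

        public String description() {
            return description;
        }

        public BigDecimal price() {
            return price;
        }
    }
}

The rule is now discoverable and testable in one place; a change to the discount policy touches Order and nothing else.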
I’d suggest that as a team starts with a project, it draws some simple, partial, low-detail UML Class Diagrams to agree on their understanding of the business domain. It would be good if the team can bring in Product Owners or Customers to validate their understanding. This is one reason to be biased towards low-detail diagrams over high-detail diagrams – low-detail diagrams tend to be more understandable and less intimidating to non-technical people. Architecture “Architecture” is just a fancy term for the part of the design that deals with the “non-functional” requirements, or in other words, requirements that are not business logic. Examples are performance, uptime, security, and cost. Often, architectural mistakes are only discovered towards the end, or worse, after the system is deployed. Architectural mistakes could manifest themselves as a slow system, or maybe a security breach! It could manifest itself as expensive recurring costs or the cost of scaling the system is high. Since these mistakes are found towards the end, it’s very expensive and risky to change these architectural decisions, since so much code has already been invested on top of the chosen architecture. Why are architectural decisions so often wrong? Architectural decisions are usually based on vendor documentation, vendor demos, or popularity. A vendor may present a demo or a proof-of-concept to sell you on a product, but remember the vendor is biased – he built the proof-of-concept to sell you on the product, not really to test if the product works for your particular project. There’s also such a thing as “Resume-Driven Development”. This is where developers choose a technology not because they think it’s what’s best for their project, but because experience in the technology will look good in their resume. They’ll use the technology for a while, put it in their resume, and look for a better job. You’re now stuck with a technology that may or may not be the best choice. Oh, have I seen this several times in organizations where their code base is a mess. Architectural decisions should be based on tests, done by the team, and specific to the problems to be solved.In Agile, we build simple prototypes, called “Spike Solutions”, to resolve technical questions. These Spikes can be subjected to performance tests and other evaluations of suitability. Certain user stories or scenarios can be selected and implemented as a full stack using the technologies in question, and then subjected to various tests and other evaluations. User Interface High-level themes and layouts for the user interface of a system should be decided early on, for the purposes of consistency in the user interface. This is evolved and detailed based on feedback from the customer, with each iteration. Wrap-Up Good design, especially object-oriented design, is core to Agile. As such, some upfront thought needs to be given design to set the team in the right direction. Agile just emphasizes simplicity and evidence in design decisions.Reference: Agile Myth #6: “Agile Means No Upfront Design” from our JCG partner Calen Legaspi at the Calen Legaspi blog....

Why NULL is Bad?

A simple example of NULL usage in Java:

public Employee getByName(String name) {
  int id = database.find(name);
  if (id == 0) {
    return null;
  }
  return new Employee(id);
}

What is wrong with this method? It may return NULL instead of an object — that's what is wrong. NULL is a terrible practice in an object-oriented paradigm and should be avoided at all costs. There have been a number of opinions about this published already, including the Null References: The Billion Dollar Mistake presentation by Tony Hoare and the entire Object Thinking book by David West. Here, I'll try to summarize all the arguments and show examples of how NULL usage can be avoided and replaced with proper object-oriented constructs.

Basically, there are two possible alternatives to NULL. The first one is the Null Object design pattern (the best way is to make it a constant):

public Employee getByName(String name) {
  int id = database.find(name);
  if (id == 0) {
    return Employee.NOBODY;
  }
  return new Employee(id);
}

The second possible alternative is to fail fast by throwing an exception when you can't return an object:

public Employee getByName(String name) {
  int id = database.find(name);
  if (id == 0) {
    throw new EmployeeNotFoundException(name);
  }
  return new Employee(id);
}

Now, let's see the arguments against NULL. Besides Tony Hoare's presentation and David West's book mentioned above, I read these publications before writing this post: Clean Code by Robert Martin, Code Complete by Steve McConnell, Say "No" to "Null" by John Sonmez, and the Is returning null bad design? discussion on StackOverflow.

Ad-hoc Error Handling

Every time you get an object as an input you must check whether it is NULL or a valid object reference. If you forget to check, a NullPointerException (NPE) may break execution at runtime. Thus, your logic becomes polluted with multiple checks and if/then/else forks:

// this is a terrible design, don't reuse
Employee employee = dept.getByName("Jeffrey");
if (employee == null) {
  System.out.println("can't find an employee");
  System.exit(-1);
} else {
  employee.transferTo(dept2);
}

This is how exceptional situations are supposed to be handled in C and other imperative procedural languages. OOP introduced exception handling primarily to get rid of these ad-hoc error handling blocks. In OOP, we let exceptions bubble up until they reach an application-wide error handler, and our code becomes much cleaner and shorter:

dept.getByName("Jeffrey").transferTo(dept2);

Consider NULL references an inheritance of procedural programming, and use 1) Null Objects or 2) Exceptions instead.

Ambiguous Semantics

In order to explicitly convey its meaning, the function getByName() would have to be named getByNameOrNullIfNotFound(). The same should happen with every function that returns an object or NULL. Otherwise, ambiguity is inevitable for a code reader. Thus, to keep the semantics unambiguous, you would have to give longer names to functions. To get rid of this ambiguity, always return a real object, a null object, or throw an exception.

Some may argue that we sometimes have to return NULL for the sake of performance. For example, the get() method of interface Map in Java returns NULL when there is no such item in the map:

Employee employee = employees.get("Jeffrey");
if (employee == null) {
  throw new EmployeeNotFoundException();
}
return employee;

This code searches the map only once thanks to the usage of NULL in Map.
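A quick aside before continuing: the Null Object alternative above recommends making it a constant, but no constant is ever shown. Here is a hedged sketch of what such an Employee.NOBODY might look like (my code, not the author's; the Department stub exists only so the sketch compiles):

public class Employee {

    // Null Object constant: a real Employee that stands for "nobody".
    // It answers harmless questions and refuses business operations.
    public static final Employee NOBODY = new Employee(0) {
        @Override
        public String name() {
            return "nobody";
        }

        @Override
        public void transferTo(Department dept) {
            throw new IllegalStateException("NOBODY cannot be transferred");
        }
    };

    private final int id;

    public Employee(int id) {
        this.id = id;
    }

    public String name() {
        return "employee #" + this.id;
    }

    public void transferTo(Department dept) {
        // real transfer logic would live here
    }
}

// Minimal stub so the sketch is self-contained.
class Department {
}

Callers can now invoke employee.name() without a null check and still get a sensible answer; only business-critical calls fail loudly. Back to the Map example: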
If we refactored Map so that its get() method threw an exception when nothing is found, our code would look like this:

if (!employees.containsKey("Jeffrey")) { // first search
  throw new EmployeeNotFoundException();
}
return employees.get("Jeffrey"); // second search

Obviously, this method is twice as slow as the first one. What to do? The Map interface (no offense to its authors) has a design flaw. Its get() method should have returned an Iterator, so that our code would look like:

Iterator found = employees.search("Jeffrey");
if (!found.hasNext()) {
  throw new EmployeeNotFoundException();
}
return found.next();

By the way, that is exactly how the C++ STL map::find() method is designed.

Computer Thinking vs. Object Thinking

The statement if (employee == null) is understood by someone who knows that an object in Java is a pointer to a data structure and that NULL is a pointer to nothing (0x00000000, in Intel x86 processors). However, if you start thinking as an object, this statement makes much less sense. This is how our code looks from an object's point of view:

- Hello, is it a software department?
- Yes.
- Let me talk to your employee "Jeffrey" please.
- Hold the line please...
- Hello.
- Are you NULL?

The last question in this conversation sounds weird, doesn't it? Instead, if they hang up the phone after our request to speak to Jeffrey, that causes a problem for us (Exception). At that point, we try to call again or inform our supervisor that we can't reach Jeffrey and complete a bigger transaction. Alternatively, they may let us speak to another person, who is not Jeffrey, but who can help with most of our questions or refuse to help if we need something "Jeffrey specific" (Null Object).

Slow Failing

Instead of failing fast, the code above attempts to die slowly, killing others on its way. Instead of letting everyone know that something went wrong and that exception handling should start immediately, it hides this failure from its client. This argument is close to the "ad-hoc error handling" discussed above. It is a good practice to make your code as fragile as possible, letting it break when necessary. Make your methods extremely demanding as to the data they manipulate. Let them complain by throwing exceptions if the provided data is not sufficient or simply doesn't fit the main usage scenario of the method. Otherwise, return a Null Object that exposes some common behavior and throws exceptions on all other calls:

public Employee getByName(String name) {
  int id = database.find(name);
  Employee employee;
  if (id == 0) {
    employee = new Employee() {
      @Override
      public String name() {
        return "anonymous";
      }
      @Override
      public void transferTo(Department dept) {
        throw new AnonymousEmployeeException(
          "I can't be transferred, I'm anonymous"
        );
      }
    };
  } else {
    employee = new Employee(id);
  }
  return employee;
}

Mutable and Incomplete Objects

In general, it is highly recommended to design objects with immutability in mind. This means that an object gets all the necessary knowledge during instantiation and never changes its state during its entire lifecycle. Very often, NULL values are used in lazy loading, to make objects incomplete and mutable. For example:

public class Department {
  private Employee found = null;
  public synchronized Employee manager() {
    if (this.found == null) {
      this.found = new Employee("Jeffrey");
    }
    return this.found;
  }
}

This technique, although widely used, is an anti-pattern in OOP.
This is mostly because it makes an object responsible for performance problems of the computational platform, which is something an Employee object should not be aware of. Instead of managing its state and exposing business-relevant behavior, an object has to take care of the caching of its own results — this is what lazy loading is about. Caching is not something an employee does in the office, is it?

The solution? Don't use lazy loading in such a primitive way as in the example above. Instead, move this caching problem to another layer of your application. For example, in Java, you can use aspect-oriented programming. jcabi-aspects, for instance, has a @Cacheable annotation that caches the value returned by a method:

import com.jcabi.aspects.Cacheable;

public class Department {
  @Cacheable(forever = true)
  public Employee manager() {
    return new Employee("Jacky Brown");
  }
}

I hope this analysis was convincing enough that you will stop NULL-ing your code!

Related Posts

You may also find these posts interesting: Typical Mistakes in Java Code, OOP Alternative to Utility Classes, Avoid String Concatenation, Objects Should Be Immutable.

Reference: Why NULL is Bad? from our JCG partner Yegor Bugayenko at the About Programming blog....

OOP Alternative to Utility Classes

A utility class (aka helper class) is a "structure" that has only static methods and encapsulates no state. StringUtils, IOUtils, and FileUtils from Apache Commons; Iterables and Iterators from Guava; and Files from JDK7 are perfect examples of utility classes. This design idea is very popular in the Java world (as well as C#, Ruby, etc.) because utility classes provide common functionality used everywhere. Here, we want to follow the DRY principle and avoid duplication. Therefore, we place common code blocks into utility classes and reuse them when necessary:

// This is a terrible design, don't reuse
public class NumberUtils {
  public static int max(int a, int b) {
    return a > b ? a : b;
  }
}

Indeed, this is a very convenient technique… or is it?

Utility Classes Are Evil

However, in an object-oriented world, utility classes are considered a very bad (some may even say "terrible") practice. There have been many discussions of this subject; to name a few: Are Helper Classes Evil? by Nick Malik, Why helper, singletons and utility classes are mostly bad by Simon Hart, Avoiding Utility Classes by Marshal Ward, Kill That Util Class! by Dhaval Dalal, and Helper Classes Are A Code Smell by Rob Bagby. Additionally, there are a few questions on StackExchange about utility classes: If a "Utilities" class is evil, where do I put my generic code? and Utility Classes are Evil.

A dry summary of all their arguments is that utility classes are not proper objects; therefore, they don't fit into the object-oriented world. They were inherited from procedural programming, mostly because most programmers were used to a functional decomposition paradigm back then. Assuming you agree with these arguments and want to stop using utility classes, I'll show by example how these creatures can be replaced with proper objects.

Procedural Example

Say, for instance, you want to read a text file, split it into lines, trim every line and then save the results in another file. This can be done with FileUtils from Apache Commons:

void transform(File in, File out) {
  Collection<String> src = FileUtils.readLines(in, "UTF-8");
  Collection<String> dest = new ArrayList<>(src.size());
  for (String line : src) {
    dest.add(line.trim());
  }
  FileUtils.writeLines(out, dest, "UTF-8");
}

The above code may look clean; however, this is procedural programming, not object-oriented. We are manipulating data (bytes and bits) and explicitly instructing the computer where to retrieve it from and where to put it, on every single line of code. We're defining a procedure of execution.

Object-Oriented Alternative

In an object-oriented paradigm, we should instantiate and compose objects, thus letting them manage data when and how they desire. Instead of calling supplementary static functions, we should create objects that are capable of exposing the behaviour we are seeking:

public class Max implements Number {
  private final int a;
  private final int b;
  public Max(int x, int y) {
    this.a = x;
    this.b = y;
  }
  @Override
  public int intValue() {
    return this.a > this.b ? this.a : this.b;
  }
}

This procedural call:

int max = NumberUtils.max(10, 5);

will become object-oriented:

int max = new Max(10, 5).intValue();

Potato, potato?
Not really; just read on…

Objects Instead of Data Structures

This is how I would design the same file-transforming functionality as above, but in an object-oriented manner:

void transform(File in, File out) {
  Collection<String> src = new Trimmed(
    new FileLines(new UnicodeFile(in))
  );
  Collection<String> dest = new FileLines(
    new UnicodeFile(out)
  );
  dest.addAll(src);
}

FileLines implements Collection<String> and encapsulates all file reading and writing operations. An instance of FileLines behaves exactly like a collection of strings and hides all I/O operations. When we iterate it — a file is being read. When we addAll() to it — a file is being written. Trimmed also implements Collection<String> and encapsulates a collection of strings (the Decorator pattern). Every time the next line is retrieved, it gets trimmed.

All classes participating in the snippet are rather small: Trimmed, FileLines, and UnicodeFile. Each of them is responsible for a single feature, thus perfectly following the single responsibility principle. On our side, as users of the library, this may not seem so important, but for their developers it is an imperative. It is much easier to develop, maintain and unit-test the class FileLines than a readLines() method in FileUtils, a utility class with 80+ methods and 3,000 lines. Seriously, look at its source code.

An object-oriented approach also enables lazy execution. The in file is not read until its data is required. If we fail to open out due to some I/O error, the first file won't even be touched. The whole show starts only after we call addAll(). All lines in the second snippet, except the last one, instantiate and compose smaller objects into bigger ones. This object composition is rather cheap for the CPU since it doesn't cause any data transformations. Besides that, it is obvious that the second script runs in O(1) space, while the first one executes in O(n). This is the consequence of our procedural approach to data in the first script.

In an object-oriented world, there is no data; there are only objects and their behavior!

Related Posts

You may also find these posts interesting: Why NULL is Bad?, Avoid String Concatenation, Objects Should Be Immutable, Typical Mistakes in Java Code.

Reference: OOP Alternative to Utility Classes from our JCG partner Yegor Bugayenko at the About Programming blog....

Bad program structure: the complectation

Degrees of badness

Many programmers consider source code dependencies either circular or non-circular, with circular dependencies representing The Greatest Imaginable Evil (which of course they do) and non-circular dependencies representing the acceptable, if drab, face of source code structure. This second representation is not quite true. The digital gods do not create all non-circular dependencies equal.

Figure 1 shows six methods arranged in two independent transitive dependencies, the method chains a() → b() → c() and d() → e() → f(). Straight lines show dependencies down the page and curved lines (of which there are none, yet) show dependencies up the page. As soon as two transitive dependencies dangle close to one another in the real world, however, dependencies between the chains start popping up. In figure 2, for example, method e() has taken a shine to method c(). So far, so good. Not a circular dependency in sight. Now consider what happens when the transitive dependency on the left forms a second dependency on its counterpart, with f() sprouting a dependency towards b(). Suddenly, something looks wrong. It seems as though the transitive dependency on the left has developed an unhealthy interest in the one on the right. The two interconnecting dependencies cross one another, evoking the braiding or intertwining of ropes. Rich Hickey famously channeled his inner Jane Austen to resurrect an archaic verb describing just such an intertwining: "to complect." In his honour, we shall call the above type of dependency a "complectation" (we cannot use "complection," as this means, "appearance of the skin, especially of the face" – though, oddly, complectations tend to give programs a sickly complection).

Nor is this merely an aesthetic point. Complectations admit mathematical definition and objective structural analysis. The reduction of ripple effect motivates all great structure, and programmers can measure susceptibility to ripple effect by counting the number of methods on which each method depends; the higher this value, the worse the structure. This is the "impacted set" of the program. Labeling each method thus, we can re-draw figure 3 as figure 4 (ultimately d(), for example, depends on 4 other methods). Figure 4 shows an impacted set of 12. If, however, the program could be re-written slightly shallower, lifting the invocation of f() from e() to d(), then figure 5, which eliminates the complectation entirely, would obtain. Figure 5 has an impacted set of 10, a 17% reduction compared to figure 4, achieved by moving just one method dependency.

This, of course, is a toy example, and often the programmer requires precisely what figure 4 shows. But at least complectations raise the question, and weeding out unnecessary complectations can improve structure. Below, for example, are two methods (reduced for presentation purposes), createGraphicsContext() and colourBackground(), with the former calling the latter:

private Graphics2D createGraphicsContext() {
  Graphics2D graphics2D = canvas.getBufferedImage().createGraphics();
  colourBackground(graphics2D);
  graphics2D.setColor(options.getColour(ColourTag.FOREGROUND));
  return graphics2D;
}

private void colourBackground(Graphics2D graphics2D) {
  BufferedImage bufferedImage = canvas.getBufferedImage();
  Color background = options.getColour(ColourTag.BACKGROUND);
  graphics2D.setColor(background);
  graphics2D.fillRect(0, 0, bufferedImage.getWidth(), bufferedImage.getHeight());
}

Figure 6 portrays the methods as they appear in the wild.
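(An aside before examining figure 6's complectation: counting impacted sets by hand gets tedious, and it is easy to automate. Here is a hedged sketch of such a count over the toy graph of figure 4 — my illustration; the article itself proposes no such tool.)

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Computes each method's "impacted set" (the methods it transitively
// depends on) and sums the counts for the whole program.
public class ImpactedSet {

    public static void main(String[] args) {
        // The complected structure of figure 4:
        // a->b, b->c, d->e, e->f, e->c, f->b
        Map<String, List<String>> deps = new HashMap<>();
        deps.put("a", Arrays.asList("b"));
        deps.put("b", Arrays.asList("c"));
        deps.put("d", Arrays.asList("e"));
        deps.put("e", Arrays.asList("f", "c"));
        deps.put("f", Arrays.asList("b"));

        int total = 0;
        for (String method : Arrays.asList("a", "b", "c", "d", "e", "f")) {
            total += reachable(method, deps).size();
        }
        System.out.println("Impacted set: " + total); // prints 12, as in figure 4
    }

    // Depth-first collection of every method reachable from start.
    private static Set<String> reachable(String start, Map<String, List<String>> deps) {
        Set<String> seen = new HashSet<>();
        collect(start, deps, seen);
        return seen;
    }

    private static void collect(String method, Map<String, List<String>> deps, Set<String> seen) {
        List<String> callees = deps.get(method);
        if (callees == null) {
            return;
        }
        for (String callee : callees) {
            if (seen.add(callee)) {
                collect(callee, deps, seen);
            }
        }
    }
}

Lifting f() from e() to d(), as figure 5 does, drops the total to 10. Back to the two methods above.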
The complectation here arises from both createGraphicsContext() and colourBackground() calling getBufferedImage(), when it is trivial to re-write the methods such that createGraphicsContext() calls getBufferedImage() and passes the returned object to colourBackground() (thus reducing colourBackground()'s exposure to getBufferedImage()):

private Graphics2D createGraphicsContext() {
  BufferedImage bufferedImage = canvas.getBufferedImage();
  Graphics2D graphics2D = bufferedImage.createGraphics();
  colourBackground(graphics2D, bufferedImage);
  graphics2D.setColor(options.getColour(ColourTag.FOREGROUND));
  return graphics2D;
}

private void colourBackground(Graphics2D graphics2D, BufferedImage bufferedImage) {
  Color background = options.getColour(ColourTag.BACKGROUND);
  graphics2D.setColor(background);
  graphics2D.fillRect(0, 0, bufferedImage.getWidth(), bufferedImage.getHeight());
}

This yields the ever-so-slightly improved figure 7.

Summary

Complectations are not software development's biggest problem. They are a minor nuisance, a minor opportunity for source code improvement. But they show the interesting nuggets to be found whilst panning the freezing waters of program structure.

Reference: Bad program structure: the complectation from our JCG partner Edmund Kirwan at the A blog about software blog....

Your Worst Enemy Is Yourself

Here’s the thing… You could have been exactly where you want to be right now. You could have gotten the perfect job. You could have started that business you always wanted to start. You could have gotten those 6-pack abs. You could have even met the love of your life. There has only been one thing standing in your way, and there will always be one thing standing in your way–you! You are constantly at war with yourselfAsk most people what they need to do to solve their problems and they can give you a definitive answer. Most people know how reach their fitness goals, their financial goals, and any other goals they have. Knowledge is rarely the problem. Right now, I guarantee you that there is at least one goal that you’d like to achieve, that you know how to achieve, but you still haven’t been able to do–why? It’s not lack of knowledge that is holding you back, it is a war within yourself–a fierce battle that is raging between different parts of your conscious mind. One side is telling you what you need to do, the other side is justifying all kinds of short-term excuses in order to hold you back. One of my favorite authors, Steven Pressfield, calls this mysterious force that is fighting you resistance, in his excellent book, The War of Art. But, I’m going to level with you here and tell you that the truth is that the enemy you are fighting is actually yourself. As humans we are constantly involved in a self-sabotage that holds us back from achieving what we desire. That is one of the reasons that you’ll notice, if you follow this blog regularly or my YouTube videos or other productions, that I focus on the mental aspect of software development. I can give you all the knowledge in the world about how to write good code and how to succeed at your career, but ultimately it won’t do you any good if you can’t learn to conquer yourself. Conquering yourself How do you beat an enemy that knows everything about you? How can you possibly defeat an adversary that has the power to undermine any defense you prepare against him? Let me tell you a little story that might help to illustrate the answer. Yesterday, I took my family on a drive to the northern part of west Maui. We hadn’t gone around the northern part of Maui, but we knew that the road was a bit dangerous and scary. My wife and I discussed whether we should try and make the trip all the way around the north side of the island. We heard there was some killer banana bread at a little stand about halfway around. After some deliberation, we finally decided to do it. We got on the road and drove past a sign that said “end of state highway.” From there on out it was a single lane road–actually less than a single lane road for many parts–that wrapped around steep mountains and sheer cliffs with no guard-rails in sight.In all honesty, the road itself wasn’t all that scary. The biggest fear was that someone would be coming the other way and we’d have to drive backwards down the road until we could find a turn-out to let them pass. After about an hour and a half, we made it to the other side, world’s best banana bread in hand. Now, here is the interesting thing about that trip; for the scary parts of the road, there was no place to turn around. Once you were winding around the golf-cart sized lane going around the mountains, you had to keep going forward, because you didn’t have the option of going back. Had there been places to turn around, I might have chickened out and turned around, but because I didn’t have the option, I had to keep going forward. 
That is the key to conquering yourself – leaving no quarter.

Pre-make decisions and commit to them

If you want to be victorious in this battle with yourself, you have to realize that you can't win. That's right, you can't win, so don't even try… at least not in a battle of wills. If you constantly put yourself in positions where you have to make judgement calls, you'll constantly find yourself making the wrong calls and being defeated time and time again. When you are at the decision-making point of a judgement call, you'll find your enemy has all kinds of tricks up its sleeve. You'll suddenly feel hopeless. You'll convince yourself that you don't really want what you are seeking. You'll tell yourself that one piece of cake won't hurt anything. You'll promise yourself to get back on the wagon tomorrow. There is no end to the excuses and justifications you'll come up with to stop yourself from achieving success.

So, here is the trick: eliminate as many of the judgement calls as possible. When you want to do something, spend time carefully forming a plan, take time to think things through, then commit to the plan and don't allow yourself the opportunity to question the plan until after it has been executed. Basically, take the part of your brain that always defeats you hostage. Tie him up, throw him in the backseat of your car, and drive forward down that one-lane mountain road. Even if he ends up breaking free, he won't be able to convince you to turn around, since you can't.

Rules, rules, rules…

The key is to set up rules for yourself that will govern your actions in certain situations. If you want to lose weight, "eating healthy" is not a good plan. You need an actual diet that is planned out in advance. Making judgement calls every time you need to eat will eventually wear down your resolve, and you'll find yourself stuffing your face with foods that definitely aren't healthy. If you want to write a blog post every week, you need to make a rule about it. Don't give yourself the option of not doing it. Don't write only when you feel like it. Want to improve your programming skills by practicing solving problems or reading technical books? Come up with a certain amount of time you have to devote to the task each day, and don't allow yourself to question whether or not you should do it; instead, make it a rule you must obey every single day.

This concept of applying so many rules to your life may not seem very appealing, but the truth is we all hate freedom. We just can't handle it. We think we want freedom, but when we have it, all we do is sit on the couch and do nothing all day. True freedom is doing exactly what you want to do, and in order to do that you need discipline, and discipline comes from following rules. The difference here is who the master is. Is someone else setting the rules for you to follow, or are you setting the rules? If you can't set and obey your own rules, you will always be subject to the rules of others. If you can't learn to be your own master, you'll always have another person as a master over you. The level of true, actual freedom you are afforded is directly related to your ability to obey the rules you set for yourself. Think of it another way: if you can't control yourself to do what you intend to do, then you don't have any freedom at all. Paradoxically, the most free person is the one who is able to constrain themselves the most, because they always do exactly what they intend to do.
Sticking to the plan

Now, just because you make rules and follow them doesn't mean that you can't ever break those rules or change your mind, but it is critical that you stay the course long enough to achieve the results you are trying to achieve, or at least long enough to be sure you aren't giving up prematurely. I have a standing rule that I would suggest you apply to your life right now. The rule is that I can't ever quit anything at the time of making a decision about whether or not to do it. What this means is that if I set some rules for myself, like going for a run three times a week, I can't break that rule or rewrite that rule when I wake up in the morning and don't feel like running. I also can't break or change that rule in the middle of the week. I could decide this week that next week I will drop the running down to two times a week or quit it altogether, but this week, I'll carry forward with the plan.

By having this master "no quitters" rule in place, you protect yourself from the nasty self-sabotage of changing course midstream. It's not that you can never change course – you can – you just have to limit how often you change course, if you ever want to get anywhere, and you need to make sure you are not changing course for the wrong reason. By making sure you never quit midstream, by making sure you always follow through with at least the current leg of the journey, you prevent yourself from making critical mistakes due to just having a bad day or having the wrong mental attitude.

Some final advice

So, if you want to defeat your worst enemy – yourself – here is my advice:

- Plan as many things in advance as possible. Always have some plan that will take you forward towards your goal.
- Set rules for yourself around your plan. Make these rules as strict as possible and obey them at all costs. Once you go down the slippery slope of breaking your own rules, they'll have less and less power, so take these rules seriously.
- Implement the "no quitters" rule. Don't quit or change your rules at the time of making a decision. Plan ahead for when you'll change the rules or break them.
- Don't give up. You'll fail, and that is OK. But don't ever accept failure or defeat. Get back on the horse, and keep fighting the battle.

So, set some rules, strap your brain into the backseat, and drive right through those one-lane mountain roads of life. Who knows, there might even be some tasty banana bread waiting for you on the other side.

Reference: Your Worst Enemy Is Yourself from our JCG partner John Sonmez at the Making the Complex Simple blog....

Migrate your project from SVN to Git Stash in a few steps

Step-by-step guide on how to migrate your SVN repository, with all its history, to Stash, the Atlassian git manager.

Only once:

- Add the ssh key.
- Open a terminal.
- Create the authors.txt file in ~/Documents/.
- git config svn.authorsfile ~/Documents/authors.txt

authors.txt format: username = Name LastName <email>

Example:

gordof = Gordon Flash <gordon.flash@superhero.com>
marcoc = Marco Castigliego <marco.castigliego@superhero.com>

For each project:

For this example I will migrate a project called super-hero-service.

1. Tell your team members not to commit on the project during the process.
2. Open a terminal.
3. cd ~
4. mkdir migration
5. cd migration
6. git svn clone svn+ssh://marcoc@svn.superhero.com/com/super/hero/Services/ --trunk=super-hero-service super-hero-service
7. Go grab a coffee.
8. cd super-hero-service
9. git svn show-ignore (this outputs everything in the SVN ignore property to the console; copy it into a new file called .gitignore at the root of your repository, then add and commit the file).
10. Go to https://stash.superhero.com/projectsServices and create a repository called super-hero-service.
11. git remote add origin ssh://git@stash.superhero.com:2022/services/super-hero-service.git
12. git push -u origin master

Reference: Migrate your project from SVN to Git Stash in a few steps from our JCG partner Marco Castigliego at the Remove duplication and fix bad names blog....

Look no Further! The Final Answer to “Where to Put Generated Code?”

This recent question on Stack Overflow made me think. Why does jOOQ suggest to put generated code under "/target" and not under "/src"? … and I'm about to give you the final answer to "Where to Put Generated Code?"

This isn't only about jOOQ

Even if you're not using jOOQ, or if you're using jOOQ but without the code generator, there might be some generated source code in your project. There are many tools that generate source code from other data, such as:

- The Java compiler (OK, byte code, not strictly source code. But still code generation)
- XJC, from XSD files
- Hibernate, from .hbm.xml files or from your schema
- Xtend, which translates Xtend code to Java code
- You could even consider data transformations, like XSLT
- many more…

In this article, we're going to look at how to deal with jOOQ-generated code, but the same thoughts apply to any other type of code generated from other code or data. Now, the very, very interesting strategic question that we need to ask ourselves is: where to put that code? Under version control, like the original data? Or should we consider generated code to be derived code that must be re-generated all the time?

The answer is nigh…

It depends! Nope, unfortunately, as with many other flame-war-prone discussions, this one doesn't have a completely correct or wrong answer either. There are essentially two approaches:

Considering generated code as part of your code base

When you consider generated code as part of your code base, you will want to:

- Check generated sources into your version control system
- Use manual source code generation
- Possibly even use partial source code generation

This approach is particularly useful when your Java developers are not in full control of, or do not have full access to, your database schema (or your XSD or your Java code, etc.), or if you have many developers working simultaneously on the same database schema, which changes all the time. It is also useful to be able to track side-effects of database changes, as your checked-in database schema can be considered when you want to analyse the history of your schema. With this approach, you can also keep track of changes in the behaviour of the jOOQ code generator, e.g. when upgrading jOOQ, or when modifying the code generation configuration. When you use this approach, you will treat your generated code as an external library with its own lifecycle. The drawback of this approach is that it is more error-prone and possibly a bit more work, as the actual schema may go out of sync with the generated schema.

Considering generated code as derived artefacts

When you consider generated code to be derived artefacts, you will want to:

- Check in only the actual DDL, i.e. the "original source of truth" (e.g. controlled via Flyway)
- Regenerate jOOQ code every time the schema changes
- Regenerate jOOQ code on every machine – including continuous integration machines, and possibly, if you're crazy enough, on production

This approach is particularly useful when you have a smaller database schema that is under the full control of your Java developers, who want to profit from the increased quality of being able to regenerate all derived artefacts in every step of your build. This approach is fully supported by Maven, for instance, which foresees special directories (e.g. target/generated-sources) and phases (e.g. <phase>generate-sources</phase>) specifically for source code generation.
The drawback of this approach is that the build may break in perfectly "acceptable" situations, when parts of your database are temporarily unavailable.

Pragmatic approach

Some of you might not like that answer, but there is also a pragmatic approach, a combination of both. You can consider some code as part of your code base, and some code as derived. For instance, jOOQ-meta's generated sources (used to query the dictionary views / INFORMATION_SCHEMA when generating jOOQ code) are put under version control, as few jOOQ contributors will be able to run the jOOQ-meta code generator against all supported databases. But in many integration tests, we re-generate the sources every time to be sure the code generator works correctly. Huh!

Conclusion

I'm sorry to disappoint you. There is no final answer to whether one approach or the other is better. Pick the one that offers you more value in your specific situation. In case you choose your generated code to be part of the code base, read this interesting experience report on the jOOQ User Group by Witold Szczerba about how best to achieve this.

Reference: Look no Further! The Final Answer to "Where to Put Generated Code?" from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.