Improve Performance by Caching and Compression

Web page designs are becoming more innovative, with rich interfaces that involve extra assets such as JavaScript, CSS, and images. Most of the end-user response time is tied up in downloading these components. Reducing the number of HTTP requests and the response size are the key levers for improving web application performance. This article explains a caching filter and a compression filter that are suitable for use with any web application to optimize the number of requests and the response size.

Compressing Content: Gzip Servlet Filter

HTTP compression is a way to compress content transferred from servers to browsers, which reduces the HTTP response size. This standards-based method of delivering compressed content is built into HTTP/1.1, and all modern web browsers support the HTTP/1.1 protocol, i.e. they can decode compressed files automatically on the client side. The smaller your content, the faster it can be sent. Therefore, if you compress the content of your web application, it will be displayed on the user's screen faster.

Source code: download the OpenWebOptimizer Gzip servlet filter source code, with sample code and a usage document, from the download link.

Gzip is the most popular and effective compression method at this time. It was developed by the GNU project and standardized by RFC 1952. Gzipping generally reduces the response size by about 70%. To add the Gzip filter, just add the code below to web.xml (along with the attached Gzip filter classes):

<filter>
  <filter-name>GZIPFilter</filter-name>
  <filter-class>com.opcat.gzip.GZIPFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>GZIPFilter</filter-name>
  <url-pattern>*</url-pattern>
</filter-mapping>

HTTP Servlet Caching

Caching reduces the number of HTTP requests, which makes web pages faster.
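As a quick sanity check of the roughly 70% figure quoted for gzip, the JDK's own GZIP streams can be used directly (a stdlib sketch, separate from the servlet filter code; the class name and sample content are made up for illustration):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class GzipDemo {

    // Compress a byte array with gzip, which is essentially what a gzip
    // servlet filter does to the buffered response body before sending it.
    static byte[] gzip(byte[] plain) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buffer)) {
            gz.write(plain);
        }
        return buffer.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Repetitive markup, like real HTML/CSS/JS, compresses very well.
        byte[] page = "<div class=\"row\">hello</div>\n".repeat(500).getBytes();
        byte[] zipped = gzip(page);
        System.out.println(page.length + " bytes -> " + zipped.length + " bytes gzipped");
    }
}
```

Running this on markup-like input typically shows a reduction well beyond 70%, which is why the filter pays off for text responses (already-compressed formats such as JPEG or ZIP gain little).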
Web applications generate browser content, and in many scenarios the content won't change between requests. If you can cache the response, you can reuse it without further HTTP requests, which improves web application performance. We can achieve this by writing a simple caching filter.

Source code: download the OpenWebOptimizer caching servlet filter source code, with sample code and a usage document, from the download link.

The caching filter is a servlet filter that intercepts all requests and checks whether it holds a valid cached copy of the response. If it does, the filter responds immediately with the cached contents. If no cached copy exists, the filter passes the request on to its intended endpoint, and the response is generated and cached for future requests. To add the caching filter, just add the code below to web.xml:

<filter>
  <filter-name>jsCache</filter-name>
  <filter-class>com.opcat.cache.CacheFilter</filter-class>
  <init-param>
    <param-name>private</param-name>
    <param-value>false</param-value>
  </init-param>
  <init-param>
    <param-name>expirationTime</param-name>
    <!-- Change this to set the expiry time for re-validating the files -->
    <param-value>0</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>jsCache</filter-name>
  <url-pattern>*</url-pattern>
</filter-mapping>

Summary

Caching and compression filters optimize HTTP request calls, content size, and content generation, and they are among the most important levers for web application performance. The attached project is free and open source, so feel free to use it in your application.

Reference: Improve Performance by caching and compression from our JCG partner Nitin Kumar at the Tech My Talk blog.
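The expiration idea behind such a caching filter can be sketched with a tiny in-memory cache (plain JDK; the class and method names here are hypothetical and are not the OpenWebOptimizer code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class ResponseCache {

    // A cached response body plus the time it was created.
    private record Entry(String body, long createdAt) {}

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long expirationMillis;

    public ResponseCache(long expirationMillis) {
        this.expirationMillis = expirationMillis;
    }

    // Return the cached body if it is still fresh; otherwise regenerate and
    // cache it. This mirrors what the filter does around the servlet chain:
    // the Supplier stands in for letting the request reach its endpoint.
    public String get(String url, Supplier<String> generator) {
        Entry e = cache.get(url);
        if (e != null && System.currentTimeMillis() - e.createdAt < expirationMillis) {
            return e.body;
        }
        String body = generator.get();
        cache.put(url, new Entry(body, System.currentTimeMillis()));
        return body;
    }
}
```

With a one-minute expiration, a second request for the same URL is served from memory and the expensive generation step runs only once per window.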

Spring MVC: Security with MySQL and Hibernate

Spring has a lot of different modules, each useful for a concrete purpose. Today I'm going to talk about Spring Security. This module provides a flexible approach to managing permissions for access to different parts of a web application. In this post I'll examine the integration of Spring MVC, Hibernate and MySQL with Spring Security. A regular case for any web application is the separation of functionality between user groups. E.g. a user with the "moderator" role can edit existing records in a database. A user with the "admin" role can do the same things as the moderator, plus create new records. In a Spring MVC application, permission management can be implemented with Spring Security.

The goal

As an example I will use a sample Spring MVC application with Hibernate. The users and their roles will be stored in a database; MySQL will be used as the database. I'm going to create three tables: users, roles, user_roles. As you might guess, user_roles is an intermediary table. The application will have two roles: moderator and admin. There will be several pages with access for the moderator and for the admin.

Preparation

In order to make Spring Security available in the project, just add the following dependencies to the pom.xml file:

<!-- Spring Security -->
<dependency>
  <groupId>org.springframework.security</groupId>
  <artifactId>spring-security-core</artifactId>
  <version>3.1.3.RELEASE</version>
</dependency>
<dependency>
  <groupId>org.springframework.security</groupId>
  <artifactId>spring-security-web</artifactId>
  <version>3.1.3.RELEASE</version>
</dependency>
<dependency>
  <groupId>org.springframework.security</groupId>
  <artifactId>spring-security-config</artifactId>
  <version>3.1.3.RELEASE</version>
</dependency>

I also have to create three tables in the database and insert several records there.
CREATE TABLE `roles` (
  `id` int(6) NOT NULL AUTO_INCREMENT,
  `role` varchar(20) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=5 DEFAULT CHARSET=utf8;

CREATE TABLE `users` (
  `id` int(6) NOT NULL AUTO_INCREMENT,
  `login` varchar(20) NOT NULL,
  `password` varchar(20) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=5 DEFAULT CHARSET=utf8;

CREATE TABLE `user_roles` (
  `user_id` int(6) NOT NULL,
  `role_id` int(6) NOT NULL,
  KEY `user` (`user_id`),
  KEY `role` (`role_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

And here is the code for the roles and users:

INSERT INTO hibnatedb.roles (role) VALUES ('admin'), ('moderator');

INSERT INTO hibnatedb.users (login, password) VALUES ('moder', '111111'), ('adm', '222222');

INSERT INTO hibnatedb.user_roles (user_id, role_id) VALUES (1, 2), (2, 1);

Main part

The complete structure of the project is shown below. Since you can find this project on GitHub, I'll omit some things that are outside the scope of this topic. I want to start from the heart of every web project, the web.xml file. Spring Security is based on simple servlet filters, so I need to add a declaration of the filter to the deployment descriptor:

...
<filter>
  <filter-name>springSecurityFilterChain</filter-name>
  <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
</filter>
<filter-mapping>
  <filter-name>springSecurityFilterChain</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
...

Now it's time to create entities for the users and roles tables:

@Entity
@Table(name = "users")
public class User {

    @Id
    @GeneratedValue
    private Integer id;

    private String login;
    private String password;

    @OneToOne(cascade = CascadeType.ALL)
    @JoinTable(name = "user_roles",
        joinColumns = {@JoinColumn(name = "user_id", referencedColumnName = "id")},
        inverseJoinColumns = {@JoinColumn(name = "role_id", referencedColumnName = "id")})
    private Role role;

    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }
    public String getLogin() { return login; }
    public void setLogin(String login) { this.login = login; }
    public String getPassword() { return password; }
    public void setPassword(String password) { this.password = password; }
    public Role getRole() { return role; }
    public void setRole(Role role) { this.role = role; }
}

And:

@Entity
@Table(name = "roles")
public class Role {

    @Id
    @GeneratedValue
    private Integer id;

    private String role;

    @OneToMany(cascade = CascadeType.ALL)
    @JoinTable(name = "user_roles",
        joinColumns = {@JoinColumn(name = "role_id", referencedColumnName = "id")},
        inverseJoinColumns = {@JoinColumn(name = "user_id", referencedColumnName = "id")})
    private Set<User> userRoles;

    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }
    public String getRole() { return role; }
    public void setRole(String role) { this.role = role; }
    public Set<User> getUserRoles() { return userRoles; }
    public void setUserRoles(Set<User> userRoles) { this.userRoles = userRoles; }
}

Each entity class requires a DAO and a service layer.
public interface UserDAO {
    public User getUser(String login);
}

And:

@Repository
public class UserDAOImpl implements UserDAO {

    @Autowired
    private SessionFactory sessionFactory;

    private Session openSession() {
        return sessionFactory.getCurrentSession();
    }

    public User getUser(String login) {
        Query query = openSession().createQuery("from User u where u.login = :login");
        query.setParameter("login", login);
        List<User> userList = query.list();
        if (userList.size() > 0)
            return userList.get(0);
        else
            return null;
    }
}

Respectively for the Role class:

public interface RoleDAO {
    public Role getRole(int id);
}

And:

@Repository
public class RoleDAOImpl implements RoleDAO {

    @Autowired
    private SessionFactory sessionFactory;

    private Session getCurrentSession() {
        return sessionFactory.getCurrentSession();
    }

    public Role getRole(int id) {
        return (Role) getCurrentSession().load(Role.class, id);
    }
}

The same pairs for the service layer:

public interface UserService {
    public User getUser(String login);
}

And:

@Service
@Transactional
public class UserServiceImpl implements UserService {

    @Autowired
    private UserDAO userDAO;

    public User getUser(String login) {
        return userDAO.getUser(login);
    }
}

Respectively for the Role class:

public interface RoleService {
    public Role getRole(int id);
}

And:

@Service
@Transactional
public class RoleServiceImpl implements RoleService {

    @Autowired
    private RoleDAO roleDAO;

    public Role getRole(int id) {
        return roleDAO.getRole(id);
    }
}

Everything above was just mechanical, routine code. Now let's work on the Spring Security part. In order to plug Spring Security into the project I have to create a CustomUserDetailsService class that implements the UserDetailsService interface.
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.authority.SimpleGrantedAuthority;
import org.springframework.security.core.userdetails.User;
import org.springframework.security.core.userdetails.UserDetails;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.core.userdetails.UsernameNotFoundException;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

import com.sprsec.dao.UserDAO;

@Service
@Transactional(readOnly = true)
public class CustomUserDetailsService implements UserDetailsService {

    @Autowired
    private UserDAO userDAO;

    public UserDetails loadUserByUsername(String login) throws UsernameNotFoundException {
        com.sprsec.model.User domainUser = userDAO.getUser(login);

        boolean enabled = true;
        boolean accountNonExpired = true;
        boolean credentialsNonExpired = true;
        boolean accountNonLocked = true;

        return new User(
            domainUser.getLogin(),
            domainUser.getPassword(),
            enabled,
            accountNonExpired,
            credentialsNonExpired,
            accountNonLocked,
            getAuthorities(domainUser.getRole().getId())
        );
    }

    public Collection<GrantedAuthority> getAuthorities(Integer role) {
        return getGrantedAuthorities(getRoles(role));
    }

    public List<String> getRoles(Integer role) {
        List<String> roles = new ArrayList<>();
        if (role.intValue() == 1) {
            roles.add("ROLE_MODERATOR");
            roles.add("ROLE_ADMIN");
        } else if (role.intValue() == 2) {
            roles.add("ROLE_MODERATOR");
        }
        return roles;
    }

    public static List<GrantedAuthority> getGrantedAuthorities(List<String> roles) {
        List<GrantedAuthority> authorities = new ArrayList<>();
        for (String role : roles) {
            authorities.add(new SimpleGrantedAuthority(role));
        }
        return authorities;
    }
}

The main purpose of this class is to map the application's User class to Spring Security's User class.
This is one of the killer features of Spring Security: this way you can adapt any kind of Spring MVC application to use the Security module.

Controllers and Views

One of the most frequent questions regarding Spring Security is how to create a custom login form. The answer is simple enough: create a JSP file with the form and specify its action attribute there. Most of the URL mapping depends on the spring-security.xml file:

...
<http auto-config="true">
  <intercept-url pattern="/sec/moderation.html" access="ROLE_MODERATOR"/>
  <intercept-url pattern="/admin/*" access="ROLE_ADMIN"/>
  <form-login login-page="/user-login.html"
              default-target-url="/success-login.html"
              authentication-failure-url="/error-login.html"/>
  <logout logout-success-url="/index.html"/>
</http>
<authentication-manager>
  <authentication-provider user-service-ref="customUserDetailsService">
    <password-encoder hash="plaintext"/>
  </authentication-provider>
</authentication-manager>
...

As you can see, I specified URLs for the login page, the default page after a successful login, and the error page for situations when the credentials are invalid. I also declared the URLs that require access permissions. And the most important thing is the declaration of the authentication-manager: through this, Spring Security will use the database to identify users and their roles.
Controllers:

@Controller
public class LinkNavigation {

    @RequestMapping(value = "/", method = RequestMethod.GET)
    public ModelAndView homePage() {
        return new ModelAndView("home");
    }

    @RequestMapping(value = "/index", method = RequestMethod.GET)
    public ModelAndView indexPage() {
        return new ModelAndView("home");
    }

    @RequestMapping(value = "/sec/moderation", method = RequestMethod.GET)
    public ModelAndView moderatorPage() {
        return new ModelAndView("moderation");
    }

    @RequestMapping(value = "/admin/first", method = RequestMethod.GET)
    public ModelAndView firstAdminPage() {
        return new ModelAndView("admin-first");
    }

    @RequestMapping(value = "/admin/second", method = RequestMethod.GET)
    public ModelAndView secondAdminPage() {
        return new ModelAndView("admin-second");
    }
}

And:

@Controller
public class SecurityNavigation {

    @RequestMapping(value = "/user-login", method = RequestMethod.GET)
    public ModelAndView loginForm() {
        return new ModelAndView("login-form");
    }

    @RequestMapping(value = "/error-login", method = RequestMethod.GET)
    public ModelAndView invalidLogin() {
        ModelAndView modelAndView = new ModelAndView("login-form");
        modelAndView.addObject("error", true);
        return modelAndView;
    }

    @RequestMapping(value = "/success-login", method = RequestMethod.GET)
    public ModelAndView successLogin() {
        return new ModelAndView("success-login");
    }
}

The views are available on GitHub. Pay attention to the @ImportResource("classpath:spring-security.xml") annotation on the WebAppConfig Java class.

Summary

I think this article will help you dive into Spring Security. I used Hibernate and MySQL here since this combination of technologies isn't often covered in other tutorials on the internet. You have probably noticed that I used some XML in the project; that's because there is currently no way to implement all of this using the annotation-based approach.

Reference: Spring MVC: Security with MySQL and Hibernate from our JCG partner Alexey Zvolinskiy at the Fruzenshtein's notes blog.

Template Method Pattern – Using Lambda Expressions, Default Methods

Template Method pattern is one of the 23 design patterns explained in the famous Design Patterns book by Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides. The intent of this pattern is stated as: "Define the skeleton of an algorithm in an operation, deferring some steps to subclasses. Template Method lets subclasses redefine certain steps of an algorithm without changing the algorithm's structure." To explain in simple terms, consider the following scenario: assume there is a workflow system in which 4 tasks have to be performed in a given order to successfully complete the workflow. Some of those 4 tasks can be customised by different workflow system implementations. The Template Method pattern can be applied to this scenario by encapsulating the workflow system into an abstract class with a few of the 4 tasks implemented, and leaving the implementation of the remaining tasks to the subclasses of the abstract class. So the above, when implemented:

/**
 * Abstract workflow system
 */
abstract class WorkflowManager2 {
    public void doTask1() {
        System.out.println("Doing Task1...");
    }

    public abstract void doTask2();

    public abstract void doTask3();

    public void doTask4() {
        System.out.println("Doing Task4...");
    }
}

/**
 * One of the extensions of the abstract workflow system
 */
class WorkflowManager2Impl1 extends WorkflowManager2 {
    @Override
    public void doTask2() {
        System.out.println("Doing Task2.1...");
    }

    @Override
    public void doTask3() {
        System.out.println("Doing Task3.1...");
    }
}

/**
 * The other extension of the abstract workflow system
 */
class WorkflowManager2Impl2 extends WorkflowManager2 {
    @Override
    public void doTask2() {
        System.out.println("Doing Task2.2...");
    }

    @Override
    public void doTask3() {
        System.out.println("Doing Task3.2...");
    }
}

Let me just go ahead and show how these workflow implementations are used:

public class TemplateMethodPattern {
    public static void main(String[] args) {
        initiateWorkFlow(new WorkflowManager2Impl1());
        initiateWorkFlow(new WorkflowManager2Impl2());
    }

    static void initiateWorkFlow(WorkflowManager2 workflowMgr) {
        System.out.println("Starting the workflow ... the old way");
        workflowMgr.doTask1();
        workflowMgr.doTask2();
        workflowMgr.doTask3();
        workflowMgr.doTask4();
    }
}

And the output would be:

Starting the workflow ... the old way
Doing Task1...
Doing Task2.1...
Doing Task3.1...
Doing Task4...
Starting the workflow ... the old way
Doing Task1...
Doing Task2.2...
Doing Task3.2...
Doing Task4...

So far so good. But the main intent of this post is not to create yet another blog post on the Template Method pattern, but to see how we can leverage Java 8 lambda expressions and default methods. I have already written before that only interfaces which have a single abstract method can be written as lambda expressions. What this translates to in this example is that WorkflowManager2 can only have one abstract/customizable task out of the 4 tasks. Restricting to one abstract method is a major restriction and may not be applicable in many real-world scenarios. I don't wish to reiterate the same old Template Method pattern examples; instead, my main intention in writing this is to show how lambda expressions and default methods can be leveraged in scenarios where you are dealing with interfaces that have a single abstract method. If you are left wondering what lambda expressions and default methods in Java mean, then please spend some time reading about lambda expressions and default methods before proceeding further.
Instead of an abstract class we will use an interface with default methods, so our workflow system would look like:

interface WorkflowManager {
    public default void doTask1() {
        System.out.println("Doing Task1...");
    }

    public void doTask2();

    public default void doTask3() {
        System.out.println("Doing Task3...");
    }

    public default void doTask4() {
        System.out.println("Doing Task4...");
    }
}

Now that we have the workflow system with a customisable Task2, we can go ahead and initiate some customised workflows using lambda expressions:

public class TemplateMethodPatternLambda {
    public static void main(String[] args) {
        /**
         * Using lambda expressions to create different
         * implementations of the abstract workflow
         */
        initiateWorkFlow(() -> System.out.println("Doing Task2.1..."));
        initiateWorkFlow(() -> System.out.println("Doing Task2.2..."));
        initiateWorkFlow(() -> System.out.println("Doing Task2.3..."));
    }

    static void initiateWorkFlow(WorkflowManager workflowMgr) {
        System.out.println("Starting the workflow ...");
        workflowMgr.doTask1();
        workflowMgr.doTask2();
        workflowMgr.doTask3();
        workflowMgr.doTask4();
    }
}

This is, in a small way, how lambda expressions can be leveraged in the Template Method pattern.

Reference: Template Method Pattern – Using Lambda Expressions, Default Methods from our JCG partner Mohamed Sanaulla at the Experiences Unlimited blog.

A simple application of Lambda Expressions in Java 8

I have been trying to fit lambda expressions into the code I write, and this simple example is a consequence of that. For those totally unaware of lambda expressions in Java, I would recommend reading this first before getting into this post. Now that you are familiar with lambda expressions (after reading the introductory post), let's go into the simple example which I thought was a good use of a lambda expression. Consider this scenario: a certain operation is surrounded by some pre-processing and some post-processing, and the operation to be executed can vary depending on the behaviour expected. The pre-processing code extracts the required parameters for the operation, and the post-processing does the necessary cleanup. Let us see how this can be done with the use of interfaces and their implementations via anonymous inner classes.

Using anonymous inner classes

An interface which has to be implemented to provide the required behavior:

interface OldPerformer {
    public void performTask(String id, int status);
}

And let's look at the method which performs the pre-processing, executes the required operation and then the post-processing:

public class PrePostDemo {
    static void performTask(String id, OldPerformer performer) {
        System.out.println("Pre-Processing...");
        System.out.println("Fetching the status for id: " + id);
        int status = 3; // some status value fetched
        performer.performTask(id, status);
        System.out.println("Post-processing...");
    }
}

We need to pass two things: an identifier to perform the pre-processing and an implementation of the operation, which can be done as shown below:

public class PrePostDemo {
    public static void main(String[] args) {
        // has to be declared final to be accessed within
        // the anonymous inner class
        final String outsideOfImpl = "Common Value";
        performTask("1234", new OldPerformer() {
            @Override
            public void performTask(String id, int status) {
                System.out.println("Finding data based on id...");
                System.out.println(outsideOfImpl);
                System.out.println("Asserting that the status matches");
            }
        });
        performTask("4567", new OldPerformer() {
            @Override
            public void performTask(String id, int status) {
                System.out.println("Finding data based on id...");
                System.out.println(outsideOfImpl);
                System.out.println("Update status of the data found");
            }
        });
    }
}

As seen above, variables declared outside of the anonymous inner class have to be declared final for them to be accessible in the methods of the anonymous inner class. The output of the above code would be:

Pre-Processing...
Fetching the status for id: 1234
Finding data based on id...
Common Value
Asserting that the status matches
Post-processing...
Pre-Processing...
Fetching the status for id: 4567
Finding data based on id...
Common Value
Update status of the data found
Post-processing...

Using a lambda expression

Let's look at how the above can be written using a lambda expression:

public class PrePostLambdaDemo {
    public static void main(String[] args) {
        // Need not be declared as final for use within a
        // lambda expression, but has to be effectively final.
        String outsideOfImpl = "Common Value";

        doSomeProcessing("123", (String id, int status) -> {
            System.out.println("Finding some data based on " + id);
            System.out.println(outsideOfImpl);
            System.out.println("Assert that the status is " + status);
        });

        doSomeProcessing("456", (String id, int status) -> {
            System.out.print("Finding data based on id: " + id);
            System.out.println(outsideOfImpl);
            System.out.println("And updating the status: " + status);
        });
    }

    static void doSomeProcessing(String id, Performer performer) {
        System.out.println("Pre-Processing...");
        System.out.println("Finding status for given id: " + id);
        int status = 2;
        performer.performTask(id, status);
        System.out.println("Post-processing...");
    }
}

interface Performer {
    public void performTask(String id, int status);
}

Apart from the interesting lambda expression syntax, note that the variable outside the scope of the lambda expression is not declared final. But it has to be effectively final, which means that the value of the variable outsideOfImpl shouldn't be modified once assigned. This is just another, cleaner way of using a lambda expression in place of an anonymous inner class.

A parting note: the scheduled release of JDK 8 has been pushed further into February 2014, and the complete schedule can be found here. I am using the Project Lambda build, which keeps getting updated each day, so feel free to let me know if something here doesn't work on the latest builds. I will try my best to keep updating the builds and trying out the samples posted here.

Another note: don't get overwhelmed by what's happening in Java 8; these features are already part of a lot of programming languages. I found that learning the syntax and approach of lambda expressions in Java has helped me understand and think functionally, and more specifically to appreciate Scala closures.

Reference: A simple application of Lambda Expressions in Java 8 from our JCG partner Mohamed Sanaulla at the Experiences Unlimited blog.

Publish/Subscribe Pattern with Apache Camel

Publish/Subscribe is a simple messaging pattern where a publisher sends messages to a channel without knowing who is going to receive them. It is then the responsibility of the channel to deliver a copy of the messages to each subscriber. This messaging model enables the creation of loosely coupled and scalable systems. It is a very common messaging pattern, and there are many ways to create a kind of pub-sub in Apache Camel. But bear in mind that they are all different and have different characteristics. From the simplest to the more complex, here is a list:

- Multicast – works only with a static list of subscribers, can deliver the message to subscribers in parallel, and stops or continues on exception if one of the subscribers fails.
- Recipient List – similar to multicast, but allows the subscribers to be defined at run time, for example in a message header.
- SEDA – this component provides asynchronous SEDA behaviour using a BlockingQueue. When the multipleConsumers option is set, it can be used for asynchronous pub-sub messaging. It also has options to block when full, set the queue size, or time out publishing if the message is not consumed in time.
- VM – same as SEDA, but works across multiple CamelContexts, as long as they are in the same JVM. It is a nice mechanism for sending messages between webapps in a web container or bundles in an OSGi container.
- Spring-redis – Redis has a pubsub feature which allows publishing messages to multiple receivers. It is possible to subscribe to a channel by name or using pattern matching. When pattern matching is used, the subscriber will receive messages from all the channels matching the pattern. Keep in mind that in this case it is possible to receive a message more than once, if multiple patterns match the channel where the message was sent.
- JMS (ActiveMQ) – that's probably the best-known way of doing pub-sub, including durable subscriptions. For a complete list of features check the ActiveMQ website.
- Amazon SNS/SQS – if you need a really scalable and reliable solution, SNS is the way to go. Subscribing an SQS queue to the topic turns it into a durable subscriber and allows polling the messages later. The important point to remember in this case is that it is not very fast and, most importantly, Amazon doesn't guarantee FIFO order for your messages.

There are also less popular Camel components which offer a publish-subscribe messaging model:

- websocket – uses the Eclipse Jetty server and can send messages to all clients which are currently connected.
- hazelcast – implements a work queue in order to support asynchronous SEDA architectures.
- guava-eventbus – an integration bridge between Camel and the Google Guava EventBus infrastructure.
- spring-event – provides access to Spring ApplicationEvent objects.
- eventadmin – for receiving OSGi events in an OSGi environment.
- xmpp – implements the XMPP (Jabber) transport. Posting a message in a chat room is also pub-sub ;)
- mqtt – for communicating with MQTT-compliant message brokers.
- amqp – supports the AMQP protocol using the client API of the Qpid project.
- javaspace – a transport for working with any JavaSpace-compliant implementation.

Can you name any other way of doing publish-subscribe?

Reference: Publish/Subscribe Pattern with Apache Camel from our JCG partner Bilgin Ibryam at the OFBIZian blog.
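Stripped of any broker or Camel component, the pattern itself fits in a few lines of plain Java (a sketch of the idea only, not how any of the components above are implemented; all names are made up):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

public class PubSubChannel {

    // channel name -> subscribers; the publisher never inspects this list,
    // which is what keeps publishers and subscribers loosely coupled.
    private final Map<String, List<Consumer<String>>> subscribers = new ConcurrentHashMap<>();

    public void subscribe(String channel, Consumer<String> handler) {
        subscribers.computeIfAbsent(channel, c -> new CopyOnWriteArrayList<>()).add(handler);
    }

    // Deliver a copy of the message to every subscriber of the channel.
    public void publish(String channel, String message) {
        subscribers.getOrDefault(channel, List.of()).forEach(h -> h.accept(message));
    }
}
```

Everything the list above adds on top of this core idea is about delivery guarantees: threading (SEDA), process boundaries (VM, JMS), durability (ActiveMQ, SNS/SQS), and ordering.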

Comments are for Losers

If software development is like driving a car, then comments are road signs along the way. Comments are purely informational and do NOT affect the final machine code. Imagine how much time you would waste driving in a city where road signs looked like this one. A good comment is one that reduces the development life cycle for the next developer that drives down the road. A bad comment is one that increases the development life cycle for any developer unfortunate enough to have to drive down that road. Sometimes that next unfortunate driver will be you, several years later!

Comments do not Necessarily Increase Development Speed

I was in university in 1985 (yup, I'm an old guy) and one of my professors presented a paper (which I have been unable to locate) about a study done in the 1970s. The study took a software program, introduced defects into it, and then asked several teams to find as many defects as they could. The interesting part of the study was that 50% of the teams had the comments completely removed from the source code. The result was that the teams without comments not only found more defects but also found them in less time. So unfortunately, comments can serve as weapons of mass distraction...

Bad comments

A bad comment is one that wastes your time and does not help you to drive your development faster. Let's go through the categories of really bad comments:

- Too many comments
- Excessive history comments
- Emotional and humorous comments

Too many comments are a clear case of where less is more. There are programs with so many comments that they obscure the code. Some of us have worked on programs where there were so many comments you could barely find the code! History comments can make some sense, but then again, isn't that what the version control comment is for? History comments are questionable when you have to page down multiple times just to get to the beginning of the source code.
If anything, history comments should be moved to the bottom of the file so that Ctrl-End actually takes you to the bottom of the modification history. We have all run across comments that are not relevant. Some comments are purely about the developer's instantaneous emotional and intellectual state, some are about how clever they are, and some are simply attempts at humor (don't quit your day job!). Check out some of these gems (more can be found here):

// I am not sure if we need this, but too scared to delete.

// When I wrote this, only God and I understood what I was doing
// Now, God only knows

// I am not responsible of this code.
// They made me write it, against my will.

// I have to find a better job

try {
    ...
} catch (SQLException ex) {
    // Basically, without saying too much, you're screwed. Royally and totally.
} catch (Exception ex) {
    // If you thought you were screwed before, boy have I news for you!!!
}

// Catching exceptions is for communists

// If you're reading this, that means you have been put in charge of my previous project.
// I am so, so sorry for you. God speed.

// if i ever see this again i'm going to start bringing guns to work

// You are not expected to understand this

Self-Documenting Code instead of Comments

We are practitioners of computer science, not computer art. We apply science to software by checking the functionality we desire (the requirements model) against the behavior of the program (the machine code model). When observations of the final program disagree with the requirements model, we have a defect, which leads us to change our machine code model. Of course, we don't alter the machine code model directly (at least most of us); we update the source code, which is the last easily modified model. Since comments are not compiled into the machine code, there is some logic to making sure that the source code model is self-documenting.
It is the only model that really counts! Self-documenting code requires that you choose good names for variables, classes, functions, and enumerated types. Self-documenting means that OTHER developers can understand what you have done. Good self-documenting code has the same characteristic as good comments: it decreases the time it takes to do development. Practically, your code is self-documenting when your peers say that it is, not when YOU say that it is. Peer-reviewed comments and code are the only way to make sure that code will lead to faster development cycles.

Comments gone Wild

Even if all the comments in a program are good (i.e. they reduce the development life cycle), they are subject to drift over time. The speed of software development makes it difficult to make sure that comments stay in alignment with the source code. Comments that are allowed to drift become road signs that are no longer relevant to drivers. Good comments go wild when the developer is so focused on getting a release out that he does not stop to maintain comments. Comments have gone wild when they become misaligned with the source code; you have to terminate them. No animals (or comments) were harmed in the writing of this blog.

Commented Code

Code gets commented out during a software release as we experiment with different designs or to help with debugging. What is really not clear is why code remains commented out before the final check-in of a software release. Over my career as a manager and trainer, I've asked developers why they comment out their code. The universal answer that I get is "just in case". Just in case what? At the end of a software release you have already established that you are not going to use your commented code, so why are you dragging it around? People hang on to commented code as if it is a "Get Out of Jail Free" card; it isn't. The reality is that commented code can be a big distraction.
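To make the distraction concrete, here is a small hypothetical example (the class and the "old promo logic" are invented for illustration): the commented-out lines force every reader to stop and decide whether they still matter, while the cleaned-up version says everything that needs saying.

```java
// Hypothetical example: dead code kept "just in case" clutters the method.
public class PriceCalculator {

    // Before: every reader must puzzle over whether the old promo logic matters.
    static double finalPriceCluttered(double base, double taxRate) {
        // double discount = base > 100 ? 0.1 : 0.0;  // old promo, kept "just in case"
        // base = base * (1 - discount);              // disabled before release 2.3?
        return base * (1 + taxRate);
    }

    // After: the dead lines are deleted; version control still remembers them.
    static double finalPrice(double base, double taxRate) {
        return base * (1 + taxRate);
    }
}
```

Both methods behave identically; deleting the commented lines loses nothing, because the history lives in version control.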
When you leave commented code in your source code, you are leaving a land mine for the next developer who walks through it. When the pressure is on to get defects fixed, developers will uncomment previously commented code to see if it will fix the problem. There is no substitute for understanding the code you are working on – you might get lucky when you reinstate commented code; in all likelihood it will blow up in your face.

Solutions

If your developers are not taking (or being given) enough time to put in good comments, then they should not write ANY comments. You will get more productivity because they will not waste time putting in bad comments that will slow everyone else down. Time spent on writing self-documenting code will help you and your successors reduce development life cycles. It is absolutely false to believe that you do not have time to write self-documenting code. If you are going to take on the hazards of writing comments, then they need to be peer reviewed to make sure that OTHER developers understand the code. Unless the code reviewer(s) understand all the comments, the code should not pass inspection. If you don't have a code review process, then you are only commenting the code for yourself. The key principle when writing comments is Non Nobis Solum (not for ourselves alone). When you run across a comment that sends you on a wild goose chase – fix it or delete it. If you are the new guy on the team and realize that the comments are wasting your time – get rid of them; your development speed will go up. Make no mistake, I am the biggest "Loser" of them all. I believe that I have made every mistake in the book at least once :-)   Reference: Comments are for Losers from our JCG partner Dalip Mahal at the Accelerated Development blog. ...

How to Integrate In App Purchase Billing in Android

Hello Friends! Today I am going to share a very useful blog about In App Purchase in Android. Google provides an In App Billing facility in Android. In App Purchase is a very easy and secure way to make payments online. Please follow my blog step by step:

Screen Shot

Create a new Project in Android. Create MainActivity.java class. Add activity_main.xml in your res/layout folder. Add Billing services and permission in Manifest.xml.

Do's

Create a signed apk for your application. Upload your apk on the Google Play store. Create products for your application. Wait 6-12 hours for the items to update on the store. Copy the key of your Google account and paste it into the BillingSecurity.java class at line number 135:

String base64EncodedPublicKey = "PUT YOUR PUBLIC KEY HERE";

Give Billing permissions in Manifest.xml. Add IMarketBillingService.java in the com.android.vending.billing package.

Don'ts

Don't use the emulator for testing; it does not support Billing Services. Don't use an unsigned apk for Billing services. Don't share your key with anyone.

My Code-

1) MainActivity.java

package com.manish.inapppurchase;

import android.app.Activity;
import android.content.Context;
import android.content.Intent;
import android.os.Bundle;
import android.os.Handler;
import android.util.Log;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;
import android.widget.Toast;

public class MainActivity extends Activity implements OnClickListener {
    Button btn1, btn2, btn3;
    private Context mContext = this;
    private static final String TAG = "Android BillingService";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        btn1 = (Button) findViewById(R.id.button1);
        btn2 = (Button) findViewById(R.id.button2);
        btn3 = (Button) findViewById(R.id.button3);
        btn1.setOnClickListener(this);
        btn2.setOnClickListener(this);
        btn3.setOnClickListener(this);
        startService(new Intent(mContext, BillingService.class));
        BillingHelper.setCompletedHandler(mTransactionHandler);
    }

    public Handler mTransactionHandler = new Handler() {
        public void handleMessage(android.os.Message msg) {
            Log.i(TAG, "Transaction complete");
            Log.i(TAG, "Transaction status: " + BillingHelper.latestPurchase.purchaseState);
            Log.i(TAG, "Item purchased is: " + BillingHelper.latestPurchase.productId);
            if (BillingHelper.latestPurchase.isPurchased()) {
                showItem();
            }
        };
    };

    @Override
    public void onClick(View v) {
        if (v == btn1) {
            if (BillingHelper.isBillingSupported()) {
                BillingHelper.requestPurchase(mContext, "android.test.purchased");
            } else {
                Log.i(TAG, "Can't purchase on this device");
                btn1.setEnabled(false); // XXX pressing the button before the service starts disables it when it shouldn't
            }
            Toast.makeText(this, "Shirt Button", Toast.LENGTH_SHORT).show();
        }
        if (v == btn2) {
            if (BillingHelper.isBillingSupported()) {
                BillingHelper.requestPurchase(mContext, "android.test.purchased");
            } else {
                Log.i(TAG, "Can't purchase on this device");
                btn2.setEnabled(false); // XXX pressing the button before the service starts disables it when it shouldn't
            }
            Toast.makeText(this, "TShirt Button", Toast.LENGTH_SHORT).show();
        }
        if (v == btn3) {
            if (BillingHelper.isBillingSupported()) {
                BillingHelper.requestPurchase(mContext, "android.test.purchased");
            } else {
                Log.i(TAG, "Can't purchase on this device");
                btn3.setEnabled(false); // XXX pressing the button before the service starts disables it when it shouldn't
            }
            Toast.makeText(this, "Denim Button", Toast.LENGTH_SHORT).show();
        }
    }

    private void showItem() {
        // purchaseableItem.setVisibility(View.VISIBLE);
    }

    @Override
    protected void onPause() {
        Log.i(TAG, "onPause()");
        super.onPause();
    }

    @Override
    protected void onDestroy() {
        BillingHelper.stopService();
        super.onDestroy();
    }
}

2) activity_main.xml

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:background="#0099CC"
    tools:context=".MainActivity" >

    <Button
        android:id="@+id/button1"
        android:layout_width="150dp"
        android:layout_height="35dp"
        android:layout_alignParentTop="true"
        android:layout_centerHorizontal="true"
        android:layout_marginTop="40dp"
        android:background="#FFFFFF"
        android:text="Shirt for 5.4$" />

    <Button
        android:id="@+id/button2"
        android:layout_width="150dp"
        android:layout_height="35dp"
        android:layout_below="@+id/button1"
        android:layout_centerHorizontal="true"
        android:layout_marginTop="10dp"
        android:background="#FFFFFF"
        android:text="Tshirt for 7.4$" />

    <Button
        android:id="@+id/button3"
        android:layout_width="150dp"
        android:layout_height="35dp"
        android:layout_below="@+id/button2"
        android:layout_centerHorizontal="true"
        android:layout_marginTop="10dp"
        android:background="#FFFFFF"
        android:text="Denim for 10.7$" />

</RelativeLayout>

3) manifest.xml

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.manish.inapppurchase"
    android:versionCode="1"
    android:versionName="1.0" >

    <uses-sdk
        android:minSdkVersion="7"
        android:targetSdkVersion="16" />

    <uses-permission android:name="com.android.vending.BILLING" />
    <uses-permission android:name="android.permission.INTERNET" />

    <application
        android:allowBackup="true"
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name"
        android:theme="@style/AppTheme" >
        <activity
            android:name="com.manish.inapppurchase.MainActivity"
            android:label="@string/app_name" >
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>

        <service android:name=".BillingService" />

        <receiver android:name=".BillingReceiver" >
            <intent-filter>
                <action android:name="com.android.vending.billing.IN_APP_NOTIFY" />
                <action android:name="com.android.vending.billing.RESPONSE_CODE" />
                <action android:name="com.android.vending.billing.PURCHASE_STATE_CHANGED" />
            </intent-filter>
        </receiver>
    </application>

</manifest>

4) Zip Code   Reference: How to Integrate In App Purchase Billing in Android from our JCG partner Manish Srivastava at the Android Hub 4 you blog. ...

Using Java WebSockets, JSR 356, and JSON mapped to POJO’s

So I have been playing around with Tyrus, the reference implementation of the JSR 356 WebSocket for Java spec. Because I was looking at test tooling, I was interested in running both the client and the server side in Java. So no HTML5 in this blog post, I am afraid. In this example we want to send JSON back and forth, and because I am old fashioned like that I want to be able to bind to a POJO object. I am going to use Jackson for this, so my maven file looks like this:

<dependencies>
    <dependency>
        <groupId>javax.websocket</groupId>
        <artifactId>javax.websocket-api</artifactId>
        <version>1.0-rc3</version>
    </dependency>
    <dependency>
        <groupId>org.glassfish.tyrus</groupId>
        <artifactId>tyrus-client</artifactId>
        <version>1.0-rc3</version>
    </dependency>
    <dependency>
        <groupId>org.glassfish.tyrus</groupId>
        <artifactId>tyrus-server</artifactId>
        <version>1.0-rc3</version>
    </dependency>
    <dependency>
        <groupId>org.glassfish.tyrus</groupId>
        <artifactId>tyrus-container-grizzly</artifactId>
        <version>1.0-rc3</version>
    </dependency>
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
        <version>2.2.0</version>
    </dependency>
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-annotations</artifactId>
        <version>2.2.0</version>
    </dependency>
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-core</artifactId>
        <version>2.2.0</version>
    </dependency>
</dependencies>

So the first thing we need to do is to define an implementation of the Encoder/Decoder interfaces to do this work for us. This is going to do some simple reflection to work out what the bean class is. As with JAX-WS, it is easier to put them on the same class. Note that we use the streaming version of the interface and are only handling text content.
(Ignoring the ability to send binary data for the moment.)

package websocket;

import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.IOException;
import java.io.Reader;
import java.io.Writer;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;

import javax.websocket.DecodeException;
import javax.websocket.Decoder;
import javax.websocket.EncodeException;
import javax.websocket.Encoder;
import javax.websocket.EndpointConfig;

public abstract class JSONCoder<T> implements Encoder.TextStream<T>, Decoder.TextStream<T> {

    private Class<T> _type;

    // From what I have read, ObjectMapper is not thread safe while being
    // configured, so keep one instance per thread
    private ThreadLocal<ObjectMapper> _mapper = new ThreadLocal<ObjectMapper>() {
        @Override
        protected ObjectMapper initialValue() {
            return new ObjectMapper();
        }
    };

    @Override
    public void init(EndpointConfig endpointConfig) {
        ParameterizedType $thisClass = (ParameterizedType) this.getClass().getGenericSuperclass();
        Type $T = $thisClass.getActualTypeArguments()[0];
        if ($T instanceof Class) {
            _type = (Class<T>) $T;
        } else if ($T instanceof ParameterizedType) {
            _type = (Class<T>) ((ParameterizedType) $T).getRawType();
        }
    }

    @Override
    public void encode(T object, Writer writer) throws EncodeException, IOException {
        _mapper.get().writeValue(writer, object);
    }

    @Override
    public T decode(Reader reader) throws DecodeException, IOException {
        return _mapper.get().readValue(reader, _type);
    }

    @Override
    public void destroy() {}
}

The bean class is really quite simple, with a static subclass of the Coder that we can use later.
package websocket;

public class EchoBean {

    public static class EchoBeanCode extends JSONCoder<EchoBean> {
    }

    private String _message;
    private String _reply;

    public EchoBean() {}

    public EchoBean(String _message) {
        super();
        this._message = _message;
    }

    public void setMessage(String _message) {
        this._message = _message;
    }

    public String getMessage() {
        return _message;
    }

    public void setReply(String _reply) {
        this._reply = _reply;
    }

    public String getReply() {
        return _reply;
    }
}

So now we need to implement our server endpoint. You can go one of two ways: either annotating a POJO or extending Endpoint. I am going with the first for the server and the second for the client. Really all this service does is post the message back to the client. Note the registration of the encoder and decoder – the same class in this case.

package websocket;

import java.io.IOException;

import javax.websocket.EncodeException;
import javax.websocket.EndpointConfig;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;
import static java.lang.System.out;

@ServerEndpoint(value = "/echo",
        encoders = {EchoBean.EchoBeanCode.class},
        decoders = {EchoBean.EchoBeanCode.class})
public class EchoBeanService {

    @OnMessage
    public void echo(EchoBean bean, Session peer) throws IOException, EncodeException {
        bean.setReply("Server says " + bean.getMessage());
        out.println("Sending message to client");
        peer.getBasicRemote().sendObject(bean);
    }

    @OnOpen
    public void onOpen(final Session session, EndpointConfig endpointConfig) {
        out.println("Server connected " + session + " " + endpointConfig);
    }
}

Let's look at a client bean, this time extending the standard Endpoint class and adding a specific listener for a message. In this case, when the message is received, the connection is simply closed to make our test case simple. In the real world managing this connection would obviously be more complicated.
package websocket;

import java.io.IOException;

import javax.websocket.ClientEndpoint;
import javax.websocket.CloseReason;
import javax.websocket.EncodeException;
import javax.websocket.Endpoint;
import javax.websocket.EndpointConfig;
import javax.websocket.MessageHandler;
import javax.websocket.Session;

import static java.lang.System.out;

@ClientEndpoint(encoders = {EchoBean.EchoBeanCode.class},
        decoders = {EchoBean.EchoBeanCode.class})
public class EchoBeanClient extends Endpoint {

    public void onOpen(final Session session, EndpointConfig endpointConfig) {

        out.println("Client Connection open " + session + " " + endpointConfig);

        // Add a listener to capture the returning event
        session.addMessageHandler(new MessageHandler.Whole<EchoBean>() {
            @Override
            public void onMessage(EchoBean bean) {
                out.println("Message from server : " + bean.getReply());
                out.println("Closing connection");
                try {
                    session.close(new CloseReason(CloseReason.CloseCodes.NORMAL_CLOSURE, "All fine"));
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        });

        // Once we are connected we can safely send our initial message to the server
        out.println("Sending message to server");
        try {
            EchoBean bean = new EchoBean("Hello");
            session.getBasicRemote().sendObject(bean);
        } catch (IOException e) {
            e.printStackTrace();
        } catch (EncodeException e) {
            e.printStackTrace();
        }
    }
}

Now running the WebSocket server standalone is really quite straightforward with Tyrus: you simply instantiate a Server and start it. Be aware this starts daemon threads, so if this is in a main method you need to make sure that you do something to keep the JVM alive.

import org.glassfish.tyrus.server.Server;

Server server = new Server("localhost", 8025, "/", EchoBeanService.class);
server.start();

So the client is relatively simple; but as we are doing the declarative method we need to explicitly register the encoders and decoders when registering the client class.
import java.net.URI;
import java.util.Arrays;
import java.util.concurrent.TimeUnit;

import javax.websocket.ClientEndpointConfig;
import javax.websocket.Decoder;
import javax.websocket.Encoder;
import javax.websocket.Session;

import org.glassfish.tyrus.client.ClientManager;

// Right now we have to create a client, which will send a message then close
// when it has received a reply
ClientManager client = ClientManager.createClient();
EchoBeanClient beanClient = new EchoBeanClient();

Session session = client.connectToServer(
    beanClient,
    ClientEndpointConfig.Builder.create()
        .encoders(Arrays.<Class<? extends Encoder>>asList(EchoBean.EchoBeanCode.class))
        .decoders(Arrays.<Class<? extends Decoder>>asList(EchoBean.EchoBeanCode.class))
        .build(),
    URI.create("ws://localhost:8025/echo"));

// Wait until things are closed down
while (session.isOpen()) {
    out.println("Waiting");
    TimeUnit.MILLISECONDS.sleep(10);
}

Now the output of this looks like the following:

Server connected SessionImpl{uri=/echo, id='e7739cc8-1ce5-4c26-ad5f-88a24c688799', endpoint=EndpointWrapper{endpointClass=null, endpoint=org.glassfish.tyrus.core.AnnotatedEndpoint@1ce5bc9, uri='/echo', contextPath='/'}} javax.websocket.server.DefaultServerEndpointConfig@ec120d
Waiting
Client Connection open SessionImpl{uri=ws://localhost:8025/echo, id='7428be2b-6f8a-4c40-a0c4-b1c8b22e1338', endpoint=EndpointWrapper{endpointClass=null, endpoint=websocket.EchoBeanClient@404c85, uri='ws://localhost:8025/echo', contextPath='ws://localhost:8025/echo'}} javax.websocket.DefaultClientEndpointConfig@15fdf14
Sending message to server
Waiting
Waiting
Waiting
Waiting
Waiting
Waiting
Waiting
Waiting
Waiting
Waiting
Sending message to client
Message from server : Server says Hello
Closing connection
Waiting

Interestingly, the first time this is run there is a pause. I suspect this is due to Jackson setting itself up, but I haven't had time to profile. I did find that this long delay only occurred on the first post – although obviously this is going to be slower than just passing plain text messages in general.
Whether the difference is significant to you depends on your application. It would be interesting to compare the performance of the plain text with a JSON stream API such as that provided by the new JSR, and of course the version that binds those values to a JSON POJO. Something for another day perhaps.   Reference: Using Java WebSockets, JSR 356, and JSON mapped to POJO’s from our JCG partner Gerard Davison at Gerard Davison’s blog. ...

What does Code Ownership do to Code?

In my last post, I talked about Code Ownership models, and why you might want to choose one code ownership model (strong, weak/custodial or collective) over another. Most of the arguments over code ownership focus on managing people, team dynamics, and the effects on delivery. But what about the longer term effects on the shape, structure and quality of code – does the ownership model make a difference? What are the long-term effects of letting everyone work on the same code, or of having 1 or 2 people working on the same pieces of code for a long time?

Collective Code Ownership and Code Quality

Over time, changes tend to concentrate in certain areas of code: in core logic, and in and behind interfaces (listen to Michael Feathers' fascinating talk Discovering Startling Things from your Version Control System). This means that the longer a system has been running, the more chances there are for people to touch the same code. Some interesting research work backs up what should be obvious: the people who understand the code the best are the people who work on it the most, and the people who know the code the best make fewer mistakes when changing it. In Don't Touch my Code!, researchers at Microsoft (BTW, the lead author Christian Bird is not a relative of mine, at least not a relative who I know) found that as more people touch the same piece of code, there are more opportunities for misunderstandings and more mistakes. Not surprisingly, people who hadn't worked on a piece of code before made more mistakes, and as the number of developers working on the same module increased, so did the chance of introducing bugs. Another study, Ownership and Experience in Fix-Inducing Code, tries to answer which is more important to code quality: "too many cooks spoil the broth", or "given enough eyeballs, all bugs are shallow"?
Do more people working on the same code lead to more bugs, or does having more people working on the code mean that there are more chances to find bugs early? This research team found that a programmer's specific experience with the code was the most important factor in determining code quality – code that is changed by the programmer who does most of the work on that code is of higher quality than code written by someone who doesn't normally work on the code, even if that someone is a senior developer who has worked on other parts of the code. And they found that the fewer the people working on a piece of code, the fewer the bugs that needed to be fixed. And a study on contributions to Linux reinforces that as the number of developers working on the same piece of code increases, the chance of bugs and security problems increases significantly: code touched by more than 9 developers is 16x more likely to have security vulnerabilities, and more vulnerabilities are introduced by developers who are making changes across many different pieces of code.

Long-term Effects of Ownership Approach on Code Structure

I've worked at shops where the same programmers have owned the same code for 3 or 4 or 5 or even 10 years, or sometimes even longer. Over that time, that programmer's biases, strengths, weaknesses and idiosyncrasies are all amplified, wearing deep grooves in the code. This can be a good thing, and a bad thing. The good thing is that with one person making most or all of the changes, internal consistency in any piece of code will be high – you can look at a piece of code written by that developer, and once you understand their approach and way of thinking, the patterns and idioms that they prefer, everything should be familiar and easy to follow.
Their style and approach might have changed over time as they learned and improved as a developer, but you can generally anticipate how the rest of the code will work, and you'll recognize what they are good at and what their blind spots are, what kind of mistakes they are prone to: as I mentioned in the earlier post, this makes code easier to review and easier to test, and so easier to find and fix bugs in. If a developer tends to write good, clean, tight code, and if they are diligent about refactoring and keeping the code clean and tight, then most of the code will be good, clean, tight and easy to follow. Of course it follows that if they tend to write sloppy, hard-to-understand, poorly structured code, then most of it will be sloppy, hard-to-understand and poorly structured. Then again, even this can be a good thing – at least bad code is isolated, and you know what you have to rewrite, instead of someone spreading a little bit of badness everywhere. When ownership changes – when the primary contributor leaves and a new owner takes over – the structure and style of the code will change as well. Maybe not right away, because a new owner usually takes some time to get used to the code before they put their stamp on it, but at some point they'll start adapting it – even unconsciously – to their own preferences and biases and ways of thinking, refactoring or rewriting it to suit them. If a lot of developers have worked on the same piece of code, they will introduce different ideas, techniques and approaches over time as they each do their part, as they refactor and rewrite things according to their own ideas of what is easy to understand and what isn't, what's right and wrong. They will each make different kinds of mistakes.
Even with clear and consistent shared team conventions and standards, differences and inconsistencies can build up over time, as people leave and new people join the team, creating dissonance and making it harder to follow a thought through the code, harder to test and review, and harder to hold on to the design.

Ownership Models and Refactoring

But as Michael Feathers has found through mining version control history, there is also a positive Ownership Effect on code as more people work on the same code. Over time, methods and classes tend to get bigger, because it's easier to add code to an existing method than to write a new method, and easier to add another method to an existing class than to create a new class. By correlating the number of developers who have touched a piece of code with method size, Feathers' research shows that as the number of developers working on a piece of code increases, the average method size tends to get smaller. In other words, having multiple people working on a code base encourages refactoring and simpler code, because people who aren't familiar with the code have to simplify it first in order to understand it. Feathers has also found that code behind APIs tends to be especially messy – because some interfaces are too hard to change, programmers are forced to come up with their own workarounds behind the scenes. Martin Fowler explains how this problem is made worse by strong code ownership, which inhibits refactoring and makes the code more internally rigid:

In strong code ownership, there's my code and your code. I can't change your code. If I want to change the name of one of my methods, and it's called by your code, I've got to get you to change the call into me before I can change my name. Or I've got to go through the whole deprecation business. Essentially any of my interfaces that you use become published in that situation, because I can't touch your code for any reason at all.
There's an intermediate ground that I call weak code ownership. With weak code ownership, there's my code and your code, but it is accepted that I could go in and change your code. There's a sense that you're still responsible for the overall quality of your code. If I were just going to change a method name in my code, I'd just do it. But on the other hand, if I were going to move some responsibilities between classes, I should at least let you know what I'm going to do before I do it, because it's your code. That's different than the collective code ownership model. Weak code ownership and refactoring are OK. Collective code ownership and refactoring are OK. But strong code ownership and refactoring are a right pain in the butt, because a lot of the refactorings you want to make you can't make. You can't make the refactorings, because you can't go into the calling code and make the necessary updates there. That's why strong code ownership doesn't go well with refactoring, but weak code ownership works fine with refactoring. (Design Principles and Code Ownership)

Ownership, Technical Debt or Deepening Insight

An individual owner has a higher tolerance for complexity, because after all it's their code and they know how it works and it's not really that hard to understand (not for them at least), so they don't need to constantly simplify it just to make a change or fix something. It's also easy for them to take short cuts, and even short cuts on short cuts. This can build up over time until you end up with a serious technical debt problem – one person is always working on that code, not because the problem is highly specialized, but because the code has reached a point where nobody else but Scotty can understand it and make it work. There's a flip side to spending more time on code too. The more time that you spend on the same problem, the deeper you can see into it.
As you return to the same code again and again, you can recognize patterns, and areas that you can improve, and compromises that you aren't willing to accept any more. As you learn more about the language and the frameworks, you can go back and put in simpler and safer ways of doing things. You can see what the design really should be, where the code needs to go, and take it there.

There's also opportunity cost of not sticking to certain areas. Focusing on a problem allows you to create better solutions. Specifically, it allows you to create a vision of what needs to be done, work towards that vision and constantly revise where necessary… If you're jumping from problem to problem, you're more likely to create an inferior solution. You'll solve problems, but you'll be creating higher maintenance costs for the project in the long term.
Jay Fields, Taking a Second Look at Collective Code Ownership

So far I've found that the only way for a team to take on really big problems is by breaking the problems up and letting different people own different parts of the solution. This means taking on problems and costs in the short term and the long term, trading off quality and productivity against flexibility and consistency – not only flexibility and consistency in how the team works, but in the code itself. What I've also learned is that whether you have a team of people who each own a piece of the system, or a more open custodian environment, or even if everyone is working everywhere all of the time, you can't let people do this work completely on their own. It's critical to have people working together, whether you are pairing in XP or doing regular egoless code reviews. To help people work on code that they've never seen before – or to help long-time owners recognize their blind spots. To mentor and to share new ideas and techniques. To keep people from falling into bad habits. To keep control over complexity.
To reinforce consistency – across the code base or inside a piece of code.   Reference: What does Code Ownership do to Code? from our JCG partner Jim Bird at the Building Real Software blog. ...
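The "deprecation business" Fowler mentions can be sketched as follows (OrderService and the method names are hypothetical, invented for illustration): under strong ownership you cannot update callers living in someone else's code, so renaming a method means keeping the old name alive as a deprecated forwarder.

```java
// Hypothetical sketch of a rename under strong code ownership.
public class OrderService {

    /**
     * @deprecated Renamed to {@link #calculateTotal(double[])}; this forwarder
     * stays because callers in code we don't own still use the old name.
     */
    @Deprecated
    public double calcTot(double[] prices) {
        return calculateTotal(prices);
    }

    public double calculateTotal(double[] prices) {
        double total = 0.0;
        for (double p : prices) {
            total += p;
        }
        return total;
    }
}
```

Under weak or collective ownership you would simply rename the method and fix every call site in the same change.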

Fallacies of massively distributed computing

In the last few years, we have seen the advent of highly distributed systems. Systems that have clusters with lots of servers are no longer the sole realm of the Googles and Facebooks of the world, and we are beginning to see multi-node and big data systems in enterprises. For example, I don't think a company such as Nice (the company I work for) would have released a Hadoop-based analytics platform and solutions – something we did just last week – 5-6 years ago. So now that large(r) clusters are more prevalent, I thought it would be a good time to reflect on the fallacies of distributed computing, whether they are still relevant, and whether they should be changed. If you don't know about the fallacies, you can see the list and read the article I wrote about them at the link mentioned above. In a few words, I'd just say that these are statements, originally drafted by Peter Deutsch, Tom Lyon and others in 1991-2, about failed assumptions we are tempted to make when working on distributed systems, which turn out to be fallacies and cost us dearly. So the fallacies help keep in mind that distributed systems are different, and they do seem to hold, even after the 20 years that have passed. I think, however, that when working with larger clusters we should also consider the following 3 as fallacies we're likely to assume:

Instances are free
Instances have identities
Map/Reduce is a panacea

Instances are free

A lot of the new technologies of the big-data and noSQL era bring with them the promise of massive scalability. If you see a performance problem, you can just (a famous lullaby word) add another server. In most cases that is even true: you can indeed add more servers and get better performance. What these technologies don't tell you is that instances have costs. More instances mean increased TCO, starting from management effort (monitoring, configuring etc.), as well as operations costs: either for the hardware, the rented space and electricity in a hosted solution, or the usage by hours in a cloud environment.
So from the development side of the fence the solution is easy: add more hardware. In reality it is sometimes better to make the effort and optimize your code or design. Just the other week we got a more than 10-fold improvement in query performance by removing query parts that were no longer needed after a change in the data flow of the system; that was far cheaper than adding the 2-3 more nodes it would have taken to achieve the same result.

Instances have identities

I remember, sometime in the Jurassic age, when I set up a network for the first time (a Novell NetWare 3.11, if you must ask), it had just one server. Naturally that server was treated with a lot of respect. It had a printer connected, it had a name, nobody could touch it but me. One server to rule all them clients. Moving on, I had server farms, so a list of random names began to be a problem and we started to use themes like gods or single malts ("can you reboot the Macallan please"). Anyway, that's all nice and dandy, and if you are starting small with a (potentially) big-data project you might be tempted to do something similar. If you are tempted: don't. When you have tens of servers (and naturally it is even worse when you have hundreds or thousands) you no longer care about the individual server. You want to look at the world as pools of server types: a pool of data nodes in your Hadoop cluster, a pool of application servers, a pool of servers running configuration X and another with configuration Y. You'd need tools like Abiquo and/or Chef and/or Ansible or similar products to manage this mess. But again, you won't care much about the XYZ2011 server, and even if it runs Tomcat today, tomorrow it may make more sense to make it part of the Cassandra cluster. What matter are the roles in the pools of resources, and whether the pool sizes are enough to handle the capacity needed.

Map/Reduce is a panacea

Hadoop seems to be the VHS of large clusters.
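Since map/reduce is the paradigm in question, here is a minimal in-memory sketch of its shape: mappers emit key/value pairs where the data lives, a shuffle groups the pairs by key, and reducers merge each group. This is a plain-Python toy that mimics the structure of the model, not the Hadoop API:

```python
from collections import defaultdict
from functools import reduce

# Toy map/reduce word count. Real frameworks run mappers on the nodes
# holding the data and move only the shuffled key/value pairs; this toy
# just mimics the three phases in one process.

def mapper(shard):
    # map phase: emit (key, value) pairs from one shard of the data set
    return [(word, 1) for word in shard.split()]

def shuffle(pairs):
    # shuffle phase: group all values by key, as the framework does
    # between map and reduce
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reducer(key, values):
    # reduce phase: merge/aggregate all values emitted for one key
    return key, reduce(lambda a, b: a + b, values)

shards = ["big data big clusters", "big data"]  # pretend each string is a shard
mapped = [pair for shard in shards for pair in mapper(shard)]
counts = dict(reducer(k, v) for k, v in shuffle(mapped).items())
print(counts)  # {'big': 3, 'data': 2, 'clusters': 1}
```

The caveat discussed in this section shows up immediately in the sketch: an iterative algorithm (say, a graph computation) would have to re-run the whole map-shuffle-reduce pipeline once per iteration, which is exactly where map/reduce gets expensive.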
Hadoop might not be the ultimate solution, but it does seem to be the one that gets the most traction: a lot of vendors, old (IBM, Microsoft, Oracle etc.) and new (Hortonworks, Cloudera, Pivotal etc.), offer Hadoop distros, and many other products offer Hadoop adaptors (MongoDB, Cassandra, Vertica etc.). And Hadoop, well, Hadoop is about the distributed file system and map/reduce. Map/Reduce, which was introduced in 2004 by Google, is an efficient algorithm for going over a large distributed data set without moving the data (map) and then producing aggregated or merged results (reduce). Map/Reduce is great, and it is a very useful paradigm applicable to a large set of problems. However, it shouldn't be the only tool in your tool set, as map/reduce is inefficient when you need to do multiple iterations over the data (e.g. graph processing) or when you have to make many incremental updates to the data but don't need to touch all of it. There's also the matter of ad-hoc reports (which I'll probably blog about separately). Google solved these problems in Pregel, Percolator and Dremel in 2009/2010, and now the rest of the world is playing catch-up as it did with map/reduce a few years ago. Even if the alternative solutions are not mature yet, you should keep in mind that they are coming.

Instances are free; instances have identities; and map/reduce is a panacea: these are my suggested additions to the fallacies of distributed computing when talking about large clusters. I'd be happy to hear what you think and/or whether there are other things to keep in mind that I've missed.

Reference: Fallacies of massively distributed computing from our JCG partner Arnon Rotem-Gal-Oz at the Cirrus Minor blog.
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd