What's New Here?


Choosing Between Vaadin and JSF

With the recent release of PrimeFaces 3.0, JSF finally reaches an unprecedented level of maturity and utility that puts it face to face with other popular Rich Internet Application (RIA) options, such as Google Web Toolkit (GWT), ExtJS, Vaadin, Flex and others. This open source project has also proved to be very active and on a constant growth path. I have been working with JSF + PrimeFaces for a year now, since I started the project JUG Management, a web application conceived to manage user groups or communities focused on a certain domain of knowledge, whose members are constantly sharing information and attending social and educational events. JSF is a standard Java framework for building user interfaces for web applications, with well-established development patterns, built upon the experience of many preexisting Java web development frameworks. It is component-based and renders the user interface on the server side, sending pre-processed web content such as HTML, JavaScript and CSS to clients (web browsers). My experience with this technology is openly available on java.net.

Meanwhile, I had the opportunity to create a Proof of Concept (PoC) to compare JSF and Vaadin, in order to help developers and architects decide between them. Vaadin is a web application framework for RIAs that offers a robust server-side architecture, in contrast to other JavaScript libraries and browser-plugin-based solutions. The business logic runs on the server while a richer user interface, based on Google Web Toolkit (GWT), is fully rendered by the web browser, ensuring a fluent user experience. The result of the PoC was surprisingly interesting :) It ended up proposing both technologies instead of eliminating one of them. Exploring available books, articles, blogs and websites, I found out that although each technology can implement all sorts of web applications, each has special characteristics, optimized for certain kinds of applications. 
In practical terms, if we find that JSF is better for a certain kind of application, that is because it would take more time and code to do the same with Vaadin. The inverse is also true. To understand that, we have to visit two fundamental concepts that have a direct impact on web applications:

Context of Use considers the user who will operate the application, the environment in which the user is inserted, and the device the user is interacting with.

Information Architecture considers the user of the application again, the business domain in which he or she works, and the content managed in that domain.

Notice in the figure below that the user is always the center of attention in both concepts. That's because we are evaluating two frameworks that have a direct impact on the way users interact with web applications. Visiting the concepts above, we have:

Environment: Some applications are available for internal purposes only, such as those on the intranet, while other applications are used by external users, such as the company website. Users of internal applications are more homogeneous and limited in number, which means the UI can be a bit more complex to allow faster user interactions. That explains the fight between Microsoft Office and Google Docs: the latter is not yet fully accepted in the office environment because it has fewer features than Microsoft Office, which is, on the other hand, more complex and more expensive. However, with a limited number of users and a larger number of features, it is acceptable to bear some additional costs for training sessions in order to profit from the productivity features. A company website, in contrast, targets heterogeneous users in unlimited environments. It is not possible to train all these people, so simpler user interfaces with short, self-explanatory interactions are desirable. 
Considering the environment, we would recommend Vaadin for homogeneous users in limited environments and JSF for heterogeneous users in unlimited environments.

Device: Different devices demand multiple sets of UI components, designed to look great from small to large screens. Fortunately, both frameworks have components supporting the full range of screen sizes, from regular desktops to mobile devices. The problem is that different devices bring different connectivity capabilities, and the application should be ready to deal with low bandwidth and reduced transfer rates. In this case, Vaadin seems more suitable for multiple devices, as long as the variety of devices is not too extensive, because the user interface is rendered locally, using JavaScript, and it has richer Ajax support to optimize the exchange of application data with the server.

Business Domain: In principle, good quality UI frameworks such as JSF and Vaadin can implement any business domain. The question is how experienced the team is with the technology, or how short the learning curve to master it is. Business is about timing, and the technology that offers the best productivity will certainly win. If your team has previous experience with Swing, then Vaadin is the natural choice. If the previous experience was more web-oriented, manipulating HTML, CSS and scripts, then JSF is recommended.

Content: Content is a very relevant criterion for choosing between Vaadin and JSF. In case the application needs to deal with voluminous content of any type, such as long textual descriptions, videos, presentations, animations, graphics, charts and so on, then JSF is recommended over Vaadin, because JSF uses a web content rendering strategy that profits from all content types supported by web browsers, without the need for additional plugins or tags. Support for multiple content types is only available in Vaadin through the use of plugins, which must be individually assessed before adoption. 
User: Last, but not least, we have the user, who is the most important criterion when choosing a UI framework. We would emphasize two aspects:

The user population: the larger the target population, the greater the concerns about application compatibility. The application must deal with several versions and types of browsers, operating systems, and computers with different memory capacities and monitor resolutions, all without failures or security issues. For larger populations, the most appropriate technology is the most compatible one in a cross-platform environment, which is the case for JSF, since it uses a balanced combination of HTML, JavaScript and CSS, while Vaadin relies only on JavaScript and CSS. Smaller populations, however, would profit more from Vaadin, because cross-browser compatibility is, and will remain, very hard work done by Vaadin's development team behind the scenes.

The user's tasks: if the application is intensively operated by users, it is expected to have more user tasks implemented. On the other hand, if the application is rarely used, or has short intervals of intensive use, then there is a lower concentration of user tasks. According to the PoC, Vaadin is the technology that provides the best support for delivering user tasks with richer interaction, because of its fast visual response. JSF is less optimized where user interaction is concerned.

In conclusion, instead of discarding one of these frameworks, consider keeping both on the shelf of the company's architectural choices, but visit the criteria above to make sure you are using the right technology to implement the expected solution. A simple way to apply those criteria would be to assign weights to each criterion according to the project's characteristics, set which technology is appropriate for each criterion, and sum the weights for each technology. The highest total elects the technology to be used in the project. 
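That scoring procedure can be sketched in a few lines of Java. The criteria names, weights and per-criterion choices below are made-up illustrations for one hypothetical project, not values taken from the PoC:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FrameworkSelector {

    // Sum each criterion's weight into the bucket of the framework chosen for it,
    // then elect the framework with the highest total.
    public static String select(Map<String, Integer> weights,
                                Map<String, String> choicePerCriterion) {
        int jsf = 0, vaadin = 0;
        for (Map.Entry<String, Integer> e : weights.entrySet()) {
            String winner = choicePerCriterion.get(e.getKey());
            if ("JSF".equals(winner)) jsf += e.getValue();
            else if ("Vaadin".equals(winner)) vaadin += e.getValue();
        }
        return jsf >= vaadin ? "JSF" : "Vaadin";
    }

    public static void main(String[] args) {
        Map<String, Integer> weights = new LinkedHashMap<>();
        weights.put("environment", 3); // hypothetical project-specific weights
        weights.put("device", 1);
        weights.put("content", 2);

        Map<String, String> choice = new LinkedHashMap<>();
        choice.put("environment", "Vaadin"); // homogeneous intranet users
        choice.put("device", "Vaadin");      // few device types
        choice.put("content", "JSF");        // lots of mixed media

        System.out.println(select(weights, choice)); // Vaadin (4 vs 2)
    }
}
```

In a real evaluation, the weights would come from a discussion with the project stakeholders rather than being hard-coded.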
Reference: Choosing Between Vaadin and JSF from our JCG partner Hildeberto Mendonca at Hildeberto's Blog....

20 Database Design Best Practices

1. Use well defined and consistent names for tables and columns (e.g. School, StudentCourse, CourseID...).
2. Use singular for table names (i.e. StudentCourse instead of StudentCourses). A table represents a collection of entities; there is no need for plural names.
3. Don't use spaces in table names. Otherwise you will have to use '{', '[', '"' etc. characters to refer to tables (i.e. for accessing the table Student Course you'll have to write "Student Course"; StudentCourse is much better).
4. Don't use unnecessary prefixes or suffixes for table names (i.e. use School instead of TblSchool, SchoolTable etc.).
5. Keep passwords encrypted for security. Decrypt them in the application when required.
6. Use integer id fields for all tables. Even if an id is not required for the time being, it may be required in the future (for association tables, indexing...).
7. Choose columns with the integer data type (or its variants) for indexing. Indexing varchar columns will cause performance problems.
8. Use bit fields for boolean values. Using integer or varchar consumes storage unnecessarily. Also, start those column names with "Is".
9. Provide authentication for database access. Don't give the admin role to every user.
10. Avoid "select *" queries unless really needed. Use "select [required_columns_list]" for better performance.
11. Use an ORM (object relational mapping) framework (e.g. Hibernate, iBatis...) if the application code is big enough. Performance issues of ORM frameworks can be handled with detailed configuration parameters.
12. Partition big and unused/rarely used tables/table parts to different physical storages for better query performance.
13. For big, sensitive and mission-critical database systems, use disaster recovery and security services like failover clustering, auto backups, replication etc.
14. Use constraints (foreign key, check, not null...) for data integrity. Don't give the whole control to application code.
15. Lack of database documentation is evil. Document your database design with ER schemas and instructions. Also write comment lines for your triggers, stored procedures and other scripts.
16. Use indexes for frequently used queries on big tables. Analyser tools can be used to determine where indexes should be defined. For queries retrieving a range of rows, clustered indexes are usually better. For point queries, non-clustered indexes are usually better.
17. The database server and the web server should be placed on different machines. This provides more security (attackers can't access data directly), and server CPU and memory performance will be better because of the reduced request count and process usage.
18. Image and blob columns should not be defined in frequently queried tables, because of performance issues. Such data should be placed in separate tables, with a pointer to them in the queried tables.
19. Use normalization as required, to optimize performance. Under-normalization will cause excessive repetition of data; over-normalization will cause excessive joins across too many tables. Both will hurt performance.
20. Spend as much time as required on database modeling and design. Otherwise the "saved" design time will cost 10/100/1000 times as much in maintenance and re-design time.

Reference: 20 Database Design Best Practices from our JCG partner Cagdas Basaraner at the CodeBuild blog....
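As an aside to the password tip in the list above, here is a minimal Java sketch using the JDK's built-in PBKDF2 support. Note that this variant stores a salted one-way hash rather than reversibly encrypted passwords, which is the commonly preferred approach nowadays; the iteration count and key length are illustrative, not a recommendation:

```java
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class Passwords {

    // Derive a one-way, salted hash from the password. The same password and
    // salt always produce the same hash, so login is a re-hash and compare.
    public static byte[] hash(char[] password, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, 10_000, 256);
        SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        return f.generateSecret(spec).getEncoded();
    }

    // A fresh random salt per user defeats precomputed (rainbow table) attacks.
    public static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = newSalt();
        byte[] stored = hash("s3cret".toCharArray(), salt);
        // At login, re-hash the supplied password with the stored salt and compare.
        System.out.println(Arrays.equals(stored, hash("s3cret".toCharArray(), salt))); // true
        System.out.println(Arrays.equals(stored, hash("wrong".toCharArray(), salt)));  // false
    }
}
```

The salt and hash would be stored in the user table; the plaintext password never needs to be recoverable.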

Another aspect of coupling in Object Oriented paradigm

I had previously written a post related to coupling and cohesion here, and that was more of a basic definition of both terms. In this post I would like to throw some light on the tight dependency on the type of the component in use. Generally we aim to design classes such that they interact via interfaces or, more generally put, via an API. Suppose we use interfaces (in the generic sense of a contract without an implementation, not specifically the interface keyword in Java or C#); this alone is not enough, since we still need to provide some implementation of the interface that is actually consumed by the client classes. Before going into details, let me pick an example (in Java): I would like to design a Reader which helps client classes get information from any source specified, be it the file system or the web. So the interface would be:

interface Reader {
    String read();

    /**
     * Get the source to read from
     */
    String getSource();
}

I would want the user to seamlessly use the API for reading a file from the file system or a document from the web. The next step would be to create implementations to read from the file system and from the web.

class FileSystemReader implements Reader {
    private String source;

    public FileSystemReader(String source) {
        this.source = source;
    }

    @Override
    public String getSource() {
        return this.source;
    }

    @Override
    public String read() {
        // Read from the source.
        // The source is a file in the file system.
        return null;
    }
}

class HttpReader implements Reader {
    private String source;

    public HttpReader(String source) {
        this.source = source;
    }

    @Override
    public String getSource() {
        return this.source;
    }

    @Override
    public String read() {
        // Read from the source.
        // The source is a document on the web.
        return null;
    }
}

One way of using these interfaces and implementations is to let the client classes decide which implementation to instantiate based on the format of the source. 
So it would be something like:

class Client {
    /**
     * This is the consumer of the Reader API
     * @param source Source to read from
     */
    public void performSomeOperation(String source) {
        Reader myReader = null;
        if (source.contains("http://")) {
            // it's a web document, create an HttpReader
            myReader = new HttpReader(source);
        } else {
            myReader = new FileSystemReader(source);
        }
        System.out.println(myReader.read());
    }
}

All looks good: the classes interact with each other via the API. But you might feel that something's not right. Then comes another requirement whereby there is a third source to read from, and you would have to make a change in every place where such a creation is being done. If you miss the change somewhere, you end up with broken code; see how fragile your code has become. Apart from that, your client class knows that HttpReader is used for this and FileSystemReader for that, so you are giving away type-related information about the instance you are using, and the client code thereby becomes tightly coupled to this type information. This approach can also break the Open/Closed Principle, because you end up editing the class each time a new implementation of the interface is added. So there should be some way to shield this creation of instances of the different implementations of the interface from the user of these interfaces. Yes, there are ways, and I know by now you must have been waiting to unleash the Factory Method pattern. 
So how can the above code be modified to use a factory?

/**
 * Factory to get the instance of a Reader implementation
 */
class ReaderFactory {
    public static Reader getReader(String source) {
        if (source.contains("http://")) {
            return new HttpReader(source);
        // } else if (some other condition) {
        //     return new SomeOtherReader(source);
        } else {
            return new FileSystemReader(source);
        }
    }
}

class Client {
    /**
     * This is the consumer of the Reader API
     * @param source Source to read from
     */
    public void performSomeOperation(String source) {
        Reader myReader = ReaderFactory.getReader(source);
        System.out.println(myReader.read());
    }
}

See how simple the client code has become: no type-related noise. The user does not need to know what type of instance is being used, and hence stays less coupled to the type. The factory method takes care of deciding which implementation to return to the client code, based on the pattern in the source string. This way we end up with code that is less coupled in terms of exposed type information. And when a new requirement arrives for a new reader for a new source, you know where you have to make the change, and the change will be in only one place. You can see that your code is less fragile, and you have also eliminated unwanted redundancy. One thing to keep in mind is that encapsulation is not only about data hiding but also about hiding type-related information from the user. Reference: Another aspect of coupling in Object Oriented paradigm from our JCG partner Mohamed Sanaulla at the Experiences Unlimited blog....
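Pushing the Open/Closed point one step further, the if/else chain inside the factory can itself be replaced by a registry of constructors, so that adding a new source type becomes a one-line registration instead of an edit to the factory. This is a hypothetical sketch, not code from the original post: the ReaderRegistry name and the string-returning stub readers are mine, introduced only to keep the example self-contained:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

public class ReaderRegistry {

    interface Reader { String read(); }

    // Maps a source prefix to a constructor for the matching Reader.
    private final Map<String, Function<String, Reader>> registry = new LinkedHashMap<>();
    private final Function<String, Reader> fallback;

    public ReaderRegistry(Function<String, Reader> fallback) {
        this.fallback = fallback;
    }

    public void register(String prefix, Function<String, Reader> ctor) {
        registry.put(prefix, ctor);
    }

    // Pick the first registered prefix that matches; otherwise use the fallback.
    public Reader readerFor(String source) {
        for (Map.Entry<String, Function<String, Reader>> e : registry.entrySet()) {
            if (source.startsWith(e.getKey())) {
                return e.getValue().apply(source);
            }
        }
        return fallback.apply(source);
    }

    public static void main(String[] args) {
        // Stub readers that just echo their origin, standing in for real implementations.
        ReaderRegistry factory = new ReaderRegistry(src -> () -> "file:" + src);
        factory.register("http://", src -> () -> "http:" + src);
        factory.register("ftp://", src -> () -> "ftp:" + src); // new source: one line, no other edits
        System.out.println(factory.readerFor("http://example.org").read());
    }
}
```

The trade-off is a little indirection: the mapping from source pattern to implementation now lives in registration code rather than in a visible if/else chain.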

Spring 3, Spring Web Services 2 & LDAP Security

This year started on a good note, another one of those “the deadline won’t change” / “skip all the red tape” / “Wild West” type of projects in which I got to figure out and implement some functionality using some relatively new libraries and tech for a change. Well, Spring 3 ain’t new, but in the Java 5, WebLogic 10(.01), Spring 2.5.6 slow corporate kind of world it is all relative. Due to general time constraints I am not including too much “fluff” in this post, just the nitty gritty of creating and securing a Spring 3, Spring WS 2 web service using multiple XSDs and LDAP security.

The Code: The Service Endpoint: ExampleServiceEndpoint. This is the class that will be exposed as a web service using the configuration later in the post.

package javaitzen.spring.ws;

import org.springframework.ws.server.endpoint.annotation.Endpoint;
import org.springframework.ws.server.endpoint.annotation.PayloadRoot;
import org.springframework.ws.server.endpoint.annotation.RequestPayload;
import org.springframework.ws.server.endpoint.annotation.ResponsePayload;

import javax.annotation.Resource;

@Endpoint
public class ExampleServiceEndpoint {

    private static final String NAMESPACE_URI = "http://www.briandupreez.net";

    /**
     * Autowire a POJO to handle the business logic
    @Resource(name = "businessComponent")
    private ComponentInterface businessComponent;
     */

    public ExampleServiceEndpoint() {
        System.out.println(">> javaitzen.spring.ws.ExampleServiceEndpoint loaded.");
    }

    @PayloadRoot(localPart = "ProcessExample1Request", namespace = NAMESPACE_URI + "/example1")
    @ResponsePayload
    public Example1Response processExample1Request(@RequestPayload final Example1 request) {
        System.out.println(">> process example request1 ran.");
        return new Example1Response();
    }

    @PayloadRoot(localPart = "ProcessExample2Request", namespace = NAMESPACE_URI + "/example2")
    @ResponsePayload
    public Example2Response processExample2Request(@RequestPayload final Example2 request) {
        System.out.println(">> process example request2 ran.");
        return new Example2Response();
    }
}

The Code: CustomValidationCallbackHandler. This was my bit of custom code, written to extend the AbstractCallbackHandler, allowing us to use LDAP. As per the comments in the CallbackHandler below, it's probably a good idea to have a cache manager, something like Hazelcast or Ehcache, to cache authenticated users, depending on security / performance considerations. The digest validator below can just be used directly from the Sun library; I just wanted to see how it worked.

package javaitzen.spring.ws;

import com.sun.org.apache.xml.internal.security.exceptions.Base64DecodingException;
import com.sun.xml.wss.impl.callback.PasswordValidationCallback;
import com.sun.xml.wss.impl.misc.Base64;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.security.authentication.AuthenticationManager;
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.core.Authentication;
import org.springframework.util.Assert;
import org.springframework.ws.soap.security.callback.AbstractCallbackHandler;

import javax.security.auth.callback.Callback;
import javax.security.auth.callback.UnsupportedCallbackException;
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.security.MessageDigest;
import java.util.Properties;

public class CustomValidationCallbackHandler extends AbstractCallbackHandler implements InitializingBean {

    private Properties users = new Properties();
    private AuthenticationManager ldapAuthenticationManager;

    @Override
    protected void handleInternal(final Callback callback) throws IOException, UnsupportedCallbackException {
        if (callback instanceof PasswordValidationCallback) {
            final PasswordValidationCallback passwordCallback = (PasswordValidationCallback) callback;
            if (passwordCallback.getRequest() instanceof PasswordValidationCallback.DigestPasswordRequest) {
                final PasswordValidationCallback.DigestPasswordRequest digestPasswordRequest =
                        (PasswordValidationCallback.DigestPasswordRequest) passwordCallback.getRequest();
                final String password = users.getProperty(digestPasswordRequest.getUsername());
                digestPasswordRequest.setPassword(password);
                passwordCallback.setValidator(new CustomDigestPasswordValidator());
            }
            if (passwordCallback.getRequest() instanceof PasswordValidationCallback.PlainTextPasswordRequest) {
                passwordCallback.setValidator(new LDAPPlainTextPasswordValidator());
            }
        } else {
            throw new UnsupportedCallbackException(callback);
        }
    }

    /**
     * Digest Validator.
     * This code is directly from the Sun class, I was just curious how it worked.
     */
    private class CustomDigestPasswordValidator implements PasswordValidationCallback.PasswordValidator {
        public boolean validate(final PasswordValidationCallback.Request request)
                throws PasswordValidationCallback.PasswordValidationException {

            final PasswordValidationCallback.DigestPasswordRequest req =
                    (PasswordValidationCallback.DigestPasswordRequest) request;
            final String passwd = req.getPassword();
            final String nonce = req.getNonce();
            final String created = req.getCreated();
            final String passwordDigest = req.getDigest();
            final String username = req.getUsername();

            if (null == passwd) return false;
            byte[] decodedNonce = null;
            if (null != nonce) {
                try {
                    decodedNonce = Base64.decode(nonce);
                } catch (final Base64DecodingException bde) {
                    throw new PasswordValidationCallback.PasswordValidationException(bde);
                }
            }
            String utf8String = "";
            if (created != null) {
                utf8String += created;
            }
            utf8String += passwd;
            final byte[] utf8Bytes;
            try {
                utf8Bytes = utf8String.getBytes("utf-8");
            } catch (final UnsupportedEncodingException uee) {
                throw new PasswordValidationCallback.PasswordValidationException(uee);
            }

            final byte[] bytesToHash;
            if (decodedNonce != null) {
                bytesToHash = new byte[utf8Bytes.length + decodedNonce.length];
                for (int i = 0; i < decodedNonce.length; i++)
                    bytesToHash[i] = decodedNonce[i];
                for (int i = decodedNonce.length; i < utf8Bytes.length + decodedNonce.length; i++)
                    bytesToHash[i] = utf8Bytes[i - decodedNonce.length];
            } else {
                bytesToHash = utf8Bytes;
            }
            final byte[] hash;
            try {
                final MessageDigest sha = MessageDigest.getInstance("SHA-1");
                hash = sha.digest(bytesToHash);
            } catch (final Exception e) {
                throw new PasswordValidationCallback.PasswordValidationException(
                        "Password Digest could not be created" + e);
            }
            return (passwordDigest.equals(Base64.encode(hash)));
        }
    }

    /**
     * LDAP Plain Text validator.
     */
    private class LDAPPlainTextPasswordValidator implements PasswordValidationCallback.PasswordValidator {

        /**
         * Validate the callback against the injected LDAP server.
         * Probably a good idea to have a cache manager - ehcache / hazelcast injected to cache authenticated users.
         *
         * @param request the callback request
         * @return true if login successful
         * @throws PasswordValidationCallback.PasswordValidationException
         */
        public boolean validate(final PasswordValidationCallback.Request request)
                throws PasswordValidationCallback.PasswordValidationException {
            final PasswordValidationCallback.PlainTextPasswordRequest plainTextPasswordRequest =
                    (PasswordValidationCallback.PlainTextPasswordRequest) request;
            final String username = plainTextPasswordRequest.getUsername();

            final Authentication userPassAuth =
                    new UsernamePasswordAuthenticationToken(username, plainTextPasswordRequest.getPassword());
            final Authentication authentication = ldapAuthenticationManager.authenticate(userPassAuth);

            return authentication.isAuthenticated();
        }
    }

    /**
     * Assert users.
     *
     * @throws Exception error
     */
    public void afterPropertiesSet() throws Exception {
        Assert.notNull(users, "Users is required.");
        Assert.notNull(this.ldapAuthenticationManager, "A LDAP Authentication manager is required.");
    }

    /**
     * Sets the users to validate against. Property names are usernames, property values are passwords.
     *
     * @param users the users
     */
    public void setUsers(final Properties users) {
        this.users = users;
    }

    /**
     * Set the authentication manager.
     *
     * @param ldapAuthenticationManager the provider
     */
    public void setLdapAuthenticationManager(final AuthenticationManager ldapAuthenticationManager) {
        this.ldapAuthenticationManager = ldapAuthenticationManager;
    }
}

The service config: the configuration for the Endpoint, CallbackHandler and the LDAP authentication manager.

The Application Context – Server Side:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns="http://www.springframework.org/schema/beans"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:sws="http://www.springframework.org/schema/web-services"
       xmlns:s="http://www.springframework.org/schema/security"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
            http://www.springframework.org/schema/beans/spring-beans.xsd
            http://www.springframework.org/schema/web-services
            http://www.springframework.org/schema/web-services/web-services-2.0.xsd
            http://www.springframework.org/schema/context
            http://www.springframework.org/schema/context/spring-context.xsd
            http://www.springframework.org/schema/security
            http://www.springframework.org/schema/security/spring-security-3.0.xsd">

    <sws:annotation-driven/>
    <context:component-scan base-package="javaitzen.spring.ws"/>

    <sws:dynamic-wsdl id="exampleService"
                      portTypeName="javaitzen.spring.ws.ExampleServiceEndpoint"
                      locationUri="/exampleService/"
                      targetNamespace="http://www.briandupreez.net/exampleService">
        <sws:xsd location="classpath:/xsd/Example1Request.xsd"/>
        <sws:xsd location="classpath:/xsd/Example1Response.xsd"/>
        <sws:xsd location="classpath:/xsd/Example2Request.xsd"/>
        <sws:xsd location="classpath:/xsd/Example2Response.xsd"/>
    </sws:dynamic-wsdl>

    <sws:interceptors>
        <bean id="validatingInterceptor"
              class="org.springframework.ws.soap.server.endpoint.interceptor.PayloadValidatingInterceptor">
            <property name="schema" value="classpath:/xsd/Example1Request.xsd"/>
            <property name="validateRequest" value="true"/>
            <property name="validateResponse"
value="true"/>
        </bean>
        <bean id="loggingInterceptor"
              class="org.springframework.ws.server.endpoint.interceptor.PayloadLoggingInterceptor"/>
        <bean class="org.springframework.ws.soap.security.xwss.XwsSecurityInterceptor">
            <property name="policyConfiguration" value="/WEB-INF/securityPolicy.xml"/>
            <property name="callbackHandlers">
                <list>
                    <ref bean="callbackHandler"/>
                </list>
            </property>
        </bean>
    </sws:interceptors>

    <bean id="callbackHandler" class="javaitzen.spring.ws.CustomValidationCallbackHandler">
        <property name="ldapAuthenticationManager" ref="authManager"/>
    </bean>

    <s:authentication-manager alias="authManager">
        <s:ldap-authentication-provider user-search-filter="(uid={0})"
                                        user-search-base="ou=users"
                                        group-role-attribute="cn"
                                        role-prefix="ROLE_">
        </s:ldap-authentication-provider>
    </s:authentication-manager>

    <!-- Example... (in-memory Apache LDAP service) -->
    <s:ldap-server id="contextSource" root="o=example" ldif="classpath:example.ldif"/>

    <!-- If you want to connect to a real LDAP server it would look more like:
    <s:ldap-server id="contextSource"
                   url="ldap://localhost:7001/o=example"
                   manager-dn="uid=admin,ou=system"
                   manager-password="secret">
    </s:ldap-server> -->

    <bean id="marshallingPayloadMethodProcessor"
          class="org.springframework.ws.server.endpoint.adapter.method.MarshallingPayloadMethodProcessor">
        <constructor-arg ref="serviceMarshaller"/>
        <constructor-arg ref="serviceMarshaller"/>
    </bean>

    <bean id="defaultMethodEndpointAdapter"
          class="org.springframework.ws.server.endpoint.adapter.DefaultMethodEndpointAdapter">
        <property name="methodArgumentResolvers">
            <list>
                <ref bean="marshallingPayloadMethodProcessor"/>
            </list>
        </property>
        <property name="methodReturnValueHandlers">
            <list>
                <ref bean="marshallingPayloadMethodProcessor"/>
            </list>
        </property>
    </bean>

    <bean id="serviceMarshaller" class="org.springframework.oxm.jaxb.Jaxb2Marshaller">
        <property name="classesToBeBound">
            <list>
                <value>javaitzen.spring.ws.Example1</value>
                <value>javaitzen.spring.ws.Example1Response</value>
                <value>javaitzen.spring.ws.Example2</value>
                <value>javaitzen.spring.ws.Example2Response</value>
            </list>
        </property>
        <property name="marshallerProperties">
            <map>
                <entry key="jaxb.formatted.output">
                    <value type="java.lang.Boolean">true</value>
                </entry>
            </map>
        </property>
    </bean>
</beans>

The Security Context – Server Side:

<xwss:SecurityConfiguration xmlns:xwss="http://java.sun.com/xml/ns/xwss/config">
    <xwss:RequireTimestamp maxClockSkew="60" timestampFreshnessLimit="300"/>
    <!-- Expect plain text tokens from the client -->
    <xwss:RequireUsernameToken passwordDigestRequired="false" nonceRequired="false"/>
    <xwss:Timestamp/>
    <!-- server side reply token -->
    <xwss:UsernameToken name="server" password="server1" digestPassword="false" useNonce="false"/>
</xwss:SecurityConfiguration>

The Web XML: nothing really special here, just the Spring WS MessageDispatcherServlet.

<servlet>
    <servlet-name>spring-ws</servlet-name>
    <servlet-class>org.springframework.ws.transport.http.MessageDispatcherServlet</servlet-class>
    <init-param>
        <param-name>transformWsdlLocations</param-name>
        <param-value>true</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>spring-ws</servlet-name>
    <url-pattern>/*</url-pattern>
</servlet-mapping>

The client config: to test or use the service you'll need the following.

The Application Context – Client Side Test:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
            http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="messageFactory" class="org.springframework.ws.soap.saaj.SaajSoapMessageFactory"/>

    <bean id="webServiceTemplate" class="org.springframework.ws.client.core.WebServiceTemplate">
        <constructor-arg ref="messageFactory"/>
        <property name="marshaller" ref="serviceMarshaller"/>
        <property name="unmarshaller" ref="serviceMarshaller"/>
        <property name="defaultUri" value="http://localhost:7001/example/spring-ws/exampleService"/>
        <property name="interceptors">
            <list>
                <ref local="xwsSecurityInterceptor"/>
            </list>
        </property>
    </bean>

    <bean id="xwsSecurityInterceptor"
          class="org.springframework.ws.soap.security.xwss.XwsSecurityInterceptor">
        <property name="policyConfiguration" value="testSecurityPolicy.xml"/>
        <property name="callbackHandlers">
            <list>
                <ref bean="callbackHandler"/>
            </list>
        </property>
    </bean>

    <!-- As a client, the username and password in the server's reply token must match the ones configured here! -->
    <!-- A simple callback handler to configure users and passwords with an in-memory Properties object. -->
    <bean id="callbackHandler"
          class="org.springframework.ws.soap.security.xwss.callback.SimplePasswordValidationCallbackHandler">
        <property name="users">
            <props>
                <prop key="server">server1</prop>
            </props>
        </property>
    </bean>

    <bean id="serviceMarshaller" class="org.springframework.oxm.jaxb.Jaxb2Marshaller">
        <property name="classesToBeBound">
            <list>
                <value>javaitzen.spring.ws.Example1</value>
                <value>javaitzen.spring.ws.Example1Response</value>
                <value>javaitzen.spring.ws.Example2</value>
                <value>javaitzen.spring.ws.Example2Response</value>
            </list>
        </property>
        <property name="marshallerProperties">
            <map>
                <entry key="jaxb.formatted.output">
                    <value type="java.lang.Boolean">true</value>
                </entry>
            </map>
        </property>
    </bean>
</beans>

The Security Context – Client Side:

<xwss:SecurityConfiguration xmlns:xwss="http://java.sun.com/xml/ns/xwss/config">
    <xwss:RequireTimestamp maxClockSkew="60" timestampFreshnessLimit="300"/>
    <!-- Expect a plain text reply from the server -->
    <xwss:RequireUsernameToken passwordDigestRequired="false" nonceRequired="false"/>
    <xwss:Timestamp/>
    <!-- Client sending to server -->
    <xwss:UsernameToken name="example" password="pass" digestPassword="false" useNonce="false"/>
</xwss:SecurityConfiguration>

As usual with Java, there can be a couple of little nuances when it comes to jars and versions, so below is the relevant part of the pom I used. 
The Dependencies (with spring-version set to 3.0.6.RELEASE and spring-ws-version set to 2.0.2.RELEASE):

org.apache.directory.server : apacheds-all : 1.5.5 (jar, compile)
org.springframework.ws : spring-ws-core : ${spring-ws-version}
org.springframework : spring-webmvc : ${spring-version}
org.springframework : spring-web : ${spring-version}
org.springframework : spring-context : ${spring-version}
org.springframework : spring-core : ${spring-version}
org.springframework : spring-beans : ${spring-version}
org.springframework : spring-oxm : ${spring-version}
org.springframework.ws : spring-ws-security : ${spring-ws-version}
org.springframework.security : spring-security-core : ${spring-version}
org.springframework.security : spring-security-ldap : ${spring-version}
org.springframework.ldap : spring-ldap-core : 1.3.0.RELEASE
org.apache.ws.security : wss4j : 1.5.12
com.sun.xml.wss : xws-security : 3.0
org.apache.ws.commons.schema : XmlSchema : 1.4.2

Reference: Spring 3, Spring Web Services 2 & LDAP Security from our JCG partner Brian Du Preez at the Zen in the art of IT blog....
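A side note on the two attributes used on RequireTimestamp above: maxClockSkew is the tolerated clock drift between client and server, and timestampFreshnessLimit bounds how old a message may be before it is rejected as a potential replay. Below is a minimal plain-Java sketch of that acceptance rule; this is my reading of the timestamp semantics for illustration, not code taken from XWSS.

```java
public class TimestampCheck {
    // Accept a message created at 'created' when evaluated at 'now' (both epoch seconds),
    // given maxClockSkew and timestampFreshnessLimit in seconds, as in the xwss config.
    static boolean accept(long created, long now, long maxClockSkew, long freshnessLimit) {
        // A created time slightly in the future is tolerated up to the allowed skew...
        if (created > now + maxClockSkew) {
            return false;
        }
        // ...and a message older than the freshness limit (plus skew) is rejected as stale.
        return now - created <= freshnessLimit + maxClockSkew;
    }

    public static void main(String[] args) {
        long now = 1000000L;
        System.out.println(accept(now - 100, now, 60, 300)); // within the window -> true
        System.out.println(accept(now - 500, now, 60, 300)); // stale -> false
    }
}
```

With the values from the configuration (60 and 300), a message may therefore be at most 360 seconds old from the server's point of view.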

Scala: Working with Predicates

I love me some Scala. Actually, since it’s now my day job, I love it all the time. It combines the concise expressiveness that I prized in Python with a rich library base (thanks, Java) and the compiler checking that I have come to depend upon in a statically typed language. I don’t care what some people say. I recognize that the language is not without its flaws. One could say that there are a few missing language extensions, particularly around predicates. What do I mean by that? Is there not implicit support baked into the language such that they generalize any A => Boolean? Certainly. However, I have a problem when I see methods like List‘s ::filter and ::filterNot. The former makes sense; the latter highlights the absence of fundamental building blocks, which can be seen directly in the name. That is, we’re missing a “Not” helper predicate function:

case class Not[A](func: A => Boolean) extends (A => Boolean){
  def apply(arg0: A) = !func(arg0)
}

If it were that simple a fix, and if that were all that was missing, then it would be easy to suggest and have put into the next version of Scala. Of course we’d also need to have 22 versions of “Not”, one for each of the 22 versions of Function, but that’s a debate for another day. Suffice to say, Scala needs explicit predicate support. It needs more than just a “Not”; it needs easy to read and maintain logic combinators, and it needs support for the basic building blocks that can be used to form higher order predicate logic. Using other accepted Predicates libraries would not give us the power and flexibility needed.

Adding Predicate Expressions

That’s exactly what I did with my Predicates library. One of the goals of this small library was to add some simple syntactic support for composing predicate functions in a descriptive and concise manner. Specifically I wanted to be able to say “greater than 4 but less than 10” or “greater than zero or even but not both” in almost plain English.
I write expressions equivalent to that all the time with ::filter and ::exists statements:

myList.filter(x => x > 4 && x < 10)

For small phrases, it’s not that difficult. The only extra boilerplate that’s added is the designation “x =>” to indicate that we’re forming an anonymous function. Unfortunately, if I want to reuse, extend or maintain that logic I have to use even more boilerplate. Sometimes, if the logic is severe enough, I need to splice it into several methods which might or might not be attached to traits/class hierarchies. While good coding style, this added verbosity leaves a bad taste in my mouth. What I’d really like to do is have operators which apply to the expressions themselves and not to the evaluation of the expressions. The result of these operators would be functions themselves, preserving the composable nature we first started with. To say this another way: an “or” which turns two predicate objects into a third, distinct predicate object that represents a logical or between the first two predicates. As long as each of the precursor objects was built upon an immutable, referentially transparent foundation, the resulting compound predicate expression would be safe to use in any environment. This is what was added to each Predicate variant within the Predicates library. The Predicate member functions work as factory methods to generate new Predicates based upon the current Predicate and a Predicate argument. While similar in concept to composition between functions, there is no guarantee that each composed Predicate is even evaluated.
There are 22 Predicate variants, much akin to how Scala chose to have 22 Function variants, each equipped with the following methods:

and    => pred1(…) && pred2(…)
andNot => pred1(…) && !pred2(…)
nand   => !pred1(…) || !pred2(…)
or     => pred1(…) || pred2(…)
orNot  => pred1(…) || !pred2(…)
nor    => !(pred1(…) || pred2(…))
xor    => if(pred1(…)) !pred2(…) else pred2(…)
xnor   => if(pred1(…)) pred2(…) else !pred2(…)

And as I said before, each of these functions returns another Predicate (which is really just another function). In practice, using these member functions looks something like this:

case class LessThan(x: Int) extends Function[Int,Boolean]{
  def apply(arg: Int) = arg < x
}
case class Modulo(x: Int, group: Int) extends Function[Int,Boolean]{
  def apply(arg: Int) = (arg % x) == group
}
case class GreaterThanEqual(x: Int) extends Function[Int,Boolean]{
  def apply(arg: Int) = arg >= x
}

val myList = List(1,2,3,4,5,6,7,8,9)
myList.filter( LessThan(7) and GreaterThanEqual(4) )
myList.filter( Modulo(4,2) or Modulo(3,0) or Modulo(5,1) )

with Predicates being able to be chained together to form more complicated logical expressions.

Using Implicit Conversions to Avoid Pollution

In object oriented programming, if I had some difficult logic which I wanted to pass around or call, associated with a single class from a particular hierarchy, I could either add it to a companion class which adhered to the single responsibility philosophy or tack it onto the object itself. The latter was generally discouraged unless it needed access to private state or we were using delegation. That said, if several functions were needed, the companion class’ interface might grow and become a helper class (and boy, did some people love to grow them). As the libraries and code base matured, combining predicate expressions became a hideously complex, dangerous and blame ridden process. In short, the code often became a maintenance nightmare.
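The combinator table above is not Scala-specific; the same factory-method idea can be sketched in plain Java on top of java.util.function.Predicate. This is a sketch of the concept only, not code from the Predicates library:

```java
import java.util.function.Predicate;

public class Combinators {
    // Each combinator builds a new predicate from two existing ones,
    // mirroring the table: and, andNot, nand, or, orNot, nor, xor, xnor.
    static <A> Predicate<A> and(Predicate<A> p1, Predicate<A> p2)    { return a -> p1.test(a) && p2.test(a); }
    static <A> Predicate<A> andNot(Predicate<A> p1, Predicate<A> p2) { return a -> p1.test(a) && !p2.test(a); }
    static <A> Predicate<A> nand(Predicate<A> p1, Predicate<A> p2)   { return a -> !p1.test(a) || !p2.test(a); }
    static <A> Predicate<A> or(Predicate<A> p1, Predicate<A> p2)     { return a -> p1.test(a) || p2.test(a); }
    static <A> Predicate<A> orNot(Predicate<A> p1, Predicate<A> p2)  { return a -> p1.test(a) || !p2.test(a); }
    static <A> Predicate<A> nor(Predicate<A> p1, Predicate<A> p2)    { return a -> !(p1.test(a) || p2.test(a)); }
    static <A> Predicate<A> xor(Predicate<A> p1, Predicate<A> p2)    { return a -> p1.test(a) ? !p2.test(a) : p2.test(a); }
    static <A> Predicate<A> xnor(Predicate<A> p1, Predicate<A> p2)   { return a -> p1.test(a) == p2.test(a); }

    public static void main(String[] args) {
        Predicate<Integer> lessThan7 = x -> x < 7;
        Predicate<Integer> gte4 = x -> x >= 4;
        // "greater than or equal to 4 but less than 7": 5 qualifies, 9 does not.
        System.out.println(and(lessThan7, gte4).test(5)); // true
        System.out.println(and(lessThan7, gte4).test(9)); // false
    }
}
```

Note that, exactly as described above, the combined predicate is itself just another function, and short-circuiting means the second predicate is not always evaluated.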
I want to state for the record that this wasn’t an innate problem of imperative or object oriented programming, but rather of how people were allowed to program in it. While OO design has the strategy pattern, it is only as good as it is enforced. My implementation of Predicates, yielding to a somewhat imperative flair (the factory methods are instance methods), does not protect against misuse. Some people argue that Scala isn’t functional enough, that it doesn’t enforce immutability, and in some ways this is true. It’s an unfortunate side-effect (love puns) of being backwards compatible with Java. I wanted to avoid the kinds of problems I faced previously with a strictly OO code base in as general a way as possible. The implicit conversion hid the transformed class behind a restricted interface, à la an adapter pattern, much like Scala does with anonymous functions. I reasoned that whatever crud might be added to a class would be hidden by this interface and thus would not pollute the predicate. Add to this the ability to compose functions to create different types of predicates from an initial predicate, and we gained a rather large leg up on bad code production. Functional composition has got to be one of the best things Scala stole from functional programming.

What Else?

There was only one other thing to add to the “predicates” portion of this library: an “is” function. The idea for this function was stolen from Data.Function.Predicate of Haskell. At first I created all 22 versions with the same exact signature as Haskell’s “is”, but then I realized Scala’s eager evaluation caused a type mismatch that couldn’t easily be overcome without added boilerplate. Since “is” was designed to reduce boilerplate while at the same time increasing readability, the simple solution was to create an implicit conversion to an anonymous class with a single “is” method accepting a predicate.
Thus written, it could be used as follows:

myStringList.filter(_.length is LessThan(0))

which is very readable and maps an anonymous function of type A => B to A => Boolean. The downside is that it creates a new object at each invocation.

Future Work

Conditional functions are hard to design well, yet at the same time are the bedrock of computational logic gates. Partial Functions can be used to create predicated logic, but in a manner that is non-transparent to the outside observer. There’s an ::orElse function for a reason (a good one too) which is used more for case coverage than for case completeness. In fact, the existence of the ::lift member function showcases that a “catch all” logic path is not required, unlike the standard “if-else” statement. Hence, PartialFunction is not a good choice for predicated applications. After I fleshed out some simple logic composition functions to work with Predicates, I wanted to add a structure for composing more complicated predicated expressions. That is, a function which included a predicate to control flow which was both composable and extendable. Adding in conditional support for predicated application such that a Predicate expression controlled the program flow:

case class ApplyEither[A,B](pred: Predicate[A], thatTrue: A => B, thatFalse: A => B) extends (A => B){
  def apply(arg0: A) = if(pred(arg0)) thatTrue(arg0) else thatFalse(arg0)
}

was easy following a very simple imperative model. Expanding upon that to composition:

case class ComposeEither[A,B,C](pred: Predicate[B], that: A => B, thatTrue: B => C, thatFalse: B => C) extends (A => C){
  def apply(arg0: A) = {
    val out = that(arg0)
    if(pred(out)) thatTrue(out) else thatFalse(out)
  }
}

also proved to be easy.
It was so easy, I wrote more scripts to generate the code for 22 versions of an “ApplyIf,” “ApplyEither,” “ComposeIf,” “ComposeEither,” “AndThenIf,” and “AndThenEither.” Then I expanded on the code I had written so that they all extended the same trait, thus allowing one to be used within another. There was only one big problem with it all: it created an inflexible structure that couldn’t be traversed easily without expanding the interface of the various predicated function classes. The question “what are all the values down all potential paths?” required a new method. The question “what function did I use?” required yet another. And so on and so on, until the interface of every class began to look like the dreaded helper class. This was a classic example of the expression problem. The right approach, in hindsight, was to create a tree-like structure to express computation tree logic: something that held the arrangement of the functions and predicates and was accompanied by a distinctly separate set of functions to traverse that tree. I say in hindsight because I first created all the classes and then deleted them after I started feeling the pain of all the different questions I couldn’t answer without tacking on yet another method. This is something coming in the future. Personally I’d like to wait for a proper implementation of an HList that doesn’t suffer from type erasure or require experimental compiler flags, but in the meantime Miles Sabin has already proved it can be done with his incredible library Shapeless. Now all I need to do is wait for the compiler changes it requires to go mainstream. Reference: Scala: Working with Predicates from our JCG partner Owein Reese at the Statically Typed blog....

Launching and Debugging Tomcat from Eclipse without complex plugins

Modern IDEs like Eclipse provide various plugins to ease web development. However, I believe that starting Tomcat as a “normal” Java application still provides the best debugging experience. Most of the time, this is because these tools launch Tomcat or any other servlet container as an external process and then attach a remote debugger to it. While you’re still able to set breakpoints and inspect variables, other features like hot code replacement don’t work that well. Therefore I prefer to start my Tomcat just like any other Java application from within Eclipse. Here’s how it works. This article addresses experienced Eclipse users. You should already know how to create projects, change their build path and how to run classes. If you need any help, feel free to leave a comment or contact me. We’ll add Tomcat as an additional Eclipse project, so that paths and all remain platform independent. (I even keep this project in our SVN so that everybody works with the same setup.)

Step 1 – Create a new Java project named “Tomcat7”.
Step 2 – Remove the “src” source folder.
Step 3 – Download Tomcat (Core version) and unzip it into our newly created project. This should now look something like this:
Step 4 – If you haven’t already, create a new Test project which contains your sources (servlets, jsp pages, jsf pages…). Make sure you add the required libraries to the build path of the project.
Step 5.1 – Create a run configuration. Select our Test project as base and set org.apache.catalina.startup.Bootstrap as main class.
Step 5.2 – Optionally specify larger heap settings as VM arguments. Important: Select the “Tomcat” project as working directory (click on the “Workspace” button below the entry field).
Step 5.3 – Add bootstrap.jar and tomcat-juli.jar from the Tomcat7/bin directory as bootstrap classpath. Add everything in Tomcat7/lib as user entries. Make sure the Test project and all other classpath entries (i.e. maven dependencies) are below those.

Now you can “Apply” and start Tomcat by hitting “Debug”. After a few seconds (check the console output) you can go to http://localhost:8080/examples/ and check out the examples provided by Tomcat.

Step 6 – Add Demo-Servlet – Go to our Test project, add a new package called “demo” and a new servlet called “TestServlet”. Be creative with some test output – like I was…
Step 7 – Change web.xml – Go to the web.xml of the examples context and add our servlet (as shown in the image). Below all servlets you also have to add a servlet-mapping (not shown in the image below). It looks like this:

<servlet-mapping>
    <servlet-name>test</servlet-name>
    <url-pattern>/demo/test</url-pattern>
</servlet-mapping>

Hit save and restart Tomcat. You should now see your debug output by surfing to http://localhost:8080/examples/demo/test – you can now set breakpoints, change the output (thanks to hot code replacement) and do all the other fun stuff you do in other debugging sessions. Hint: Keeping your JSP/JSF files as well as your web.xml and other resources in another project already? Just create a little ANT script which copies them into the webapps folder of the Tomcat – and you get re-deployment with a single mouse click. Even better (this is what we do): You can modify/override the ResourceResolver of JSF. Therefore you can simply use the classloader to resolve your .xhtml files. This way, you can keep your Java sources and your JSF sources close to each other. I will cover that in another post – the fun stuff starts when running multi-tenant systems with custom JSF files per tenant. The JSF implementation of Sun/Oracle has some nice gotchas built in for that case ;-) Reference: Launching and Debugging Tomcat from Eclipse without complex plugins from our JCG partner Andreas Haufler at the Andy’s Software Engineering Corner blog....
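The single-click re-deployment ANT script mentioned in the hint can be as small as the sketch below. The project name, source folder and target webapp here are assumptions; adjust them to your own workspace layout:

```xml
<project name="redeploy" default="deploy" basedir=".">
    <!-- Copies web resources from the Test project into the examples webapp
         of the Tomcat7 project, so a restart picks them up immediately. -->
    <target name="deploy">
        <copy todir="../Tomcat7/webapps/examples" overwrite="true">
            <fileset dir="WebContent">
                <include name="**/*.xhtml"/>
                <include name="**/*.jsp"/>
                <include name="WEB-INF/web.xml"/>
            </fileset>
        </copy>
    </target>
</project>
```

Bind it to an Eclipse external tool or builder and re-deployment really is one mouse click.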

Hibernate cache levels tutorial

One of the common problems for people who start using Hibernate is performance: if you don’t have much experience with Hibernate, you will quickly find your application becoming slow. If you enable SQL traces, you will see how many queries are sent to the database that could be avoided with a little Hibernate knowledge. In this post I am going to explain how to use the Hibernate Query Cache to reduce the amount of traffic between your application and the database. Hibernate offers two caching levels:

The first level cache is the session cache. Objects are cached within the current session and they are only alive until the session is closed.
The second level cache exists as long as the session factory is alive. Keep in mind that in the case of Hibernate, the second level cache is not a tree of objects; object instances are not cached, instead it stores attribute values.

After this brief introduction (so brief, I know) about the Hibernate cache, let’s see what the Query Cache is and how it is interrelated with the second level cache. The Query Cache is responsible for caching the combination of a query and the values provided as parameters as the key, and the list of identifiers of objects returned by the query execution as the value. Note that using the Query Cache requires a second level cache too, because when a query result is retrieved from the cache (that is, a list of identifiers), Hibernate will load the objects using the cached identifiers from the second level. To sum up, and as a conceptual schema, given the query “from Country where population > :number“, after the first execution the Hibernate caches would contain the following fictional values (note that the number parameter is set to 1000):

L2 Cache
[
  id:1, {name='Spain', population=1000, ....}
  id:2, {name='Germany', population=2000, ...}
  ....
]

QueryCache
[{from Country where population > :number, 1000}, {id:2}]

So before we start using the Query Cache, we need to configure the second level cache. First of all you must decide which cache provider you are going to use.
For this example, Ehcache is chosen, but refer to the Hibernate documentation for the complete list of all supported providers. To configure the second level cache, set the following Hibernate properties:

hibernate.cache.provider_class = org.hibernate.cache.EhCacheProvider
hibernate.cache.use_structured_entries = true
hibernate.cache.use_second_level_cache = true

And if you are using the annotation approach, annotate cacheable entities with:

@Cacheable
@Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)

See that in this case the cache concurrency strategy is NONSTRICT_READ_WRITE, but depending on the cache provider, other strategies can be followed, like TRANSACTIONAL, READ_ONLY, … Take a look at the cache section of the Hibernate documentation to choose the one that fits your requirements best. And finally, add the Ehcache dependencies:

<dependency>
    <groupId>net.sf.ehcache</groupId>
    <artifactId>ehcache-core</artifactId>
    <version>2.5.0</version>
</dependency>
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-ehcache</artifactId>
    <version>3.6.0.Final</version>
</dependency>

Now the second level cache is configured, but not the query cache; anyway, we are not far from our goal. Set the hibernate.cache.use_query_cache property to true, and for each cacheable query we must call the setCacheable method during query creation:

List<Country> list = session.createQuery("from Country where population > 1000").setCacheable(true).list();

To make the example more practical, I have uploaded a full query cache example with Spring Framework. To see clearly that the query cache works, I have used a public database hosted at ensembl.org. The Ensembl project produces genome databases for vertebrates and other eukaryotic species, and makes this information freely available online. In this example the query to the dna table is cached.
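One piece the snippets above assume but do not show is Ehcache's own configuration file. A minimal ehcache.xml sketch could look like the following; the region names for the query cache are the standard Hibernate 3.x ones, while the sizes and TTLs are placeholder values you should tune:

```xml
<ehcache>
    <!-- Fallback settings for entity regions without an explicit <cache> entry -->
    <defaultCache maxElementsInMemory="1000" eternal="false"
                  timeToLiveSeconds="600" overflowToDisk="false"/>

    <!-- Region holding cached query results (lists of identifiers) -->
    <cache name="org.hibernate.cache.StandardQueryCache"
           maxElementsInMemory="500" eternal="false" timeToLiveSeconds="600"/>

    <!-- Region holding last-update timestamps, used to invalidate stale query results -->
    <cache name="org.hibernate.cache.UpdateTimestampsCache"
           maxElementsInMemory="100" eternal="true"/>
</ehcache>
```

If no ehcache.xml is found on the classpath, Ehcache falls back to its failsafe defaults and logs a warning.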
First of all, the Hibernate configuration:

@Configuration
public class HibernateConfiguration {

    @Value("#{dataSource}")
    private DataSource dataSource;

    @Bean
    public AnnotationSessionFactoryBean sessionFactoryBean() {
        Properties props = new Properties();
        props.put("hibernate.dialect", EnhancedMySQL5HibernateDialect.class.getName());
        props.put("hibernate.format_sql", "true");
        props.put("hibernate.show_sql", "true");
        props.put("hibernate.cache.provider_class", "org.hibernate.cache.EhCacheProvider");
        props.put("hibernate.cache.use_structured_entries", "true");
        props.put("hibernate.cache.use_query_cache", "true");
        props.put("hibernate.cache.use_second_level_cache", "true");
        props.put("hibernate.hbm2ddl.auto", "validate");

        AnnotationSessionFactoryBean bean = new AnnotationSessionFactoryBean();
        bean.setAnnotatedClasses(new Class[]{Dna.class});
        bean.setHibernateProperties(props);
        bean.setDataSource(this.dataSource);
        bean.setSchemaUpdate(true);
        return bean;
    }
}

It is a simple Hibernate configuration, using the properties previously explained to configure the second level cache. The entity class represents a sequence of DNA:

@Entity(name="dna")
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
public class Dna {

    @Id
    private int seq_region_id;

    private String sequence;

    public int getSeq_region_id() {
        return seq_region_id;
    }

    public void setSeq_region_id(int seq_region_id) {
        this.seq_region_id = seq_region_id;
    }

    @Column
    public String getSequence() {
        return sequence;
    }

    public void setSequence(String sequence) {
        this.sequence = sequence;
    }
}

To try the query cache, we are going to implement a test where the same query is executed multiple times.
@Autowired
private SessionFactory sessionFactory;

@Test
public void fiftyFirstDnaSequenceShouldBeReturnedAndCached() throws Exception {
    for (int i = 0; i < 5; i++) {
        Session session = sessionFactory.openSession();
        session.beginTransaction();

        Time elapsedTime = new Time("findDna" + i);

        List<Dna> list = session.createQuery("from dna")
            .setFirstResult(0).setMaxResults(50).setCacheable(true).list();

        session.getTransaction().commit();
        session.close();
        elapsedTime.miliseconds(System.out);

        for (Dna dna : list) {
            System.out.println(dna);
        }
    }
}

We can see that we are returning the first fifty dna sequences, and if you execute it, you will see that the elapsed time between the creation of the query and committing the transaction is printed. As you might expect, only the first iteration takes about 5 seconds to get all the data; the other ones take only milliseconds. The foreach loop after the query will print each object identifier through the console. If you look carefully, none of these identifiers will be repeated during the whole execution. This fact just goes to show you that the Hibernate cache does not save objects but property values, and the object itself is created each time. One last note: remember that Hibernate does not cache associations by default. Now, after writing a query, think whether it will return static data and whether it will be executed often. If that is the case, the query cache is your friend for making Hibernate applications run faster. Download Code. Reference: Hibernate cache levels tutorial from our JCG partner Alex Soto at the One Jar To Rule Them All blog....
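To make that closing observation concrete (identifiers are cached, objects are rebuilt on every hit), here is a toy, Hibernate-free model of the two-cache lookup in plain Java. All names and structures here are illustrative only, not Hibernate internals:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class QueryCacheModel {
    // Second-level cache: identifier -> attribute values (not object instances).
    static final Map<Integer, Map<String, Object>> l2 = new HashMap<>();
    // Query cache: (query text + bound parameters) -> list of matching identifiers.
    static final Map<String, List<Integer>> queryCache = new HashMap<>();

    static class Country {
        final String name;
        Country(String name) { this.name = name; }
    }

    // Rehydration: a fresh Country instance is created on every call, which is
    // why object identity differs between executions even on a cache hit.
    static Country load(int id) {
        Map<String, Object> state = l2.get(id);
        return new Country((String) state.get("name"));
    }

    static List<Country> query(String key) {
        List<Country> result = new ArrayList<>();
        for (int id : queryCache.get(key)) {
            result.add(load(id));
        }
        return result;
    }

    public static void main(String[] args) {
        l2.put(2, new HashMap<>(Map.of("name", "Germany", "population", 2000)));
        queryCache.put("from Country where population > :number|1000", List.of(2));

        List<Country> first = query("from Country where population > :number|1000");
        List<Country> second = query("from Country where population > :number|1000");
        // Same cached data, but distinct instances each time:
        System.out.println(first.get(0).name.equals(second.get(0).name)); // true
        System.out.println(first.get(0) == second.get(0));                // false
    }
}
```

Both executions hit the caches, yet `==` is false: only the state was cached, never the object.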

GWT MVP made simple

GWT Model-View-Presenter is a design pattern for large scale application development. Being derived from MVC, it divides between view and logic and helps to create well-structured, easily testable code. To help lazy developers like me, I investigated how to reduce the amount of classes and interfaces to write when using declarative UIs.

Classic MVP

You know how to post a link on facebook? – Recently I had to create this functionality for a little GWT travelling app. So you can enter a URL, which is then fetched and parsed. You can select one of the images from the page, review the text and finally store the link. Now how to properly set this up in MVP? – First, you create an abstract interface resembling the view:

interface Display {
    HasValue<String> getUrl();
    void showResult();
    HasValue<String> getName();
    HasClickHandlers getPrevImage();
    HasClickHandlers getNextImage();
    void setImageUrl(String url);
    HasHTML getText();
    HasClickHandlers getSave();
}

It makes use of interfaces that GWT components implement, which give some access to their state and functionality. During tests you can easily implement this interface without referring to GWT internals. Also, the view implementation may be changed without influence on the deeper logic. The implementation is straightforward, shown here with declared UI fields:

class LinkView implements Display {
    @UiField TextBox url;
    @UiField Label name;
    @UiField VerticalPanel result;
    @UiField Anchor prevImage;
    @UiField Anchor nextImage;
    @UiField Image image;
    @UiField HTML text;
    @UiField Button save;

    public HasValue<String> getUrl() { return url; }
    public void showResult() { result.setVisible(true); }
    // ... and so on ...
}

The presenter then accesses the view using the interface, which by convention is written inside the presenter class:

class LinkPresenter {
    interface Display { ... }

    public LinkPresenter(final Display display) {
        display.getUrl().addValueChangeHandler(new ValueChangeHandler<String>() {
            @Override
            public void onValueChange(ValueChangeEvent<String> event) {
                Page page = parseLink(display.getUrl().getValue());
                display.getName().setValue(page.getTitle());
                // ...
                display.showResult();
            }
        });
    }
    // ... and so on ...
}

So here we are: Using MVP, you can structure your code very well and make it easily readable.

The simplification

The payoff is: three types for each screen or component, three files to change whenever the UI is re-defined – not counting the ui.xml file for the view declaration. For a lazy man like me, these are too many. And if you take a look at the view implementation, it is obvious how to simplify this: use the view declaration (*.ui.xml) as the view and inject UI elements directly into the presenter:

class LinkPresenter {
    @UiField HasValue<String> url;
    @UiField HasValue<String> name;
    @UiField VerticalPanel result;
    @UiField HasClickHandlers prevImage;
    @UiField HasClickHandlers nextImage;
    @UiField HasUrl image;
    @UiField HasHTML text;
    @UiField HasClickHandlers save;

    public LinkPresenter() {
        url.addValueChangeHandler(new ValueChangeHandler<String>() {
            @Override
            public void onValueChange(ValueChangeEvent<String> event) {
                Page page = parseLink(url.getValue());
                name.setValue(page.getTitle());
                // ...
                result.setVisible(true);
            }
        });
    }
    // ... and so on ...
}

Since it is possible to declare the injected elements using their interfaces, this presenter has a lot of the advantages of the full-fledged MVP presenter: you can test it by setting implementing components (see below) and you can change the view implementation easily.
But now, you have it all in one class and one view.ui.xml file, and you can apply structural changes much more simply.

Making UI elements abstract

TextBox implements HasValue<String>. This is simple. But what about properties of UI elements that are not accessible through interfaces? An example you may already have recognized is the VerticalPanel named result in the above code and its method setVisible(), which unfortunately is implemented in the UiObject base class. So no interface is available that could e.g. be implemented at test time. For the sake of being able to switch view implementations, it would be better to inject a ComplexPanel, but even that cannot be instantiated at test time. The only way out in this case is to create a new interface, say

interface Visible {
    void setVisible(boolean visible);
    boolean isVisible();
}

and subclass interesting UI components, implementing the relevant interfaces:

package de.joergviola.gwt.tools;

class VisibleVerticalPanel extends VerticalPanel implements Visible {}

This seems tedious and sub-optimal. Nonetheless, it has to be done only per component and not per view, as in the full-fledged MVP described above. Wait – how to use self-made components in UiBuilder templates? – That is simple:

<ui:UiBinder xmlns:ui='urn:ui:com.google.gwt.uibinder'
             xmlns:g="urn:import:com.google.gwt.user.client.ui"
             xmlns:t="urn:import:de.joergviola.gwt.tools">
    <g:VerticalPanel width="100%">
        <g:TextBox styleName="big" ui:field="url" width="90%"/>
        <t:VisibleVerticalPanel ui:field="result" visible="false" width="100%">
        </t:VisibleVerticalPanel>
    </g:VerticalPanel>
</ui:UiBinder>

Declaring handlers

The standard way of declaring (click-)handlers is very convenient:

@UiHandler("login")
public void login(ClickEvent event) {
    srv.login(username.getValue(), password.getValue());
}

In the simplified MVP approach, this code would reside in the presenter. But the ClickEvent parameter is a view component and cannot, e.g., be instantiated at test time.
On the other hand, it cannot be eliminated from the signature because UiBuilder requires an event parameter. So unfortunately one has to stick to registering ClickHandlers manually (as one has to do in full MVP anyway):

public void initWidget() {
    ...
    login.addClickHandler(new ClickHandler() {
        @Override
        public void onClick(ClickEvent event) {
            login();
        }
    });
    ...
}

public void login() {
    srv.login(username.getValue(), password.getValue());
}

Testing

Making your app testable is one of the main goals when introducing MVP. GwtTestCase is able to execute tests in the container environment but requires some startup time. In TDD, it is desirable to have very fast-running tests that can be applied after every single change without losing context. So MVP is designed to let you test all your code in a standard JVM. In standard MVP, you create implementations of the view interfaces. In this simplified approach, it is sufficient to create implementations at the component interface level, like the following:

class Value<T> implements HasValue<T> {
    private T value;
    List<ValueChangeHandler<T>> handlers = new ArrayList<ValueChangeHandler<T>>();

    @Override
    public HandlerRegistration addValueChangeHandler(ValueChangeHandler<T> handler) {
        handlers.add(handler);
        return null;
    }

    @Override
    public void fireEvent(GwtEvent<?> event) {
        for (ValueChangeHandler<T> handler : handlers) {
            handler.onValueChange((ValueChangeEvent) event);
        }
    }

    @Override
    public T getValue() {
        return value;
    }

    @Override
    public void setValue(T value) {
        this.value = value;
    }

    @Override
    public void setValue(T value, boolean fireEvents) {
        if (fireEvents)
            ValueChangeEvent.fire(this, value);
        setValue(value);
    }
}

As usual, you have to inject this component into the presenter under test.
Though in principle you could create a setter for the component, I stick to the usual trick: make the component package-protected, put the test into the same package (but of course a different project folder) as the presenter, and set the component directly.

What do you win?

You get code structured as cleanly as in full MVP, with far fewer classes and much less boilerplate code. Some situations require utility classes for components and their interfaces, but as time goes by, you build an environment which is really easy to understand, test and extend. I’m curious: tell me your experiences! Reference: GWT MVP made simple from our JCG partner Joerg Viola at the Joerg Viola blog....
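The testing idea above translates almost verbatim outside GWT. The following is a GWT-free, plain-Java miniature of the pattern; the HasValue interface here is a hand-rolled stand-in for the GWT interface of the same name (no event plumbing), purely so the sketch is runnable on its own:

```java
public class MvpSketch {
    // Simplified stand-in for GWT's HasValue<T>.
    interface HasValue<T> {
        T getValue();
        void setValue(T value);
    }

    // A trivial test double, analogous to the Value<T> class above.
    static class Value<T> implements HasValue<T> {
        private T value;
        public T getValue() { return value; }
        public void setValue(T value) { this.value = value; }
    }

    // The presenter only sees the interface, so a Value<T> can be injected
    // in place of a real TextBox at test time.
    static class LinkPresenter {
        final HasValue<String> url;
        final HasValue<String> name;

        LinkPresenter(HasValue<String> url, HasValue<String> name) {
            this.url = url;
            this.name = name;
        }

        void onUrlEntered() {
            // Stand-in for fetching and parsing the page behind the URL.
            name.setValue("Title of " + url.getValue());
        }
    }

    public static void main(String[] args) {
        Value<String> url = new Value<>();
        Value<String> name = new Value<>();
        LinkPresenter presenter = new LinkPresenter(url, name);

        url.setValue("http://example.org");
        presenter.onUrlEntered();
        System.out.println(name.getValue()); // Title of http://example.org
    }
}
```

The whole test runs in a plain JVM with no container startup, which is exactly the payoff the article is after.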

Code reviews in the 21st Century

There’s an old adage that goes something like: ‘Do not talk about religion or politics’. Why? Because these subjects are full of strong opinions but are thin on objective answers. One person’s certainty is another person’s skepticism; someone else’s common sense just appears as an a priori bias to those who see matters differently. Sadly, conversations on these controversial subjects can generate more heat than light. All too often people can get so wound up that they forget that the outcome of their “discussion” has no bearing on their life expectancy, their salary, their chances to win X Factor, getting that dream date, winning the lottery, finding a cure for climate change or whatever it is they regard as important! Similarly, in the world of software engineering, code reviews can end up as pointless engagements of conflict. Developers can bicker over silly little things, offend each other and occasionally catch a bug that probably would have been caught in QA anyway – that conflict-free zone around the corner! Now don’t get me wrong, there are perfectly valid reasons why you may think code reviews are a good idea for your project:

Catching bugs sooner means less cost to your project. You don’t have to release a fix patch because it has been caught in the development phase – yippee!
Code becomes more maintainable. That crazy 200-line method that Jonny wrote with a hangover has been caught before it had the chance to make itself at home deep in your code base.
Knowledge is spread across your team. There are no longer large blocks of code that only one person knows about. And we all know, when that one person talks about taking a two month holiday, everyone panics!
Developers make more of an effort. If a developer knows someone else is going to pass judgement on his work, he’s more likely to put in that line of Javadoc to clarify when an exception will be thrown.

However, it would be naive to think that code reviews don’t cause problems.
In fact, they cause so many problems that many 21st century projects don’t do them. I think they have a place, but there needs to be some thought about how and when they are done so that they are beneficial as opposed to a nuisance. Here are some guidelines…

1. Never forget TDD. Ensure you have tested your code before you ask someone else to look at it. Catch your own bugs and deal with them before someone else does.

2. Automate as much as you can. There are several very good tools for Java such as PMD, Checkstyle, FindBugs etc. What is the point of getting a human to spend time reviewing code when these tools can very quickly identify many things the human would waste time moaning about? I am going to say that again. What is the point of getting a human to spend time reviewing code when these tools can very quickly identify many things the human would waste time moaning about? When using these tools, it’s important to use a common set of rules for them. This ensures your code is at some sort of agreed standard, and much of what used to happen in an old-fashioned 20th century code review won’t need to happen. Ideally, these tools should be run on every check-in of code by a hook from your version control system. If code is really bad, it will be prevented from being checked in. Billy the developer is prevented from checking in the rubbish he wrote (when he had a killer migraine) that he is too embarrassed to look at. You are actually doing him favours, not just your team.

3. Respect design. In some of the earlier Java projects I worked on, the reviews happened way too late. You were reviewing code when the actual design was flawed. A design pattern was misunderstood, some nasty dependencies were introduced or a developer just went way off on a tangent. The review would bring up these points. The proverbial retort was: ‘This is a code review, not a design review!’. A mess inevitably ensued.
To stop these problems we changed things so that anyone asked to review code would also be involved – in some way – in either the design or the design review. In fact, we got much more bang for the buck from design reviews than from code reviews. Designs were of a much higher quality and those late nasty surprises stopped.

4. Agree a style guide (and a dictionary). Even with the automated tooling (such as Checkstyle, FindBugs etc.), to avoid unnecessary conflict over style, your project should have a style guide. Stick to the industry-standard Java conventions where possible. Try to have a ‘dictionary’ for all the concepts your project introduces. This means, when code refers to them, it’s easier to check that the usage and context are correct.

5. Get the right tooling. If all your developers are using Eclipse (and are happy using it), something like Jupiter makes sense. You can navigate your way through code, debug code and essentially leverage everything the Eclipse IDE does to make your life easier when reviewing code. However, if everyone is not on the same IDE (or the IDE is not making your life easier), consider something like Review Board.

6. Remember every project is different. You may have done something in a previous project that worked. But remember, every project is different. The other project had a certain architecture (it may have been highly concurrent or highly distributed), had a certain culture (everyone may have enjoyed using Eclipse) and used certain tools (Maven or Ant). Does the new one tick the same boxes? Remember, different things work for different projects.

7. Remember give and take. When reviewing, be positive, be meticulous, but do not be pedantic. Will tiny trivial things that get on your nerves make a project fail or cost your company money? Probably not. Put things in perspective. Remember to be open to other ideas and to change your own mind rather than getting hung up on changing someone else’s.

8. Be buddies. In my experience, what I call ‘buddy reviews’ (others call them ‘over the shoulder’ reviews) have worked really well. A buddy review consists of meeting up with another team member informally every day or two and having a quick glance (5–10 minutes) at each other’s code at your desk or theirs. This approach means:

  • Problems are caught very early.
  • You are always up to speed with what is going on.
  • Reviews are always very short because you are only looking at new code since the last catch-up.
  • Because the setting is informal, there is no nervous tension. They’re fun!
  • You can exchange ideas – regularly.

When tech leading, buddy reviewing your team members is a great way of seeing early rather than late if anyone on the team is running into trouble. You can help people and get an idea of everyone’s progress all at the same time. And because of the regular nature of buddy reviews, they become habitual and actually get done. Something we can’t say for many other 21st century code reviews!

In summary, if your project is going to engage in code reviews, they should be fast, effective and should not waste people’s time. As argued in this post, it is really important to think about how they are organised to ensure that does not happen. ‘Til the next time – take care of yourselves.

Reference: Code reviews in the 21st Century from our JCG partner Alex Staveley at Dublin’s Tech Blog.

Java 7: Copy and Move Files and Directories

This post is a continuation of my series on the Java 7 java.nio.file package, this time covering the copying and moving of files and complete directory trees. If you have ever been frustrated by Java’s lack of copy and move methods, then read on, for relief is at hand. Included in the coverage is the very useful Files.walkFileTree method. Before we dive into the main content, however, some background information is needed.

Path Objects

Path objects represent a sequence of directories that may or may not include a file. There are three ways to construct a Path object:

  • FileSystems.getDefault().getPath(String first, String... more)
  • Paths.get(String path, String... more), a convenience method that calls FileSystems.getDefault().getPath
  • Calling the toPath method on a java.io.File object

From this point forward, all our examples will use the Paths.get method. Here are some examples of creating Path objects:

    //Path string would be "/foo"
    Paths.get("/foo");
    //Path string "/foo/bar"
    Paths.get("/foo", "bar");

To manipulate Path objects there are the Path.resolve and Path.relativize methods. Here is an example of using Path.resolve:

    //This is our base path "/foo"
    Path base = Paths.get("/foo");
    //filePath is "/foo/bar/file.txt" while base is still "/foo"
    Path filePath = base.resolve("bar/file.txt");

Using the Path.resolve method will append the given String or Path object to the end of the calling Path, unless the given String or Path represents an absolute path, in which case the given path is returned. For example:

    Path path = Paths.get("/foo");
    //resolved Path string is "/usr/local"
    Path resolved = path.resolve("/usr/local");

Path.relativize works in the opposite fashion, returning a new relative path that, if resolved against the calling Path, would result in the same Path string.
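To make the resolve/relativize relationship concrete, here is a small self-contained check of the behaviour described above (it assumes a Unix-style default file system, so path strings use '/'):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Demonstrates Path.resolve and Path.relativize as described above.
// Assumes a Unix-style default file system ('/' separator).
public class PathDemo {
    public static void main(String[] args) {
        Path base = Paths.get("/foo");

        // resolve appends a relative path to the caller
        Path filePath = base.resolve("bar/file.txt");
        System.out.println(filePath);                   // /foo/bar/file.txt

        // ...but an absolute argument is simply returned as-is
        System.out.println(base.resolve("/usr/local")); // /usr/local

        // relativize is the inverse: base.resolve(rel) equals filePath
        Path rel = base.relativize(filePath);
        System.out.println(rel);                        // bar/file.txt
    }
}
```

Note how relativize undoes resolve: resolving the returned relative path against the base reproduces the original path string.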
Here’s an example:

    // base Path string "/usr"
    Path base = Paths.get("/usr");
    // foo Path string "/usr/foo"
    Path foo = base.resolve("foo");
    // bar Path string "/usr/foo/bar"
    Path bar = foo.resolve("bar");
    // relative Path string "foo/bar"
    Path relative = base.relativize(bar);

Another helpful method on the Path class is Path.getFileName, which returns the name of the farthest element represented by this Path object, the name being an actual file or just a directory. For example:

    //assume filePath constructed elsewhere as "/home/user/info.txt"
    //returns Path with path string "info.txt"
    filePath.getFileName();

    //now assume dirPath constructed elsewhere as "/home/user/Downloads"
    //returns Path with path string "Downloads"
    dirPath.getFileName();

In the next section we are going to take a look at how we can use Path.resolve and Path.relativize in conjunction with the Files class for copying and moving files.

Files Class

The Files class consists of static methods that use Path objects to work with files and directories. While there are over 50 methods in the Files class, at this point we are only going to discuss the copy and move methods.

Copy A File

To copy one file to another you would use the (any guesses on the name?) Files.copy method – copy(Path source, Path target, CopyOption... options) – very concise and no anonymous inner classes; are we sure it’s Java? The options argument consists of enums that specify how the file should be copied. (There are actually two different enum classes, LinkOption and StandardCopyOption, but both implement the CopyOption interface.) Here is the list of available options for Files.copy:

  • LinkOption.NOFOLLOW_LINKS
  • StandardCopyOption.COPY_ATTRIBUTES
  • StandardCopyOption.REPLACE_EXISTING

There is also a StandardCopyOption.ATOMIC_MOVE enum, but if this option is specified, an UnsupportedOperationException is thrown. If no options are specified, the default is to throw an error if the target file exists or is a symbolic link.
If the path object is a directory, then an empty directory is created in the target location. (Wait a minute! Didn’t it say in the introduction that we could copy the entire contents of a directory? The answer is still yes, and that is coming!) Here’s an example of copying one file to another with Path objects, using the Path.resolve and Path.relativize methods:

    Path sourcePath ...
    Path basePath ...
    Path targetPath ...

    Files.copy(sourcePath, targetPath.resolve(basePath.relativize(sourcePath)));

Move A File

Moving a file is equally straightforward – move(Path source, Path target, CopyOption... options). The available StandardCopyOption enums are:

  • StandardCopyOption.REPLACE_EXISTING
  • StandardCopyOption.ATOMIC_MOVE

If Files.move is called with StandardCopyOption.COPY_ATTRIBUTES, an UnsupportedOperationException is thrown. Files.move can be called on an empty directory, or, if the move does not require moving the directory’s contents (re-naming, for example), the call will succeed; otherwise it will throw an IOException (we’ll see in the following section how to move non-empty directories). The default is to throw an exception if the target file already exists. If the source is a symbolic link, then the link itself is moved, not the target of the link. Here’s an example of Files.move, again tying in the Path.relativize and Path.resolve methods:

    Path sourcePath ...
    Path basePath ...
    Path targetPath ...

    Files.move(sourcePath, targetPath.resolve(basePath.relativize(sourcePath)));

Copying and Moving Directories

One of the more interesting and useful methods found in the Files class is Files.walkFileTree. The walkFileTree method performs a depth-first traversal of a file tree. There are two signatures:

  • walkFileTree(Path start, Set<FileVisitOption> options, int maxDepth, FileVisitor<? super Path> visitor)
  • walkFileTree(Path start, FileVisitor<? super Path> visitor)

The second form of Files.walkFileTree calls the first with EnumSet.noneOf(FileVisitOption.class) and Integer.MAX_VALUE.
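Before moving on to tree traversal, the single-file Files.copy and Files.move calls from the sections above can be exercised end to end. This sketch uses a throwaway temporary directory, so it runs anywhere without touching real data:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Exercises Files.copy and Files.move on a single file inside a
// temporary directory, so nothing outside the sandbox is touched.
public class CopyMoveDemo {
    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("nio-demo");
        Path source = Files.write(dir.resolve("source.txt"), "hello".getBytes("UTF-8"));

        // Copy; REPLACE_EXISTING avoids the default "target exists" error.
        Path copy = Files.copy(source, dir.resolve("copy.txt"),
                StandardCopyOption.REPLACE_EXISTING);
        System.out.println("copy exists: " + Files.exists(copy));

        // Move; the source entry disappears from its old location.
        Path moved = Files.move(copy, dir.resolve("moved.txt"));
        System.out.println("moved exists: " + Files.exists(moved));
        System.out.println("old copy gone: " + !Files.exists(copy));
    }
}
```
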
As of this writing, there is only one file visit option – FOLLOW_LINKS. FileVisitor is an interface with four methods defined:

  • preVisitDirectory(T dir, BasicFileAttributes attrs) – called for a directory before its entries are traversed.
  • visitFile(T file, BasicFileAttributes attrs) – called for a file in the directory.
  • postVisitDirectory(T dir, IOException exc) – called only after all files and sub-directories have been traversed.
  • visitFileFailed(T file, IOException exc) – called for files that could not be visited.

All of the methods return one of the four possible FileVisitResult enums:

  • FileVisitResult.CONTINUE
  • FileVisitResult.SKIP_SIBLINGS (continue without traversing siblings of the directory or file)
  • FileVisitResult.SKIP_SUBTREE (continue without traversing the contents of the directory)
  • FileVisitResult.TERMINATE

To make life easier there is a default implementation of the FileVisitor, SimpleFileVisitor (it validates that arguments are not null and returns FileVisitResult.CONTINUE), that can be subclassed so you can override just the methods you need. Let’s take a look at a basic example of copying an entire directory structure.

Copying A Directory Tree Example

Let’s take a look at a class that extends SimpleFileVisitor, used for copying a directory tree (some details left out for clarity):

    public class CopyDirVisitor extends SimpleFileVisitor<Path> {

        private Path fromPath;
        private Path toPath;
        private StandardCopyOption copyOption = StandardCopyOption.REPLACE_EXISTING;
        ....
        @Override
        public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {
            Path targetPath = toPath.resolve(fromPath.relativize(dir));
            if (!Files.exists(targetPath)) {
                Files.createDirectory(targetPath);
            }
            return FileVisitResult.CONTINUE;
        }

        @Override
        public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
            Files.copy(file, toPath.resolve(fromPath.relativize(file)), copyOption);
            return FileVisitResult.CONTINUE;
        }
    }

In preVisitDirectory, each directory will be created in the target, ‘toPath’, as each directory from the source, ‘fromPath’, is traversed. Here we can see the power of the Path object with respect to working with directories and files. As the code moves deeper into the directory structure, the correct Path objects are constructed simply by calling relativize and resolve on the fromPath and toPath objects, respectively. At no point do we need to be aware of where we are in the directory tree, and as a result no cumbersome StringBuilder manipulations are needed to create the correct paths. In visitFile, we see the Files.copy method used to copy the file from the source directory to the target directory. Next is a simple example of deleting an entire directory tree.

Deleting A Directory Tree Example

In this example SimpleFileVisitor has been subclassed for deleting a directory structure:

    public class DeleteDirVisitor extends SimpleFileVisitor<Path> {

        @Override
        public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
            Files.delete(file);
            return FileVisitResult.CONTINUE;
        }

        @Override
        public FileVisitResult postVisitDirectory(Path dir, IOException exc) throws IOException {
            if (exc == null) {
                Files.delete(dir);
                return FileVisitResult.CONTINUE;
            }
            throw exc;
        }
    }

As you can see, deleting is a very simple operation. Simply delete each file as you find it, then delete the directory on exit.
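Pulling the copy-visitor pieces together, here is a complete, runnable sketch: it fills in the constructor the article elides (an assumption about those omitted details) and drives the visitor with Files.walkFileTree over a small temporary tree:

```java
import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;

// Complete, runnable variant of the copy visitor above. The constructor
// is assumed, since the article leaves those details out.
public class CopyTreeDemo {

    static class CopyDirVisitor extends SimpleFileVisitor<Path> {
        private final Path fromPath;
        private final Path toPath;

        CopyDirVisitor(Path fromPath, Path toPath) {
            this.fromPath = fromPath;
            this.toPath = toPath;
        }

        @Override
        public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {
            // Mirror each source directory under the target root.
            Path targetPath = toPath.resolve(fromPath.relativize(dir));
            if (!Files.exists(targetPath)) {
                Files.createDirectory(targetPath);
            }
            return FileVisitResult.CONTINUE;
        }

        @Override
        public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
            Files.copy(file, toPath.resolve(fromPath.relativize(file)),
                    StandardCopyOption.REPLACE_EXISTING);
            return FileVisitResult.CONTINUE;
        }
    }

    public static void main(String[] args) throws IOException {
        // Build a tiny source tree in a temp directory.
        Path from = Files.createTempDirectory("tree-src");
        Files.createDirectories(from.resolve("sub"));
        Files.write(from.resolve("sub/data.txt"), "payload".getBytes("UTF-8"));

        Path to = Files.createTempDirectory("tree-dst");
        Files.walkFileTree(from, new CopyDirVisitor(from, to));

        System.out.println("copied: " + Files.exists(to.resolve("sub/data.txt")));
    }
}
```
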
Combining Files.walkFileTree with Google Guava

The previous two examples, although useful, were very ‘vanilla’. Let’s take a look at two more examples that are a little more creative, combining the Google Guava Function and Predicate interfaces.

    public class FunctionVisitor extends SimpleFileVisitor<Path> {

        Function<Path, FileVisitResult> pathFunction;

        public FunctionVisitor(Function<Path, FileVisitResult> pathFunction) {
            this.pathFunction = pathFunction;
        }

        @Override
        public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
            return pathFunction.apply(file);
        }
    }

In this very simple example, we subclass SimpleFileVisitor to take a Function object as a constructor parameter and, as the directory structure is traversed, apply the function to each file.

    public class CopyPredicateVisitor extends SimpleFileVisitor<Path> {

        private Path fromPath;
        private Path toPath;
        private Predicate<Path> copyPredicate;

        public CopyPredicateVisitor(Path fromPath, Path toPath, Predicate<Path> copyPredicate) {
            this.fromPath = fromPath;
            this.toPath = toPath;
            this.copyPredicate = copyPredicate;
        }

        @Override
        public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {
            if (copyPredicate.apply(dir)) {
                Path targetPath = toPath.resolve(fromPath.relativize(dir));
                if (!Files.exists(targetPath)) {
                    Files.createDirectory(targetPath);
                }
                return FileVisitResult.CONTINUE;
            }
            return FileVisitResult.SKIP_SUBTREE;
        }

        @Override
        public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
            Files.copy(file, toPath.resolve(fromPath.relativize(file)));
            return FileVisitResult.CONTINUE;
        }
    }

In this example the CopyPredicateVisitor takes a Predicate object and, based on the returned boolean, parts of the directory structure are not copied. I would like to point out that the previous two examples, usefulness aside, do work in the unit tests for the source code provided with this post.
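Guava is not on everyone’s classpath, so here is the same predicate-filtered copy idea in a runnable, dependency-free form: a hand-rolled PathPredicate interface stands in for Guava’s Predicate (the interface name and helper method are mine, not the article’s):

```java
import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;

// The predicate-filtered copy idea without the Guava dependency:
// PathPredicate is a hand-rolled stand-in for Guava's Predicate<Path>.
public class PredicateCopyDemo {

    interface PathPredicate {
        boolean apply(Path path);
    }

    static void copyIf(final Path from, final Path to, final PathPredicate predicate)
            throws IOException {
        Files.walkFileTree(from, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs)
                    throws IOException {
                if (!predicate.apply(dir)) {
                    return FileVisitResult.SKIP_SUBTREE; // prune this branch entirely
                }
                Path target = to.resolve(from.relativize(dir));
                if (!Files.exists(target)) {
                    Files.createDirectory(target);
                }
                return FileVisitResult.CONTINUE;
            }

            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
                    throws IOException {
                Files.copy(file, to.resolve(from.relativize(file)));
                return FileVisitResult.CONTINUE;
            }
        });
    }

    public static void main(String[] args) throws IOException {
        // Source tree with one directory we want and one we don't.
        Path from = Files.createTempDirectory("pred-src");
        Files.createDirectory(from.resolve("keep"));
        Files.createDirectory(from.resolve("skip"));
        Files.write(from.resolve("keep/a.txt"), "a".getBytes("UTF-8"));
        Files.write(from.resolve("skip/b.txt"), "b".getBytes("UTF-8"));

        Path to = Files.createTempDirectory("pred-dst");
        copyIf(from, to, new PathPredicate() {
            public boolean apply(Path path) {
                return !path.getFileName().toString().equals("skip");
            }
        });

        System.out.println("kept: " + Files.exists(to.resolve("keep/a.txt")));
        System.out.println("skipped: " + !Files.exists(to.resolve("skip")));
    }
}
```

Returning SKIP_SUBTREE from preVisitDirectory is what makes the pruning cheap: the walker never descends into a rejected directory at all.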
DirUtils

Building on everything we’ve covered so far, I could not resist the opportunity to create a utility class, DirUtils, as an abstraction for working with directories, which provides the following methods:

    //deletes all files but leaves the directory tree in place
    DirUtils.clean(Path sourcePath);
    //completely removes a directory tree
    DirUtils.delete(Path sourcePath);
    //replicates a directory tree
    DirUtils.copy(Path sourcePath, Path targetPath);
    //not a true move but performs a copy then a delete of a directory tree
    DirUtils.move(Path sourcePath, Path targetPath);
    //applies the function to all files visited
    DirUtils.apply(Path sourcePath, Path targetPath, Function function);

While I wouldn’t go so far as to say it’s production ready, it was fun to write.

Conclusion

That wraps up the new copy and move functionality provided by the java.nio.file package. I personally think it’s very useful and will take much of the pain out of working with files in Java. There’s much more to cover – working with symbolic links, stream copy methods, DirectoryStreams etc. – so be sure to stick around. Thanks for your time. As always, comments and suggestions are welcome.

Reference: What’s New In Java 7: Copy and Move Files and Directories from our JCG partner Bill Bejeck at the Random Thoughts On Coding blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
