

20 Database Design Best Practices

1. Use well-defined and consistent names for tables and columns (e.g. School, StudentCourse, CourseID).

2. Use singular names for tables (i.e. StudentCourse instead of StudentCourses). A table represents a collection of entities; there is no need for a plural name.

3. Don't use spaces in table names. Otherwise you will have to use characters like '{', '[' or '"' to reference them (i.e. to access the table Student Course you would have to write "Student Course"; StudentCourse is much better).

4. Don't use unnecessary prefixes or suffixes for table names (i.e. use School instead of TblSchool, SchoolTable etc.).

5. Keep passwords encrypted for security. Decrypt them in the application when required.

6. Use integer id fields for all tables. Even if an id is not required for the time being, it may be required in the future (for association tables, indexing, ...).

7. Choose columns of the integer data type (or its variants) for indexing. Indexing varchar columns will cause performance problems.

8. Use bit fields for boolean values. Using integer or varchar consumes storage unnecessarily. Also, start those column names with "Is".

9. Provide authentication for database access. Don't give the admin role to every user.

10. Avoid "select *" queries unless they are really needed. Use "select [required_columns_list]" for better performance.

11. Use an ORM (object-relational mapping) framework (e.g. Hibernate, iBatis) if the application code is big enough. The performance issues of ORM frameworks can be handled with detailed configuration parameters.

12. Partition big and unused/rarely used tables or table parts to different physical storage for better query performance.

13. For big, sensitive and mission-critical database systems, use disaster recovery and security services like failover clustering, automatic backups and replication.

14. Use constraints (foreign key, check, not null, ...) for data integrity. Don't hand complete control to the application code.

15. Lack of database documentation is evil. Document your database design with ER schemas and instructions, and write comment lines for your triggers, stored procedures and other scripts.

16. Use indexes for frequently used queries on big tables. Analyser tools can be used to determine where indexes should be defined. For queries retrieving a range of rows, clustered indexes are usually better; for point queries, non-clustered indexes are usually better.

17. The database server and the web server must be placed on different machines. This provides more security (attackers can't access data directly), and CPU and memory performance will be better because of the reduced request count and process usage.

18. Image and blob data columns must not be defined in frequently queried tables because of performance issues. That data must be placed in separate tables, with a pointer to it in the frequently queried table.

19. Use normalization as required to optimize performance. Under-normalization causes excessive repetition of data; over-normalization causes excessive joins across too many tables. Both hurt performance.

20. Spend as much time on database modeling and design as required. Otherwise the "saved" design time will cost (saved(!) design time) * 10/100/1000 in maintenance and redesign time.
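To make a few of these points concrete, here is a minimal sketch that creates tables following the naming, integer-id, bit/boolean and constraint guidelines above. It is only an illustration: it assumes an in-memory H2 database on the classpath, and the Student and Course tables are invented so the foreign keys have something to reference.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SchemaExample {
    public static void main(String[] args) throws Exception {
        // Placeholder connection for this sketch (guideline 9 still applies:
        // real systems should authenticate with a non-admin account).
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:demo");
             Statement st = con.createStatement()) {
            st.execute("CREATE TABLE Student (StudentID INT PRIMARY KEY, Name VARCHAR(100) NOT NULL)");
            st.execute("CREATE TABLE Course (CourseID INT PRIMARY KEY, Name VARCHAR(100) NOT NULL)");
            // Singular name, no Tbl prefix, integer surrogate id,
            // an "Is"-prefixed bit column and explicit constraints.
            st.execute("CREATE TABLE StudentCourse ("
                    + "  StudentCourseID INT PRIMARY KEY,"
                    + "  StudentID INT NOT NULL,"
                    + "  CourseID INT NOT NULL,"
                    + "  IsActive BIT NOT NULL,"
                    + "  CONSTRAINT FK_Student FOREIGN KEY (StudentID) REFERENCES Student(StudentID),"
                    + "  CONSTRAINT FK_Course FOREIGN KEY (CourseID) REFERENCES Course(CourseID))");
        }
    }
}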
Reference: 20 Database Design Best Practices from our JCG partner Cagdas Basaraner at the CodeBuild blog.

Another aspect of coupling in Object Oriented paradigm

I had previously written a post related to coupling and cohesion here, and that was more of a basic definition of both terms. In this post I would like to throw some light on the tight dependency on the type of the component in use. Generally we aim to design classes such that they interact via interfaces, or more generally put, via an API. Suppose we use interfaces (in the generic sense of a contract without an implementation, not specifically the interface keyword in Java or C#). This is not enough: we need to provide some implementation of the interface which is actually consumed by the other client classes. Before going into details, let me pick an example (in Java). I would like to design a Reader which helps client classes get information from any source specified, be it the file system or the web. So the interface would be:

interface Reader {
    public String read();

    /**
     * Get the source to read from.
     */
    public String getSource();
}

I would want the user to seamlessly use the API for reading a file from the file system or a document from the web. The next step is to create implementations that read from the file system and from the web:

class FileSystemReader implements Reader {
    private String source;

    public FileSystemReader(String source) {
        this.source = source;
    }

    @Override
    public String getSource() {
        return this.source;
    }

    @Override
    public String read() {
        // Read from the source.
        // The source is a file in the file system.
        return null;
    }
}

class HttpReader implements Reader {
    private String source;

    public HttpReader(String source) {
        this.source = source;
    }

    @Override
    public String getSource() {
        return this.source;
    }

    @Override
    public String read() {
        // Read from the source.
        // The source is a document on the web.
        return null;
    }
}

One way of using these interfaces and implementations is to let the client classes decide which implementation to instantiate based on the format of the source. It would look something like:

class Client {
    /**
     * This is the consumer of the Reader API.
     * @param source Source to read from
     */
    public void performSomeOperation(String source) {
        Reader myReader = null;
        if (source.contains("http://")) {
            // It's a web document, create an HttpReader.
            myReader = new HttpReader(source);
        } else {
            myReader = new FileSystemReader(source);
        }
        System.out.println(myReader.read());
    }
}

All looks good: the classes interact with each other via the API. But you might feel within that something's not right. Then comes another requirement where there is a third source to read from, and you have to change every place where such a creation is done. If you miss the change in some place, you end up with broken code; see how fragile your code has become. Apart from that, your client class knows that HttpReader is used for this and FileSystemReader for that, so you are giving away type-related information about the instance you are using, and the client code becomes tightly coupled with this type information. This approach can also break the Open/Closed Principle, because you end up editing the class each time a new implementation of the interface is added. So there should be some way to shield this creation of instances, of different implementations of the interface, from the user of these interfaces. Yes, there are ways, and I know by now you must have been waiting to unleash the Factory Method pattern.

So how can the above code be modified to use a factory?

/**
 * Factory to get the instance of a Reader implementation.
 */
class ReaderFactory {
    public static Reader getReader(String source) {
        if (source.contains("http://")) {
            return new HttpReader(source);
        // } else if (some other condition) {
        //     return new SomeReader(source);
        } else {
            return new FileSystemReader(source);
        }
    }
}

class Client {
    /**
     * This is the consumer of the Reader API.
     * @param source Source to read from
     */
    public void performSomeOperation(String source) {
        Reader myReader = ReaderFactory.getReader(source);
        System.out.println(myReader.read());
    }
}

See how simple the client code has become: no type-related noise. The user does not need to know what type of instance is being used, and hence stays less coupled to the type. The factory method takes care of deciding which implementation to return to the client code based on the pattern in the source string. This way we end up with code that is less coupled in terms of exposed type information. And when a new requirement arrives for a new reader for a new source, you know where you have to make the change, and the change will be in only one place. You can see that your code is less fragile and that you have also eliminated unwanted redundancy from the code.
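As a side note, the factory above still has to be edited whenever a new Reader appears. One common refinement, not from the original article but sketched here as an idea, is to let implementations register themselves with the factory, so that adding a reader no longer means touching an if/else chain. The RegistryReaderFactory name is invented, and the sketch assumes Java 8 for java.util.function.Function:

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

class RegistryReaderFactory {
    // Maps a source prefix to a constructor reference; insertion order decides precedence.
    private static final Map<String, Function<String, Reader>> REGISTRY = new LinkedHashMap<>();

    static void register(String prefix, Function<String, Reader> creator) {
        REGISTRY.put(prefix, creator);
    }

    static Reader getReader(String source) {
        for (Map.Entry<String, Function<String, Reader>> e : REGISTRY.entrySet()) {
            if (source.startsWith(e.getKey())) {
                return e.getValue().apply(source);
            }
        }
        return new FileSystemReader(source); // default
    }
}

Registration then happens once, e.g. at application startup:

RegistryReaderFactory.register("http://", HttpReader::new);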
One thing to keep in mind is that encapsulation is not only data hiding but also hiding type-related information from the user.

Reference: Another aspect of coupling in Object Oriented paradigm from our JCG partner Mohamed Sanaulla at the Experiences Unlimited blog.

Spring 3, Spring Web Services 2 & LDAP Security

This year started on a good note: another one of those "the deadline won't change" / "skip all the red tape" / "Wild West" type of projects, in which I got to figure out and implement some functionality using relatively new libraries and tech for a change. Well, Spring 3 isn't new, but in the Java 5, WebLogic 10(.01), Spring 2.5.6 slow corporate kind of world it is all relative. Due to general time constraints I am not including too much "fluff" in this post, just the nitty gritty of creating and securing a Spring 3, Spring WS 2 web service using multiple XSDs and LDAP security.

The Code: The Service Endpoint

ExampleServiceEndpoint: this is the class that will be exposed as a web service using the configuration later in the post.

package javaitzen.spring.ws;

import org.springframework.ws.server.endpoint.annotation.Endpoint;
import org.springframework.ws.server.endpoint.annotation.PayloadRoot;
import org.springframework.ws.server.endpoint.annotation.RequestPayload;
import org.springframework.ws.server.endpoint.annotation.ResponsePayload;

import javax.annotation.Resource;

@Endpoint
public class ExampleServiceEndpoint {

    private static final String NAMESPACE_URI = "http://www.briandupreez.net";

    /*
     * Autowire a POJO to handle the business logic:
     * @Resource(name = "businessComponent")
     * private ComponentInterface businessComponent;
     */

    public ExampleServiceEndpoint() {
        System.out.println(">> javaitzen.spring.ws.ExampleServiceEndpoint loaded.");
    }

    @PayloadRoot(localPart = "ProcessExample1Request", namespace = NAMESPACE_URI + "/example1")
    @ResponsePayload
    public Example1Response processExample1Request(@RequestPayload final Example1 request) {
        System.out.println(">> process example request1 ran.");
        return new Example1Response();
    }

    @PayloadRoot(localPart = "ProcessExample2Request", namespace = NAMESPACE_URI + "/example2")
    @ResponsePayload
    public Example2Response processExample2Request(@RequestPayload final Example2 request) {
        System.out.println(">> process example request2 ran.");
        return new Example2Response();
    }
}
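The Example1/Example1Response payload classes used above are not shown in the post; in a real project they would typically be generated from the XSDs by JAXB. Purely as an illustration, a minimal hand-written version of one request class might look like this, with the element name and namespace assumed from the endpoint code (Example1Response, Example2 and Example2Response would follow the same shape):

package javaitzen.spring.ws;

import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "ProcessExample1Request", namespace = "http://www.briandupreez.net/example1")
@XmlAccessorType(XmlAccessType.FIELD)
public class Example1 {

    // Illustrative payload field; the real structure comes from the XSD.
    @XmlElement
    private String input;

    public String getInput() {
        return input;
    }

    public void setInput(final String input) {
        this.input = input;
    }
}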
The Code: CustomValidationCallbackHandler

This was my bit of custom code, written to extend the AbstractCallbackHandler and allow us to use LDAP. As per the comments in the CallbackHandler below, it is probably a good idea to have a cache manager, something like Hazelcast or Ehcache, to cache authenticated users, depending on security/performance considerations. The digest validator below can be used directly from the Sun library; I just wanted to see how it worked.

package javaitzen.spring.ws;

import com.sun.org.apache.xml.internal.security.exceptions.Base64DecodingException;
import com.sun.xml.wss.impl.callback.PasswordValidationCallback;
import com.sun.xml.wss.impl.misc.Base64;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.security.authentication.AuthenticationManager;
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.core.Authentication;
import org.springframework.util.Assert;
import org.springframework.ws.soap.security.callback.AbstractCallbackHandler;

import javax.security.auth.callback.Callback;
import javax.security.auth.callback.UnsupportedCallbackException;
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.security.MessageDigest;
import java.util.Properties;

public class CustomValidationCallbackHandler extends AbstractCallbackHandler implements InitializingBean {

    private Properties users = new Properties();
    private AuthenticationManager ldapAuthenticationManager;

    @Override
    protected void handleInternal(final Callback callback) throws IOException, UnsupportedCallbackException {
        if (callback instanceof PasswordValidationCallback) {
            final PasswordValidationCallback passwordCallback = (PasswordValidationCallback) callback;
            if (passwordCallback.getRequest() instanceof PasswordValidationCallback.DigestPasswordRequest) {
                final PasswordValidationCallback.DigestPasswordRequest digestPasswordRequest =
                        (PasswordValidationCallback.DigestPasswordRequest) passwordCallback.getRequest();
                final String password = users.getProperty(digestPasswordRequest.getUsername());
                digestPasswordRequest.setPassword(password);
                passwordCallback.setValidator(new CustomDigestPasswordValidator());
            }
            if (passwordCallback.getRequest() instanceof PasswordValidationCallback.PlainTextPasswordRequest) {
                passwordCallback.setValidator(new LDAPPlainTextPasswordValidator());
            }
        } else {
            throw new UnsupportedCallbackException(callback);
        }
    }

    /**
     * Digest validator.
     * This code is directly from the Sun class; I was just curious how it worked.
     */
    private class CustomDigestPasswordValidator implements PasswordValidationCallback.PasswordValidator {
        public boolean validate(final PasswordValidationCallback.Request request)
                throws PasswordValidationCallback.PasswordValidationException {

            final PasswordValidationCallback.DigestPasswordRequest req =
                    (PasswordValidationCallback.DigestPasswordRequest) request;
            final String passwd = req.getPassword();
            final String nonce = req.getNonce();
            final String created = req.getCreated();
            final String passwordDigest = req.getDigest();

            if (null == passwd) {
                return false;
            }
            byte[] decodedNonce = null;
            if (null != nonce) {
                try {
                    decodedNonce = Base64.decode(nonce);
                } catch (final Base64DecodingException bde) {
                    throw new PasswordValidationCallback.PasswordValidationException(bde);
                }
            }
            String utf8String = "";
            if (created != null) {
                utf8String += created;
            }
            utf8String += passwd;
            final byte[] utf8Bytes;
            try {
                utf8Bytes = utf8String.getBytes("utf-8");
            } catch (final UnsupportedEncodingException uee) {
                throw new PasswordValidationCallback.PasswordValidationException(uee);
            }

            final byte[] bytesToHash;
            if (decodedNonce != null) {
                bytesToHash = new byte[utf8Bytes.length + decodedNonce.length];
                for (int i = 0; i < decodedNonce.length; i++) {
                    bytesToHash[i] = decodedNonce[i];
                }
                for (int i = decodedNonce.length; i < utf8Bytes.length + decodedNonce.length; i++) {
                    bytesToHash[i] = utf8Bytes[i - decodedNonce.length];
                }
            } else {
                bytesToHash = utf8Bytes;
            }
            final byte[] hash;
            try {
                final MessageDigest sha = MessageDigest.getInstance("SHA-1");
                hash = sha.digest(bytesToHash);
            } catch (final Exception e) {
                throw new PasswordValidationCallback.PasswordValidationException(
                        "Password Digest could not be created" + e);
            }
            return (passwordDigest.equals(Base64.encode(hash)));
        }
    }

    /**
     * LDAP plain text validator.
     */
    private class LDAPPlainTextPasswordValidator implements PasswordValidationCallback.PasswordValidator {

        /**
         * Validate the callback against the injected LDAP server.
         * Probably a good idea to have a cache manager - Ehcache / Hazelcast -
         * injected to cache authenticated users.
         *
         * @param request the callback request
         * @return true if login successful
         * @throws PasswordValidationCallback.PasswordValidationException
         */
        public boolean validate(final PasswordValidationCallback.Request request)
                throws PasswordValidationCallback.PasswordValidationException {
            final PasswordValidationCallback.PlainTextPasswordRequest plainTextPasswordRequest =
                    (PasswordValidationCallback.PlainTextPasswordRequest) request;
            final String username = plainTextPasswordRequest.getUsername();

            final Authentication userPassAuth =
                    new UsernamePasswordAuthenticationToken(username, plainTextPasswordRequest.getPassword());
            final Authentication authentication = ldapAuthenticationManager.authenticate(userPassAuth);

            return authentication.isAuthenticated();
        }
    }

    /**
     * Assert required properties.
     *
     * @throws Exception error
     */
    public void afterPropertiesSet() throws Exception {
        Assert.notNull(users, "Users is required.");
        Assert.notNull(this.ldapAuthenticationManager, "An LDAP authentication manager is required.");
    }

    /**
     * Sets the users to validate against. Property names are usernames, property values are passwords.
     *
     * @param users the users
     */
    public void setUsers(final Properties users) {
        this.users = users;
    }

    /**
     * Sets the authentication manager.
     *
     * @param ldapAuthenticationManager the provider
     */
    public void setLdapAuthenticationManager(final AuthenticationManager ldapAuthenticationManager) {
        this.ldapAuthenticationManager = ldapAuthenticationManager;
    }
}

The service config: the configuration for the endpoint, the CallbackHandler and the LDAP authentication manager.

The Application Context – Server Side:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns="http://www.springframework.org/schema/beans"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:sws="http://www.springframework.org/schema/web-services"
       xmlns:s="http://www.springframework.org/schema/security"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
            http://www.springframework.org/schema/beans/spring-beans.xsd
            http://www.springframework.org/schema/web-services
            http://www.springframework.org/schema/web-services/web-services-2.0.xsd
            http://www.springframework.org/schema/context
            http://www.springframework.org/schema/context/spring-context.xsd
            http://www.springframework.org/schema/security
            http://www.springframework.org/schema/security/spring-security-3.0.xsd">

    <sws:annotation-driven/>
    <context:component-scan base-package="javaitzen.spring.ws"/>

    <sws:dynamic-wsdl id="exampleService"
                      portTypeName="javaitzen.spring.ws.ExampleServiceEndpoint"
                      locationUri="/exampleService/"
                      targetNamespace="http://www.briandupreez.net/exampleService">
        <sws:xsd location="classpath:/xsd/Example1Request.xsd"/>
        <sws:xsd location="classpath:/xsd/Example1Response.xsd"/>
        <sws:xsd location="classpath:/xsd/Example2Request.xsd"/>
        <sws:xsd location="classpath:/xsd/Example2Response.xsd"/>
    </sws:dynamic-wsdl>

    <sws:interceptors>
        <bean id="validatingInterceptor"
              class="org.springframework.ws.soap.server.endpoint.interceptor.PayloadValidatingInterceptor">
            <property name="schema" value="classpath:/xsd/Example1Request.xsd"/>
            <property name="validateRequest" value="true"/>
            <property name="validateResponse" value="true"/>
        </bean>
        <bean id="loggingInterceptor"
              class="org.springframework.ws.server.endpoint.interceptor.PayloadLoggingInterceptor"/>
        <bean class="org.springframework.ws.soap.security.xwss.XwsSecurityInterceptor">
            <property name="policyConfiguration" value="/WEB-INF/securityPolicy.xml"/>
            <property name="callbackHandlers">
                <list>
                    <ref bean="callbackHandler"/>
                </list>
            </property>
        </bean>
    </sws:interceptors>

    <bean id="callbackHandler" class="javaitzen.spring.ws.CustomValidationCallbackHandler">
        <property name="ldapAuthenticationManager" ref="authManager"/>
    </bean>

    <s:authentication-manager alias="authManager">
        <s:ldap-authentication-provider user-search-filter="(uid={0})"
                                        user-search-base="ou=users"
                                        group-role-attribute="cn"
                                        role-prefix="ROLE_">
        </s:ldap-authentication-provider>
    </s:authentication-manager>

    <!-- Example (in-memory Apache LDAP service): -->
    <s:ldap-server id="contextSource" root="o=example" ldif="classpath:example.ldif"/>
    <!-- If you want to connect to a real LDAP server it would look more like:
    <s:ldap-server id="contextSource"
                   url="ldap://localhost:7001/o=example"
                   manager-dn="uid=admin,ou=system"
                   manager-password="secret"/>
    -->

    <bean id="marshallingPayloadMethodProcessor"
          class="org.springframework.ws.server.endpoint.adapter.method.MarshallingPayloadMethodProcessor">
        <constructor-arg ref="serviceMarshaller"/>
        <constructor-arg ref="serviceMarshaller"/>
    </bean>

    <bean id="defaultMethodEndpointAdapter"
          class="org.springframework.ws.server.endpoint.adapter.DefaultMethodEndpointAdapter">
        <property name="methodArgumentResolvers">
            <list>
                <ref bean="marshallingPayloadMethodProcessor"/>
            </list>
        </property>
        <property name="methodReturnValueHandlers">
            <list>
                <ref bean="marshallingPayloadMethodProcessor"/>
            </list>
        </property>
    </bean>

    <bean id="serviceMarshaller" class="org.springframework.oxm.jaxb.Jaxb2Marshaller">
        <property name="classesToBeBound">
            <list>
                <value>javaitzen.spring.ws.Example1</value>
                <value>javaitzen.spring.ws.Example1Response</value>
                <value>javaitzen.spring.ws.Example2</value>
                <value>javaitzen.spring.ws.Example2Response</value>
            </list>
        </property>
        <property name="marshallerProperties">
            <map>
                <entry key="jaxb.formatted.output">
                    <value type="java.lang.Boolean">true</value>
                </entry>
            </map>
        </property>
    </bean>
</beans>

The Security Context – Server Side:

<xwss:SecurityConfiguration xmlns:xwss="http://java.sun.com/xml/ns/xwss/config">
    <xwss:RequireTimestamp maxClockSkew="60" timestampFreshnessLimit="300"/>
    <!-- Expect plain text tokens from the client -->
    <xwss:RequireUsernameToken passwordDigestRequired="false" nonceRequired="false"/>
    <xwss:Timestamp/>
    <!-- Server side reply token -->
    <xwss:UsernameToken name="server" password="server1" digestPassword="false" useNonce="false"/>
</xwss:SecurityConfiguration>

The Web XML: nothing really special here, just the Spring WS MessageDispatcherServlet.

<servlet>
    <servlet-name>spring-ws</servlet-name>
    <servlet-class>org.springframework.ws.transport.http.MessageDispatcherServlet</servlet-class>
    <init-param>
        <param-name>transformWsdlLocations</param-name>
        <param-value>true</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>spring-ws</servlet-name>
    <url-pattern>/*</url-pattern>
</servlet-mapping>

The client config: to test or use the service you'll need the following.

The Application Context – Client Side Test:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
            http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="messageFactory" class="org.springframework.ws.soap.saaj.SaajSoapMessageFactory"/>

    <bean id="webServiceTemplate" class="org.springframework.ws.client.core.WebServiceTemplate">
        <constructor-arg ref="messageFactory"/>
        <property name="marshaller" ref="serviceMarshaller"/>
        <property name="unmarshaller" ref="serviceMarshaller"/>
        <property name="defaultUri" value="http://localhost:7001/example/spring-ws/exampleService"/>
        <property name="interceptors">
            <list>
                <ref local="xwsSecurityInterceptor"/>
            </list>
        </property>
    </bean>

    <bean id="xwsSecurityInterceptor" class="org.springframework.ws.soap.security.xwss.XwsSecurityInterceptor">
        <property name="policyConfiguration" value="testSecurityPolicy.xml"/>
        <property name="callbackHandlers">
            <list>
                <ref bean="callbackHandler"/>
            </list>
        </property>
    </bean>

    <!-- As a client, the username and password generated by the server must match on the client! -->
    <!-- A simple callback handler to configure users and passwords with an in-memory Properties object. -->
    <bean id="callbackHandler"
          class="org.springframework.ws.soap.security.xwss.callback.SimplePasswordValidationCallbackHandler">
        <property name="users">
            <props>
                <prop key="server">server1</prop>
            </props>
        </property>
    </bean>

    <bean id="serviceMarshaller" class="org.springframework.oxm.jaxb.Jaxb2Marshaller">
        <property name="classesToBeBound">
            <list>
                <value>javaitzen.spring.ws.Example1</value>
                <value>javaitzen.spring.ws.Example1Response</value>
                <value>javaitzen.spring.ws.Example2</value>
                <value>javaitzen.spring.ws.Example2Response</value>
            </list>
        </property>
        <property name="marshallerProperties">
            <map>
                <entry key="jaxb.formatted.output">
                    <value type="java.lang.Boolean">true</value>
                </entry>
            </map>
        </property>
    </bean>
</beans>

The Security Context – Client Side:

<xwss:SecurityConfiguration xmlns:xwss="http://java.sun.com/xml/ns/xwss/config">
    <xwss:RequireTimestamp maxClockSkew="60" timestampFreshnessLimit="300"/>
    <!-- Expect a plain text reply from the server -->
    <xwss:RequireUsernameToken passwordDigestRequired="false" nonceRequired="false"/>
    <xwss:Timestamp/>
    <!-- Client sending to server -->
    <xwss:UsernameToken name="example" password="pass" digestPassword="false" useNonce="false"/>
</xwss:SecurityConfiguration>
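With the client context above in place, invoking the service boils down to a marshalling call on the WebServiceTemplate. The snippet below is only a minimal sketch of such a call; the Example1 payload class and the Spring wiring are assumed from the rest of the post, and the security interceptor configured on the template adds the username token and timestamp automatically:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.ws.client.core.WebServiceTemplate;

public class ExampleServiceClient {

    @Autowired
    private WebServiceTemplate webServiceTemplate;

    /**
     * Sends an Example1 request and returns the unmarshalled response.
     */
    public Example1Response processExample1(final Example1 request) {
        return (Example1Response) webServiceTemplate.marshalSendAndReceive(request);
    }
}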
As usual with Java there can be a couple of little nuances when it comes to jars and versions, so below is part of the pom I used.

The Dependencies:

<properties>
    <spring-version>3.0.6.RELEASE</spring-version>
    <spring-ws-version>2.0.2.RELEASE</spring-ws-version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.apache.directory.server</groupId>
        <artifactId>apacheds-all</artifactId>
        <version>1.5.5</version>
        <type>jar</type>
        <scope>compile</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.ws</groupId>
        <artifactId>spring-ws-core</artifactId>
        <version>${spring-ws-version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-webmvc</artifactId>
        <version>${spring-version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-web</artifactId>
        <version>${spring-version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context</artifactId>
        <version>${spring-version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-core</artifactId>
        <version>${spring-version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-beans</artifactId>
        <version>${spring-version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-oxm</artifactId>
        <version>${spring-version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.ws</groupId>
        <artifactId>spring-ws-security</artifactId>
        <version>${spring-ws-version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.security</groupId>
        <artifactId>spring-security-core</artifactId>
        <version>${spring-version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.security</groupId>
        <artifactId>spring-security-ldap</artifactId>
        <version>${spring-version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.ldap</groupId>
        <artifactId>spring-ldap-core</artifactId>
        <version>1.3.0.RELEASE</version>
    </dependency>
    <dependency>
        <groupId>org.apache.ws.security</groupId>
        <artifactId>wss4j</artifactId>
        <version>1.5.12</version>
    </dependency>
    <dependency>
        <groupId>com.sun.xml.wss</groupId>
        <artifactId>xws-security</artifactId>
        <version>3.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.ws.commons.schema</groupId>
        <artifactId>XmlSchema</artifactId>
        <version>1.4.2</version>
    </dependency>
</dependencies>

Reference: Spring 3, Spring Web Services 2 & LDAP Security from our JCG partner Brian Du Preez at the Zen in the art of IT blog.

Scala: Working with Predicates

I love me some Scala. Actually, since it's now my day job, I love it all the time. It combines the concise expressiveness I prized in Python with a rich library base (thanks, Java) and the compiler checking that I have come to depend upon in a statically typed language. I don't care what some people say. I recognize that the language is not without its flaws. One could say there is a bit of missing language support, particularly around predicates. What do I mean by that? Is there not implicit support baked into the language such that predicates generalize any A => Boolean? Certainly. However, I have a problem when I see methods like List's ::filter and ::filterNot. The former makes sense; the latter highlights the absence of fundamental building blocks, which can be seen directly in the name. That is, we're missing a "Not" helper predicate function:

case class Not[A](func: A => Boolean) extends (A => Boolean) {
  def apply(arg0: A) = !func(arg0)
}

If it were that simple a fix, and if that were all that was missing, then it would be easy to suggest and have put into the next version of Scala. Of course we'd also need 22 versions of "Not", one for each of the 22 versions of Function, but that's a debate for another day. Suffice to say, Scala needs explicit predicate support. It needs more than just a "Not"; it needs easy-to-read and maintainable logic combinators, and it needs support for the basic building blocks that can be used to form higher-order predicate logic. Using other accepted predicate libraries would not give us the power and flexibility needed.

Adding Predicate Expressions

That's exactly what I did with my Predicates library. One of the goals of this small library was to add some simple syntactic support for composing predicate functions in a descriptive and concise manner. Specifically, I wanted to be able to say "greater than 4 but less than 10" or "greater than zero or even, but not both" in almost plain English. I write expressions equivalent to that all the time with ::filter and ::exists statements:

myList.filter(x => x > 4 && x < 10)

For small phrases it's not that difficult. The only extra boilerplate is the designation "x =>" to indicate that we're forming an anonymous function. Unfortunately, if I want to reuse, extend or maintain that logic, I have to add even more boilerplate. Sometimes, if the logic is severe enough, I need to splice it into several methods which might or might not be attached to traits/class hierarchies. While good coding style, this added verbosity leaves a bad taste in my mouth. What I'd really like is operators which apply to the expressions themselves, not to the evaluation of the expressions. The results of these operators would be functions themselves, preserving the composable nature we first started with. To say this another way: an "or" which turns two predicate objects into a third, distinct predicate object that represents a logical or between the first two predicates. As long as each of the precursor objects is built upon an immutable, referentially transparent foundation, the resulting compound predicate expression is safe to use in any environment. This is what was added to each Predicate variant within the Predicates library. The Predicate member functions work as factory methods that generate new Predicates based upon the current Predicate and a Predicate argument. While similar in concept to composition between functions, there is no guarantee that each composed Predicate is even evaluated.
There are 22 Predicate variants, much akin to how Scala chose to have 22 Function variants, each equipped with the following methods:

and     => pred1(...) && pred2(...)
andNot  => pred1(...) && !pred2(...)
nand    => !pred1(...) || !pred2(...)
or      => pred1(...) || pred2(...)
orNot   => pred1(...) || !pred2(...)
nor     => !(pred1(...) || pred2(...))
xor     => if (pred1(...)) !pred2(...) else pred2(...)
xnor    => if (pred1(...)) pred2(...) else !pred2(...)

And as I said before, each of these functions returns another Predicate (which is really just another function). In practice, using these member functions looks something like this:

case class LessThan(x: Int) extends (Int => Boolean) {
  def apply(arg: Int) = arg < x
}

case class Modulo(x: Int, group: Int) extends (Int => Boolean) {
  def apply(arg: Int) = (arg % x) == group
}

case class GreaterThanEqual(x: Int) extends (Int => Boolean) {
  def apply(arg: Int) = arg >= x
}

val myList = List(1, 2, 3, 4, 5, 6, 7, 8, 9)
myList.filter(LessThan(7) and GreaterThanEqual(4))
myList.filter(Modulo(4, 2) or Modulo(3, 0) or Modulo(5, 1))

with Predicates being able to be chained together to form more complicated logical expressions.

Using Implicit Conversions to Avoid Pollution

In object-oriented programming, if I had some difficult logic which I wanted to pass around or call, associated with a single class from a particular hierarchy, I could either add it to a companion class adhering to the single-responsibility philosophy, or tack it onto the object itself. The latter was generally discouraged unless it needed access to private state or we were using delegation. That said, if several functions were needed, the companion class's interface might grow and become a helper class (and boy, did some people love to grow them). As the libraries and code base matured, combining predicate expressions became a hideously complex, dangerous and blame-ridden process. In short, the code often became a maintenance nightmare. I want to state for the record that this wasn't an innate problem of imperative or object-oriented programming, but rather of how people were allowed to program in it. While OO design has the strategy pattern, it is only as good as it is enforced. My implementation of Predicates, yielding to a somewhat imperative flair (the factory methods are instance methods), does not protect against misuse. Some people argue that Scala isn't functional enough, that it doesn't enforce immutability, and in some ways this is true. It's an unfortunate side-effect (love puns) of being backwards compatible with Java. I wanted to avoid the kinds of problems I faced previously with a strictly OO code base in as general a way as possible. The implicit conversion hides the transformed class behind a restricted interface, a la the adapter pattern, much like Scala does with anonymous functions. I reasoned that whatever crud might be added to a class would be hidden by this interface and thus would not pollute the predicate. Add to this the ability to compose functions to create different types of predicates from an initial predicate, and we gained a rather large leg up on bad code production. Functional composition has got to be one of the best things Scala stole from functional programming.

What Else?

There was only one other thing to add to the "predicates" portion of this library: an "is" function. The idea for this function was stolen from Data.Function.Predicate of Haskell.
At first I created all 22 versions with the same exact signature as Haskell's "is", but then I realized Scala's eager evaluation caused a type mismatch that couldn't easily be overcome without added boilerplate. Since "is" was designed to reduce boilerplate while at the same time increasing readability, the simple solution was to create an implicit conversion to an anonymous class with a single "is" method accepting a predicate. Thus written, it can be used as follows:

myStringList.filter(_.length is LessThan(0))

which is very readable and maps an anonymous function of type A => B to A => Boolean. The downside is that it creates a new object at each invocation.

Future Work

Conditional functions are hard to design well, yet at the same time they are the bedrock of computational logic gates. Partial functions can be used to create predicated logic, but in a manner that is not transparent to the outside observer. There's an ::orElse function for a reason (a good one too), which is used more for case coverage than for case completeness. In fact, the existence of the ::lift member function showcases that a "catch all" logic path is not required, unlike the standard "if-else" statement. Hence, PartialFunction is not a good choice for predicated applications. After I fleshed out some simple logic composition functions to work with Predicates, I wanted to add a structure for composing more complicated predicated expressions. That is, a function which included a predicate to control flow, and which was both composable and extendable. Adding in conditional support for predicated application, such that a Predicate expression controls the program flow:

case class ApplyEither[A, B](pred: Predicate[A], thatTrue: A => B, thatFalse: A => B) extends (A => B) {
  def apply(arg0: A) = if (pred(arg0)) thatTrue(arg0) else thatFalse(arg0)
}

was easy, following a very simple imperative model. Expanding upon that to composition:

case class ComposeEither[A, B, C](pred: Predicate[B], that: A => B, thatTrue: B => C, thatFalse: B => C) extends (A => C) {
  def apply(arg0: A) = {
    val out = that(arg0)
    if (pred(out)) thatTrue(out) else thatFalse(out)
  }
}

also proved to be easy. It was so easy that I wrote more scripts to generate the code for 22 versions of an "ApplyIf", "ApplyEither", "ComposeIf", "ComposeEither", "AndThenIf" and "AndThenEither". Then I expanded on the code I had written so that they all extended the same trait, thus allowing one to be used within another. There was only one big problem with it all: it created an inflexible structure that couldn't be traversed easily without expanding the interface of the various predicated function classes. The question "what are all the values down all potential paths?" required a new method. The question "what function did I use?" required yet another. And so on and so on, until the interface of every class began to look like the dreaded helper class. This was a classic example of the expression problem. The right approach, in hindsight, was to create a tree-like structure to express computation tree logic: something that held the arrangement of the functions and predicates, accompanied by a distinctly separate set of functions to traverse that tree. I say "in hindsight" because I first created all the classes and then deleted them after I started feeling the pain of all the different questions I couldn't answer without tacking on yet another method. This is something coming in the future.
Personally, I'd like to wait for a proper implementation of an HList that doesn't suffer from type erasure or require experimental compiler flags, but in the meantime Miles Sabin has already proved it can be done with his incredible library Shapeless. Now all I need to do is wait for the compiler changes it requires to go mainstream.

Reference: Scala: Working with Predicates from our JCG partner Owein Reese at the Statically Typed blog.

Launching and Debugging Tomcat from Eclipse without complex plugins

Modern IDEs like Eclipse provide various plugins to ease web development. However, I believe that starting Tomcat as a "normal" Java application still provides the best debugging experience. Most of the time this is because these tools launch Tomcat, or any other servlet container, as an external process and then attach a remote debugger to it. While you're still able to set breakpoints and inspect variables, other features like hot code replacement don't work that well. Therefore I prefer to start my Tomcat just like any other Java application from within Eclipse. Here's how it works. This article addresses experienced Eclipse users: you should already know how to create projects, change their build path and run classes. If you need any help, feel free to leave a comment or contact me. We'll add Tomcat as an additional Eclipse project, so that paths and everything else remain platform independent. (I even keep this project in our SVN so that everybody works with the same setup.)

Step 1 – Create a new Java project named "Tomcat7".

Step 2 – Remove the "src" source folder.

Step 3 – Download Tomcat (Core version) and unzip it into our newly created project.

Step 4 – If you haven't already, create a new Test project which contains your sources (servlets, JSP pages, JSF pages, ...). Make sure you add the required libraries to the build path of the project.

Step 5.1 – Create a run configuration. Select our Test project as base and set org.apache.catalina.startup.Bootstrap as the main class.

Step 5.2 – Optionally specify larger heap settings as VM arguments. Important: select the "Tomcat7" project as working directory (click the "Workspace" button below the entry field).

Step 5.3 – Add bootstrap.jar and tomcat-juli.jar from the Tomcat7/bin directory as bootstrap classpath entries. Add everything in Tomcat7/lib as user entries. Make sure the Test project and all other classpath entries (i.e. Maven dependencies) are below those.

Now you can "Apply" and start Tomcat by hitting "Debug". After a few seconds (check the console output) you can go to http://localhost:8080/examples/ and check out the examples provided by Tomcat.

Step 6 – Add a demo servlet: go to our Test project, add a new package called "demo" and a new servlet called "TestServlet". Be creative with some test output – like I was... (a minimal version is sketched below).
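Since the post leaves the servlet's content to the reader, here is one possible minimal TestServlet; the output text is just an invented placeholder, and the javax.servlet API matches Tomcat 7:

package demo;

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class TestServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Write a simple text response so we can verify the mapping works.
        resp.setContentType("text/plain");
        resp.getWriter().println("Hello from TestServlet - debugging works!");
    }
}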
Step 7 – Change web.xml: go to the web.xml of the examples context and add our servlet. Below all servlets you also have to add a servlet-mapping. Together this looks like:

<servlet>
    <servlet-name>test</servlet-name>
    <servlet-class>demo.TestServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>test</servlet-name>
    <url-pattern>/demo/test</url-pattern>
</servlet-mapping>

Hit save and restart Tomcat. You should now see your debug output by surfing to http://localhost:8080/examples/demo/test. You can now set breakpoints, change the output (thanks to hot code replacement) and do all the other fun stuff you do in other debugging sessions.

Hint: keeping your JSP/JSF files as well as your web.xml and other resources in another project already? Just create a little ANT script which copies them into the webapps folder of the Tomcat, and you get re-deployment with a single mouse click. Even better (this is what we do): you can modify/override the ResourceResolver of JSF, and therefore simply use the classloader to resolve your .xhtml files. This way you can keep your Java sources and your JSF sources close to each other. I will cover that in another post; the fun stuff starts when running multi-tenant systems with custom JSF files per tenant. The JSF implementation of Sun/Oracle has some nice gotchas built in for that case ;-)

Reference: Launching and Debugging Tomcat from Eclipse without complex plugins from our JCG partner Andreas Haufler at the Andy's Software Engineering Corner blog.

Hibernate cache levels tutorial

One of the common problems people have when starting to use Hibernate is performance: if you don't have much experience with Hibernate, you will find how quickly your application becomes slow. If you enable SQL traces, you will see how many queries are sent to the database that could be avoided with a little Hibernate knowledge. In this post I am going to explain how to use the Hibernate query cache to reduce the amount of traffic between your application and the database.

Hibernate offers two caching levels:

- The first-level cache is the session cache. Objects are cached within the current session, and they are only alive until the session is closed.
- The second-level cache exists as long as the session factory is alive. Keep in mind that in Hibernate the second-level cache is not a tree of objects; object instances are not cached, instead it stores attribute values.

After this brief introduction (so brief, I know) to the Hibernate cache, let's see what the query cache is and how it is interrelated with the second-level cache. The query cache is responsible for caching the combination of a query and the values provided as parameters as key, and the list of identifiers of the objects returned by the query execution as value. Note that using the query cache requires a second-level cache too, because when a query result is fetched from the cache (that is, a list of identifiers), Hibernate loads the objects from the second level using those cached identifiers. To sum up, as a conceptual schema, given the query "from Country where population > :number", after the first execution the Hibernate caches would contain the following fictional values (with the number parameter set to 1000):

L2 Cache
[
  id:1, {name='Spain', population=1000, ...}
  id:2, {name='Germany', population=2000, ...}
  ...
]

QueryCache
[
  {"from Country where population > :number", 1000} -> {id:2}
]

So before we can use the query cache, we need to configure the second-level cache. First of all you must decide which cache provider you are going to use. For this example Ehcache is chosen, but refer to the Hibernate documentation for the complete list of supported providers. To configure the second-level cache, set the following Hibernate properties:

hibernate.cache.provider_class = org.hibernate.cache.EhCacheProvider
hibernate.cache.use_structured_entries = true
hibernate.cache.use_second_level_cache = true

And if you are using the annotation approach, annotate cacheable entities with:

@Cacheable
@Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)

Note that in this case the cache concurrency strategy is NONSTRICT_READ_WRITE, but depending on the cache provider, other strategies can be used, like TRANSACTIONAL, READ_ONLY, ...; take a look at the cache section of the Hibernate documentation to choose the one that best fits your requirements. Finally, add the Ehcache dependencies:

<dependency>
    <groupId>net.sf.ehcache</groupId>
    <artifactId>ehcache-core</artifactId>
    <version>2.5.0</version>
</dependency>
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-ehcache</artifactId>
    <version>3.6.0.Final</version>
</dependency>

Now the second-level cache is configured, but not yet the query cache; anyway, we are not far from our goal. Set the hibernate.cache.use_query_cache property to true, and for each cacheable query call the setCacheable method during query creation:

List<Country> list = session.createQuery("from Country where population > 1000").setCacheable(true).list();
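One handy way to convince yourself that the query cache is actually being hit — this is not part of the original article, just a sketch using Hibernate's standard Statistics API — is to enable statistics on the session factory and compare the hit/miss counters around the repeated query:

import org.hibernate.SessionFactory;
import org.hibernate.stat.Statistics;

public class CacheStatsDemo {

    /**
     * Prints query-cache and second-level-cache counters.
     * Statistics must be enabled (here done programmatically; the
     * hibernate.generate_statistics property works as well).
     */
    public static void printCacheStats(final SessionFactory sessionFactory) {
        final Statistics stats = sessionFactory.getStatistics();
        stats.setStatisticsEnabled(true);
        System.out.println("Query cache hits:   " + stats.getQueryCacheHitCount());
        System.out.println("Query cache misses: " + stats.getQueryCacheMissCount());
        System.out.println("L2 cache hits:      " + stats.getSecondLevelCacheHitCount());
        System.out.println("L2 cache misses:    " + stats.getSecondLevelCacheMissCount());
    }
}

On the first execution you would expect a query-cache miss; on subsequent executions a hit, with the entities themselves served from the second-level cache.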
To make the example more practical, I have uploaded a full query cache example with Spring Framework. To show clearly that the query cache works, I have used a public database hosted at ensembl.org. The Ensembl project produces genome databases for vertebrates and other eukaryotic species, and makes this information freely available online. In this example, a query to the dna table is cached. First of all, the Hibernate configuration:

@Configuration
public class HibernateConfiguration {

    @Value("#{dataSource}")
    private DataSource dataSource;

    @Bean
    public AnnotationSessionFactoryBean sessionFactoryBean() {
        Properties props = new Properties();
        props.put("hibernate.dialect", EnhancedMySQL5HibernateDialect.class.getName());
        props.put("hibernate.format_sql", "true");
        props.put("hibernate.show_sql", "true");
        props.put("hibernate.cache.provider_class", "org.hibernate.cache.EhCacheProvider");
        props.put("hibernate.cache.use_structured_entries", "true");
        props.put("hibernate.cache.use_query_cache", "true");
        props.put("hibernate.cache.use_second_level_cache", "true");
        props.put("hibernate.hbm2ddl.auto", "validate");

        AnnotationSessionFactoryBean bean = new AnnotationSessionFactoryBean();
        bean.setAnnotatedClasses(new Class[]{Dna.class});
        bean.setHibernateProperties(props);
        bean.setDataSource(this.dataSource);
        bean.setSchemaUpdate(true);
        return bean;
    }
}

It is a simple Hibernate configuration, using the properties explained previously to configure the second-level cache. The entity class represents a sequence of DNA:

@Entity(name = "dna")
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
public class Dna {

    @Id
    private int seq_region_id;

    private String sequence;

    public int getSeq_region_id() {
        return seq_region_id;
    }

    public void setSeq_region_id(int seq_region_id) {
        this.seq_region_id = seq_region_id;
    }

    @Column
    public String getSequence() {
        return sequence;
    }

    public void setSequence(String sequence) {
        this.sequence = sequence;
    }
}

To try the query cache, we are going to implement a test where the same query is executed multiple times:

@Autowired
private SessionFactory sessionFactory;

@Test
public void fiftyFirstDnaSequenceShouldBeReturnedAndCached() throws Exception {
    for (int i = 0; i < 5; i++) {
        Session session = sessionFactory.openSession();
        session.beginTransaction();

        Time elapsedTime = new Time("findDna" + i);

        List<Dna> list = session.createQuery("from dna")
                .setFirstResult(0).setMaxResults(50).setCacheable(true).list();

        session.getTransaction().commit();
        session.close();
        elapsedTime.miliseconds(System.out);

        for (Dna dna : list) {
            System.out.println(dna);
        }
    }
}

We return the first fifty DNA sequences, and if you execute the test you will see that the elapsed time between creating the query and committing the transaction is printed. As you would expect, only the first iteration takes about 5 seconds to get all the data; the other iterations take only milliseconds. The for loop after each query prints the object identifiers to the console. If you look carefully, none of these identifiers is repeated during the whole execution. This fact just goes to show that the Hibernate cache does not save objects but property values, and the object itself is created anew each time. One last note: remember that Hibernate does not cache associations by default.

Now, after writing a query, think about whether it will return static data and whether it will be executed often. If that is the case, the query cache is your friend for making Hibernate applications run faster.

Download Code

Reference: Hibernate cache levels tutorial from our JCG partner Alex Soto at the One Jar To Rule Them All blog.

GWT MVP made simple

GWT Model-View-Presenter is a design pattern for large-scale application development. Being derived from MVC, it divides view from logic and helps to create well-structured, easily testable code. To help lazy developers like me, I investigated how to reduce the number of classes and interfaces to write when using declarative UIs.

Classic MVP

You know how to post a link on Facebook? Recently I had to create this functionality for a little GWT travelling app. You can enter a URL, which is then fetched and parsed. You can select one of the images from the page, review the text and finally store the link. Now, how to properly set this up in MVP? First, you create an abstract interface resembling the view:

interface Display {
    HasValue<String> getUrl();
    void showResult();
    HasValue<String> getName();
    HasClickHandlers getPrevImage();
    HasClickHandlers getNextImage();
    void setImageUrl(String url);
    HasHTML getText();
    HasClickHandlers getSave();
}

It makes use of interfaces that GWT components implement, which give some access to their state and functionality. During tests you can easily implement this interface without referring to GWT internals. Also, the view implementation may be changed without influencing the deeper logic. The implementation is straightforward, shown here with declared UI fields:

class LinkView implements Display {
    @UiField TextBox url;
    @UiField Label name;
    @UiField VerticalPanel result;
    @UiField Anchor prevImage;
    @UiField Anchor nextImage;
    @UiField Image image;
    @UiField HTML text;
    @UiField Button save;

    public HasValue<String> getUrl() {
        return url;
    }

    public void showResult() {
        result.setVisible(true);
    }

    // ... and so on ...
}

The presenter then accesses the view using the interface, which by convention is written inside the presenter class:

class LinkPresenter {
    interface Display { ... }

    public LinkPresenter(final Display display) {
        display.getUrl().addValueChangeHandler(new ValueChangeHandler<String>() {
            @Override
            public void onValueChange(ValueChangeEvent<String> event) {
                Page page = parseLink(display.getUrl().getValue());
                display.getName().setValue(page.getTitle());
                // ...
                display.showResult();
            }
        });
    }

    // ... and so on ...
}

So here we are: using MVP, you can structure your code very well and make it easily readable.

The simplification

The payoff is three types for each screen or component, and three files to change whenever the UI is redefined – not counting the ui.xml file for the view declaration. For a lazy man like me, these are too many. And if you take a look at the view implementation, it is obvious how to simplify this: use the view declaration (*.ui.xml) as the view and inject the UI elements directly into the presenter:

class LinkPresenter {
    @UiField HasValue<String> url;
    @UiField HasValue<String> name;
    @UiField VerticalPanel result;
    @UiField HasClickHandlers prevImage;
    @UiField HasClickHandlers nextImage;
    @UiField HasUrl image;
    @UiField HasHTML text;
    @UiField HasClickHandlers save;

    public LinkPresenter() {
        url.addValueChangeHandler(new ValueChangeHandler<String>() {
            @Override
            public void onValueChange(ValueChangeEvent<String> event) {
                Page page = parseLink(url.getValue());
                name.setValue(page.getTitle());
                // ...
                result.setVisible(true);
            }
        });
    }

    // ... and so on ...
}

Since it is possible to declare the injected elements using their interfaces, this presenter has a lot of the advantages of the full-fledged MVP presenter: you can test it by setting implementing components (see below), and you can change the view's implementation easily.
But now you have it all in one class and one view.ui.xml file, and you can apply structural changes much more simply.

Making UI elements abstract

TextBox implements HasValue<String>. This is simple. But what about properties of UI elements that are not accessible through interfaces? An example you may already have recognized is the VerticalPanel named result in the above code and its method setVisible(), which unfortunately is implemented in the UIObject base class. So no interface is available that could e.g. be implemented at test time. For the sake of being able to switch view implementations, it would be better to inject a ComplexPanel, but even that cannot be instantiated at test time. The only way out in this case is to create a new interface, say:

interface Visible {
    void setVisible(boolean visible);
    boolean isVisible();
}

and subclass the interesting UI components, implementing the relevant interfaces:

package de.joergviola.gwt.tools;

class VisibleVerticalPanel extends VerticalPanel implements Visible {}

This seems tedious and sub-optimal. Nonetheless, it has to be done only per component, and not per view as in the full-fledged MVP described above. Wait – how to use self-made components in UiBinder templates? That is simple:

<ui:UiBinder xmlns:ui='urn:ui:com.google.gwt.uibinder'
             xmlns:g="urn:import:com.google.gwt.user.client.ui"
             xmlns:t="urn:import:de.joergviola.gwt.tools">
    <g:VerticalPanel width="100%">
        <g:TextBox styleName="big" ui:field="url" width="90%"/>
        <t:VisibleVerticalPanel ui:field="result" visible="false" width="100%">
        </t:VisibleVerticalPanel>
    </g:VerticalPanel>
</ui:UiBinder>

Declaring handlers

The standard way of declaring (click-)handlers is very convenient:

@UiHandler("login")
public void login(ClickEvent event) {
    srv.login(username.getValue(), password.getValue());
}

In the simplified MVP approach, this code would reside in the presenter. But the ClickEvent parameter is a view component and can e.g. not be instantiated at test time. On the other hand, it cannot be eliminated from the signature because UiBinder requires an event parameter. So unfortunately one has to stick to registering ClickHandlers manually (as one has to do in full MVP anyway):

public void initWidget() {
    // ...
    login.addClickHandler(new ClickHandler() {
        @Override
        public void onClick(ClickEvent event) {
            login();
        }
    });
    // ...
}

public void login() {
    srv.login(username.getValue(), password.getValue());
}

Testing

Making your app testable is one of the main goals of introducing MVP. GwtTestCase is able to execute tests in the container environment but requires some startup time. In TDD, it is desirable to have very fast-running tests that can be applied after every single change without losing context. So MVP is designed to let you test all your code in a standard JVM. In standard MVP, you create implementations of the view interfaces. In this simplified approach, it is sufficient to create implementations at the component interface level, like the following:

class Value<T> implements HasValue<T> {

    private T value;
    private final List<ValueChangeHandler<T>> handlers = new ArrayList<ValueChangeHandler<T>>();

    @Override
    public HandlerRegistration addValueChangeHandler(ValueChangeHandler<T> handler) {
        handlers.add(handler);
        return null;
    }

    @Override
    public void fireEvent(GwtEvent<?> event) {
        for (ValueChangeHandler<T> handler : handlers) {
            handler.onValueChange((ValueChangeEvent<T>) event);
        }
    }

    @Override
    public T getValue() {
        return value;
    }

    @Override
    public void setValue(T value) {
        this.value = value;
    }

    @Override
    public void setValue(T value, boolean fireEvents) {
        setValue(value);
        if (fireEvents) {
            ValueChangeEvent.fire(this, value);
        }
    }
}

As usual, you have to inject this component into the presenter-under-test. Though in principle you could create a setter for the component, I stick to the usual trick: make the component package-protected, put the test into the same package (but of course a different project folder) as the presenter, and set the component directly.
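To illustrate that injection, a presenter test might look roughly like the sketch below. It assumes JUnit, the package-protected url and name fields from above, and a hypothetical init() method on the presenter that registers its handlers after the fields are set (the constructor wiring shown earlier would need to move there); the expected title is invented for the example:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class LinkPresenterTest {

    @Test
    public void parsesTitleWhenUrlChanges() {
        LinkPresenter presenter = new LinkPresenter();
        // Replace the UiBinder-injected widgets with plain JVM stubs.
        Value<String> url = new Value<String>();
        Value<String> name = new Value<String>();
        presenter.url = url;
        presenter.name = name;
        presenter.init(); // hypothetical: registers the value-change handler

        // fireEvents=true triggers the presenter's ValueChangeHandler,
        // which parses the page and fills in the name.
        url.setValue("http://example.com", true);

        // The expected value depends on what parseLink() returns.
        assertEquals("Example Domain", name.getValue());
    }
}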
What do you win?

You get code structured as cleanly as in full MVP, with far fewer classes and much less boilerplate code. Some situations require utility classes for components and their interfaces, but as time goes by you build an environment which is really easy to understand, test and extend. I'm curious: tell me your experiences!

Reference: GWT MVP made simple from our JCG partner Joerg Viola at the Joerg Viola blog.

Code reviews in the 21st Century

There's an old adage that goes something like: "Do not talk about religion or politics." Why? Because these subjects are full of strong opinions but thin on objective answers. One person's certainty is another person's skepticism; someone else's common sense just appears as an a priori bias to those who see matters differently. Sadly, conversations on these controversial subjects can generate more heat than light. All too often people get so wound up that they forget that the outcome of their "discussion" has no bearing on their life expectancy, their salary, their chances of winning X Factor, getting that dream date, winning the lottery, finding a cure for climate change, or whatever it is they regard as important!

Similarly, in the world of software engineering, code reviews can end up as pointless engagements of conflict. Developers can bicker over silly little things, offend each other, and occasionally catch a bug that probably would have been caught in QA anyway – that conflict-free zone around the corner! Now don't get me wrong, there are perfectly valid reasons why you may think code reviews are a good idea for your project:

- Catching bugs sooner means less cost to your project. You don't have to release a fix patch because the bug has been caught in the development phase – yippee!
- Code becomes more maintainable. That crazy 200-line method that Jonny was writing with a hangover has been caught before it had the chance to make itself at home deep in your code base.
- Knowledge is spread across your team. There are no longer large blocks of code that only one person knows about. And we all know, when that one person talks about taking a two-month holiday, everyone panics!
- Developers make more of an effort. If a developer knows someone else is going to pass judgement on his work, he's more likely to put in that line of Javadoc clarifying when an exception will be thrown.

However, it would be naive to think that code reviews don't cause problems. In fact, they cause so many problems that many 21st-century projects don't do them. I think they have a place, but there needs to be some thought about how and when they are done, so that they are beneficial rather than a nuisance. Here are some guidelines...

1. Never forget TDD

Ensure you have tested your code before you ask someone else to look at it. Catch your own bugs and deal with them before someone else does.

2. Automate as much as you can

There are several very good tools for Java, such as PMD, Checkstyle, FindBugs, etc. What is the point in getting a human to spend time reviewing code when these tools can very quickly identify many things the human would waste time moaning about? I am going to say that again: what is the point in getting a human to spend time reviewing code when these tools can very quickly identify many things the human would waste time moaning about? When using these tools, it's important to use a common set of rules for them. This ensures your code is at some agreed standard, and much of what used to happen in an old-fashioned 20th-century code review won't need to happen. Ideally, these tools should run on every check-in of code via a hook in your version control system. If code is really bad, it will be prevented from being checked in. Billy the developer is prevented from checking in the rubbish he wrote (when he had a killer migraine) that he is too embarrassed to look at. You are actually doing him a favour, not just your team.
3. Respect design
In some of the earlier Java projects I worked on, the reviews happened way too late. You were reviewing code when the actual design was flawed. A design pattern was misunderstood, some nasty dependencies were introduced or a developer just went way off on a tangent. The review would bring up these points. The proverbial retort was: ‘This is a code review, not a design review!’. A mess inevitably ensued. To stop these problems we changed things so that anyone asked to review code would also be involved – in some way – in either the design or the design review. In fact, we got much more bang for the buck from design reviews than from code reviews. Designs were of a much higher quality and those late nasty surprises stopped.

4. Agree a style guide (and a dictionary)
Even with the automated tooling (such as Checkstyle, FindBugs etc.), to avoid unnecessary conflict on style, your project should have a style guide. Stick to the industry-standard Java conventions where possible. Try to have a ‘dictionary’ for all the concepts your project introduces. This means that when code refers to them, it’s easier to check that the usage and context are correct.

5. Get the right tooling
If all your developers are using Eclipse (and happy using it), something like Jupiter makes sense. You can navigate your way through code, debug code and essentially leverage everything the Eclipse IDE does to make your life easier when reviewing code. However, if everyone is not on the same IDE (or the IDE is not making your life easier), consider something like Review Board.

6. Remember every project is different
You may have done something in a previous project that worked. But remember, every project is different. The other project had a certain architecture (it may have been highly concurrent or highly distributed), had a certain culture (everyone may have enjoyed using Eclipse) and used certain tools (Maven or Ant). Does the new one tick the same boxes? Remember, different things work for different projects.

7. Remember give and take
When reviewing, be positive, be meticulous but do not be pedantic. Will tiny trivial things that get on your nerves make a project fail or cost your company money? Probably not. Put things in perspective. Remember to be open to other ideas and to change your own mind rather than getting hung up on changing someone else’s.

8. Be buddies
In my experience, what I call ‘buddy reviews’ (others call them ‘over the shoulder’ reviews) have worked really well. A buddy review consists of meeting up with another team member informally every day or two and having a quick glance (5 – 10 mins) at each other’s code at your desk or theirs. This approach means:

- Problems are caught very early.
- You are always up to speed as to what is going on.
- Reviews are always very short because you are only looking at new code since the last catch-up.
- Because the setting is informal, there is no nervous tension. They’re fun!
- You can exchange ideas – regularly.

When tech leading, buddy reviewing your team members is a great way of seeing if anyone on the team is running into trouble early rather than late. You can help people and get an idea of everyone’s progress all at the same time. And because of the regular nature of buddy reviews, they become habitual and actually get done. Something we can’t say for many other 21st century code reviews!

In summary, if your project is going to engage in code reviews, they should be fast, effective and not waste people’s time.
As argued in this post, it is really important to think about how they are organised to ensure that this does not happen. ‘Til the next time – take care of yourselves. Reference: Code reviews in the 21st Century from our JCG partner Alex Staveley at Dublin’s Tech Blog....

Java 7: Copy and Move Files and Directories

This post is a continuation of my series on the Java 7 java.nio.file package, this time covering the copying and moving of files and complete directory trees. If you have ever been frustrated by Java’s lack of copy and move methods, then read on, for relief is at hand. Included in the coverage is the very useful Files.walkFileTree method. Before we dive into the main content, however, some background information is needed.

Path Objects

Path objects represent a sequence of directories that may or may not include a file. There are three ways to construct a Path object:

- FileSystems.getDefault().getPath(String first, String... more)
- Paths.get(String path, String... more), a convenience method that calls FileSystems.getDefault().getPath
- Calling the toPath method on a java.io.File object

From this point forward in all our examples, we will use the Paths.get method. Here are some examples of creating Path objects:

//Path string would be "/foo"
Paths.get("/foo");
//Path string "/foo/bar"
Paths.get("/foo", "bar");

To manipulate Path objects there are the Path.resolve and Path.relativize methods. Here is an example of using Path.resolve:

//This is our base path "/foo"
Path base = Paths.get("/foo");
//filePath is "/foo/bar/file.txt" while base is still "/foo"
Path filePath = base.resolve("bar/file.txt");

Using the Path.resolve method will append the given String or Path object to the end of the calling Path, unless the given String or Path represents an absolute path, in which case the given path is returned. For example:

Path path = Paths.get("/foo");
//resolved Path string is "/usr/local"
Path resolved = path.resolve("/usr/local");

The Path.relativize method works in the opposite fashion, returning a new relative path that, if resolved against the calling Path, would result in the same Path string. Here’s an example:

// base Path string "/usr"
Path base = Paths.get("/usr");
// foo Path string "/usr/foo"
Path foo = base.resolve("foo");
// bar Path string "/usr/foo/bar"
Path bar = foo.resolve("bar");
// relative Path string "foo/bar"
Path relative = base.relativize(bar);

Another helpful method on the Path class is Path.getFileName, which returns the name of the farthest element represented by this Path object, where that name may be an actual file or just a directory. For example:

//assume filePath constructed elsewhere as "/home/user/info.txt"
//returns Path with path string "info.txt"
filePath.getFileName();

//now assume dirPath constructed elsewhere as "/home/user/Downloads"
//returns Path with path string "Downloads"
dirPath.getFileName();

In the next section we are going to take a look at how we can use Path.resolve and Path.relativize in conjunction with the Files class for copying and moving files.

Files Class

The Files class consists of static methods that use Path objects to work with files and directories. While there are over 50 methods in the Files class, at this point we are only going to discuss the copy and move methods.

Copy A File

To copy one file to another you would use the (any guesses on the name?) Files.copy method – copy(Path source, Path target, CopyOption... options). Very concise, and no anonymous inner classes – are we sure it’s Java? The options argument consists of enums that specify how the file should be copied.
(There are actually two different enum classes, LinkOption and StandardCopyOption, but both implement the CopyOption interface.) Here is the list of available options for Files.copy:

- LinkOption.NOFOLLOW_LINKS
- StandardCopyOption.COPY_ATTRIBUTES
- StandardCopyOption.REPLACE_EXISTING

There is also a StandardCopyOption.ATOMIC_MOVE enum, but if this option is specified, an UnsupportedOperationException is thrown. If no options are specified, the default is to throw an error if the target file exists or is a symbolic link. If the path object is a directory, then an empty directory is created in the target location. (Wait a minute! Didn’t it say in the introduction that we could copy the entire contents of a directory? The answer is still yes and that is coming!) Here’s an example of copying one file to another with Path objects, using the Path.resolve and Path.relativize methods:

Path sourcePath ...
Path basePath ...
Path targetPath ...

Files.copy(sourcePath, targetPath.resolve(basePath.relativize(sourcePath)));

Move A File

Moving a file is equally straightforward – move(Path source, Path target, CopyOption... options). The available StandardCopyOption enums are:

- StandardCopyOption.REPLACE_EXISTING
- StandardCopyOption.ATOMIC_MOVE

If Files.move is called with StandardCopyOption.COPY_ATTRIBUTES, an UnsupportedOperationException is thrown. Files.move can be called on an empty directory, or, if the operation does not require moving a directory’s contents (renaming, for example), the call will succeed; otherwise it will throw an IOException (we’ll see in the following section how to move non-empty directories). The default is to throw an exception if the target file already exists. If the source is a symbolic link, then the link itself is moved, not the target of the link. Here’s an example of Files.move, again tying in the Path.relativize and Path.resolve methods:

Path sourcePath ...
Path basePath ...
Path targetPath ...

Files.move(sourcePath, targetPath.resolve(basePath.relativize(sourcePath)));

Copying and Moving Directories

One of the more interesting and useful methods found in the Files class is Files.walkFileTree. The walkFileTree method performs a depth-first traversal of a file tree. There are two signatures:

- walkFileTree(Path start, Set<FileVisitOption> options, int maxDepth, FileVisitor<? super Path> visitor)
- walkFileTree(Path start, FileVisitor<? super Path> visitor)

The second variant calls the first with EnumSet.noneOf(FileVisitOption.class) and Integer.MAX_VALUE. As of this writing, there is only one file visit option – FOLLOW_LINKS. The FileVisitor is an interface that has four methods defined:

- preVisitDirectory(T dir, BasicFileAttributes attrs): called for a directory before its entries are traversed.
- visitFile(T file, BasicFileAttributes attrs): called for a file in the directory.
- postVisitDirectory(T dir, IOException exc): called only after all files and sub-directories have been traversed.
- visitFileFailed(T file, IOException exc): called for files that could not be visited.

All of the methods return one of the four possible FileVisitResult enums:

- FileVisitResult.CONTINUE
- FileVisitResult.SKIP_SIBLINGS (continue without traversing siblings of the directory or file)
- FileVisitResult.SKIP_SUBTREE (continue without traversing the contents of the directory)
- FileVisitResult.TERMINATE

To make life easier there is a default implementation of the FileVisitor, SimpleFileVisitor (it validates that arguments are not null and returns FileVisitResult.CONTINUE), that can be subclassed so you can override just the methods you need to work with.
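Before moving on to the copy example, here is a minimal, self-contained sketch (the starting directory is hypothetical; substitute one that exists on your machine) of driving Files.walkFileTree with a SimpleFileVisitor that simply prints every file it visits:

import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

public class PrintTreeDemo {
    public static void main(String[] args) throws IOException {
        // Hypothetical starting point for the traversal
        Path start = Paths.get("/tmp/demo");
        Files.walkFileTree(start, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
                System.out.println(file); // print each regular file encountered
                return FileVisitResult.CONTINUE;
            }
        });
    }
}

Only visitFile is overridden; the SimpleFileVisitor defaults handle the directory callbacks by returning FileVisitResult.CONTINUE.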
Let’s take a look at a basic example of copying an entire directory structure.

Copying A Directory Tree Example

Here is a class that extends SimpleFileVisitor, used for copying a directory tree (some details left out for clarity):

public class CopyDirVisitor extends SimpleFileVisitor<Path> {

    private Path fromPath;
    private Path toPath;
    private StandardCopyOption copyOption = StandardCopyOption.REPLACE_EXISTING;
    ....

    @Override
    public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {
        Path targetPath = toPath.resolve(fromPath.relativize(dir));
        if (!Files.exists(targetPath)) {
            Files.createDirectory(targetPath);
        }
        return FileVisitResult.CONTINUE;
    }

    @Override
    public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
        Files.copy(file, toPath.resolve(fromPath.relativize(file)), copyOption);
        return FileVisitResult.CONTINUE;
    }
}

In preVisitDirectory, each directory will be created in the target, ‘toPath’, as each directory from the source, ‘fromPath’, is traversed. Here we can see the power of the Path object with respect to working with directories and files. As the code moves deeper into the directory structure, the correct Path objects are constructed simply by calling relativize and resolve on the fromPath and toPath objects, respectively. At no point do we need to be aware of where we are in the directory tree, and as a result no cumbersome StringBuilder manipulations are needed to create the correct paths. In visitFile, the Files.copy method is used to copy the file from the source directory to the target directory. Next is a simple example of deleting an entire directory tree.

Deleting A Directory Tree Example

In this example SimpleFileVisitor has been subclassed for deleting a directory structure:

public class DeleteDirVisitor extends SimpleFileVisitor<Path> {

    @Override
    public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
        Files.delete(file);
        return FileVisitResult.CONTINUE;
    }

    @Override
    public FileVisitResult postVisitDirectory(Path dir, IOException exc) throws IOException {
        if (exc == null) {
            Files.delete(dir);
            return FileVisitResult.CONTINUE;
        }
        throw exc;
    }
}

As you can see, deleting is a very simple operation: simply delete each file as you find it, then delete the directory on exit.

Combining Files.walkFileTree with Google Guava

The previous two examples, although useful, were very ‘vanilla’. Let’s take a look at two more examples that are a little more creative, combining the Google Guava Function and Predicate interfaces.

public class FunctionVisitor extends SimpleFileVisitor<Path> {

    Function<Path, FileVisitResult> pathFunction;

    public FunctionVisitor(Function<Path, FileVisitResult> pathFunction) {
        this.pathFunction = pathFunction;
    }

    @Override
    public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
        return pathFunction.apply(file);
    }
}

In this very simple example, we subclass SimpleFileVisitor to take a Function object as a constructor parameter and, as the directory structure is traversed, apply the function to each file.
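As a usage sketch (this assumes Guava on the classpath and the FunctionVisitor class above in the same package; the starting path is hypothetical), here is how a Function could be applied across a tree to print every file name:

import com.google.common.base.Function;
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class FunctionVisitorDemo {
    public static void main(String[] args) throws IOException {
        // A Guava Function that prints the file name and continues the traversal
        Function<Path, FileVisitResult> printFunction = new Function<Path, FileVisitResult>() {
            @Override
            public FileVisitResult apply(Path file) {
                System.out.println(file.getFileName());
                return FileVisitResult.CONTINUE;
            }
        };
        // Hypothetical starting directory
        Files.walkFileTree(Paths.get("/tmp/demo"), new FunctionVisitor(printFunction));
    }
}

The same shape works for any per-file action – hashing, counting, collecting matches – without writing a new visitor each time.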
A second Guava-based visitor uses a Predicate to control which parts of the tree are copied:

public class CopyPredicateVisitor extends SimpleFileVisitor<Path> {

    private Path fromPath;
    private Path toPath;
    private Predicate<Path> copyPredicate;

    public CopyPredicateVisitor(Path fromPath, Path toPath, Predicate<Path> copyPredicate) {
        this.fromPath = fromPath;
        this.toPath = toPath;
        this.copyPredicate = copyPredicate;
    }

    @Override
    public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {
        if (copyPredicate.apply(dir)) {
            Path targetPath = toPath.resolve(fromPath.relativize(dir));
            if (!Files.exists(targetPath)) {
                Files.createDirectory(targetPath);
            }
            return FileVisitResult.CONTINUE;
        }
        return FileVisitResult.SKIP_SUBTREE;
    }

    @Override
    public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
        Files.copy(file, toPath.resolve(fromPath.relativize(file)));
        return FileVisitResult.CONTINUE;
    }
}

In this example the CopyPredicateVisitor takes a Predicate object and, based on the returned boolean, parts of the directory structure are not copied. I would like to point out that the previous two examples, usefulness aside, do work in the unit tests for the source code provided with this post.

DirUtils

Building on everything we’ve covered so far, I could not resist the opportunity to create a utility class, DirUtils, as an abstraction for working with directories that provides the following methods:

//deletes all files but leaves the directory tree in place
DirUtils.clean(Path sourcePath);
//completely removes a directory tree
DirUtils.delete(Path sourcePath);
//replicates a directory tree
DirUtils.copy(Path sourcePath, Path targetPath);
//not a true move but performs a copy then a delete of a directory tree
DirUtils.move(Path sourcePath, Path targetPath);
//apply the function to all files visited
DirUtils.apply(Path sourcePath, Path targetPath, Function function);

While I wouldn’t go as far as to say it’s production ready, it was fun to write.

Conclusion

That wraps up the new copy and move functionality provided by the java.nio.file package. I personally think it’s very useful and will take much of the pain out of working with files in Java. There’s much more to cover – working with symbolic links, stream copy methods, DirectoryStreams etc. – so be sure to stick around. Thanks for your time. As always, comments and suggestions are welcomed. Reference: What’s New In Java 7: Copy and Move Files and Directories from our JCG partner Bill Bejeck at the Random Thoughts On Coding blog....

Sneak peak at Java EE 7 – Multitenant Examples with EclipseLink

The Aquarium is a great source of inspiration and the most recent information about Java EE progress across all relevant specifications and reference implementations. They picked up a presentation by Oracle’s Shaun Smith (blog/twitter) about the status and future of EclipseLink as an open source project. He covers all the new features which are going to be in EclipseLink 2.4, which will be available along with the June Eclipse Juno release. In detail these are REST, NoSQL and multitenancy. (For details refer to the complete slide deck (PDF) from the marsjug event.) I like to see that EclipseLink is still the center of innovation in Java persistence and that they are trying hard to adopt the latest buzz in a timely fashion. Working for a more conservative industry in general, the main new feature I am looking for is multitenancy. What you can guess from the slides is that something in this field should already be working with the latest EclipseLink 2.3.0.

What is Multitenancy going to look like?

Let’s start by looking at what Oracle’s Linda DeMichiel announced at last year’s JavaOne (compare blog post) and also at what the early draft (PDF) of the JPA 2.1 specification has to offer. The easier part is the early draft. Not a single line mentions “Multitenan[t|cy]” in any context. So, this is obviously still a big to-do for further iterations. A bit more could be found in the JavaOne Strategy Keynote (slides 41, 42) and the JavaOne Technical Keynote (PDF) (slide 25). The general Java EE 7 approach will be to have support for separate isolated instances of the same app for different tenants. The mapping should be done by the container and be available to apps in some way. This is all still very vague today, and the only concrete code examples available from the slides refer to two obvious JPA annotations, @Multitenant and @TenantDiscriminatorColumn. Hmm. Doesn’t this look familiar to you?

What is possible today?

It does! EclipseLink (as of 2.3.0 – Indigo) supports shared multitenant tables using tenant discriminator column(s), allowing an application to be re-used for multiple tenants and have all their data co-located. All tenants share the same schema without being aware of one another and can use non-multitenant entity types as usual. But be aware of the fact that this is only one possible approach to multitenancy for data. It is commonly referred to as “dedicated database” because all tenants’ data goes into one single database! The basic principles for the following are:

- application instances handle multiple tenants
- caching has to be isolated for every tenant by JPA

You can look at all the details on a dedicated EclipseLink wiki page. Want to give it a test drive? Let’s start. Prerequisites as usual (NetBeans, GlassFish, MySQL; compare older posts if you need more help). Make sure to have the right EclipseLink dependencies (at least 2.3.0)! Create a new entity via the wizard, set up your datasource and persistence.xml, and call it e.g. Customer.

@Entity
public class Customer implements Serializable {
    //...
}

If you start your app, you will see EclipseLink create the corresponding table in your database. Let’s make this a multitenant entity. Add the following annotations:

@Entity
@Multitenant
@TenantDiscriminatorColumn(name = "companyId", contextProperty = "company-tenant.code")
public class Customer implements Serializable {
    //...
}
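A useful property of this model is that, once the company-tenant.code property is set (as shown in the next section), your queries need no tenant condition of their own; EclipseLink applies the discriminator automatically. A minimal sketch (the repository class, the injected EntityManager and the exact generated SQL are assumptions for illustration):

import java.util.List;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// Hypothetical container-managed bean querying the multitenant unit
@Stateless
public class CustomerRepository {

    @PersistenceContext
    private EntityManager em;

    public List<Customer> findAllForCurrentTenant() {
        // No companyId condition in the JPQL; EclipseLink adds the tenant
        // discriminator behind the scenes, producing SQL roughly like:
        //   SELECT ID, COMPANYID FROM CUSTOMER WHERE (COMPANYID = ?)
        return em.createQuery("SELECT c FROM Customer c", Customer.class)
                 .getResultList();
    }
}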
There are multiple usage options available for how an EclipseLink JPA persistence unit can be used in an application with @Multitenant entity types. Since different tenants will have access to only their own rows, the persistence layer must be configured so that entities from different tenants do not end up in the same cache. If you compare the detailed approaches (dedicated PU, PC per tenant, PU per tenant) in more detail, you see that as of today you end up with two possible options with container-managed injection of either the PC or the PU. Let’s try the simplest thing first.

Dedicated Persistence Unit

In this usage a persistence unit is defined per tenant, and the application/container must request the correct PersistenceContext or PersistenceUnit for its tenant. Each tenant works against its own single persistence unit and nothing is shared. Go with the above example and add the following property to your persistence.xml:

<property name="company-tenant.code" value="TENANT1" />

Give it a try and compare the tables: you now have your companyId column. If you insert some data, it will always be filled with the property value you assigned in the persistence.xml. Use either a @PersistenceContext or a @PersistenceUnit to access your entities. Using this approach you have a shared cache, as usual, for your application.

@PersistenceContext with Shared Cache (Source: S. Smith)

Persistence Context per Tenant

If you don’t want a single tenant per application, you could decide to have a single persistence unit definition in the persistence.xml and a shared persistence unit (EntityManagerFactory and cache) in your application. In this case the tenant context needs to be specified per EntityManager at runtime. You then have a shared cache available for regular entity types, but the @Multitenant types must be protected in the cache. You do this by specifying some properties:

@PersistenceUnit
EntityManagerFactory emf;

Map<String, Object> props = new HashMap<String, Object>();
props.put("company-tenant.code", "TENANT2");
props.put(PersistenceUnitProperties.MULTITENANT_SHARED_EMF, true);
EntityManager em = emf.createEntityManager(props);

Shared @PersistenceUnit per tenant (Source: S. Smith)

Discriminator Approaches

The above examples work with a single tenant discriminator column. You can add the discriminator column to the primary key by specifying a primaryKey attribute like the following:

@TenantDiscriminatorColumn(name = "companyId", contextProperty = "company-tenant.code", primaryKey = true)

It’s also possible to have multiple tenant discriminator columns using multiple tables if you do something like this:

@Entity
@SecondaryTable(name = "TENANTS")
@Multitenant
@TenantDiscriminatorColumns({
    @TenantDiscriminatorColumn(name = "TENANT_ID", contextProperty = "company-tenant.id", length = 20, primaryKey = true),
    @TenantDiscriminatorColumn(name = "TENANT_CODE", contextProperty = "company-tenant.code", discriminatorType = DiscriminatorType.STRING, table = "TENANTS")
})

This leads to a secondary TENANTS table.

Additional Goodies

As always, you can also do the complete configuration within your persistence.xml only. For a reference please look at the already mentioned wiki page. One last thing is of interest: you can also map the tenant discriminator column in your entity. You simply have to make sure it isn’t updated or inserted.

@Basic
@Column(name = "TENANT_ID", insertable = false, updatable = false)
private int tenantId;

public int getTenantId() {
    return tenantId;
}

Looking at the debug output you can see what is happening behind the scenes:

INFO: Getting EntityManager
INFO: Inserting Test Customer
FEIN: INSERT INTO CUSTOMER (ID, TENANT_ID) VALUES (?, ?)
bind => [1, 2]
FEIN: INSERT INTO TENANTS (ID, TENANT_CODE) VALUES (?, ?)
bind => [1, TENANT2]
FEIN: SELECT t0.ID, t1.TENANT_CODE, t0.TENANT_ID, t1.ID FROM CUSTOMER t0, TENANTS t1 WHERE (((t1.ID = t0.ID) AND (t1.TENANT_CODE = ?)) AND (t0.TENANT_ID = ?))
bind => [TENANT2, 2]

Curious for more Java EE 7 and JPA 2.1 goodies? Keep updated with the development status wiki page for the EclipseLink JPA 2.1 project. Reference: Sneak peak at Java EE 7 – Multitenant Examples with EclipseLink from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog....