

Scala function literals

Functions are an important part of the Scala language. Scala functions can have a parameter list and can also have a return type. So the first confusing thing is: what's the difference between a function and a method? Well, the difference is that a method is just a type of function that belongs to a class, a trait or a singleton object. So what's cool about functions in Scala? Well, you can define functions inside functions (which are called local functions) and you can also have anonymous functions, which can be passed to and returned from other functions. This post is about those anonymous functions, which are referred to as function literals.

As stated, one of the cool things about function literals is that you can pass them to other functions. For example, consider the snippet below, where we pass a function to the filter function of a List.

List(1,2,3,4,5).filter((x: Int) => x > 3)

In this case, the function literal is (x: Int) => x > 3. This will output:

resX: List[Int] = List(4, 5)

=>, called the 'right arrow', means convert the thing on the left to the thing on the right. The function literal in this example is just one simple statement (that's what they usually are), but it is possible for function literals to have multiple statements in a traditional function body surrounded by {}. For example, we could say:

List(1,2,3,4,5).filter((x: Int) => { println("x=" + x); x > 3; })

which gives:

x=1
x=2
x=3
x=4
x=5
resX: List[Int] = List(4, 5)

Now, one of the key features of Scala is being able to get more done with less code. So with that mindset, let's see how we can shorten our original function literal. Firstly, we can remove the parameter type.

List(1,2,3,4,5).filter((x) => x > 3)

This technique is called target typing. The target use of the expression (in this case, what filter expects) is allowed to determine the type of the x parameter. We can further reduce the strain on our fingers by removing the parentheses. This is because the parentheses were only there to show what was being typed as Int in the parameter list. Now that the type is inferred, the brackets are superfluous and can be removed.

List(1,2,3,4,5).filter(x => x > 3)

Shorten it even more? Yeah, sure… We can use the placeholder underscore syntax.

List(1,2,3,4,5).filter(_ > 3)

Underscores have different meanings in Scala depending on the context they are used in. In this case, if you are old enough, think back to the cheesy game show Blankety Blank. This game show consisted of sentences with blanks in them, and the contestants had to make suggestions for what went into the blanks. In this example, the filter function fills in the blank with the values in the List it is being invoked on. So the filter function is the Blankety Blank contestant, and the List(1,2,3,4,5) is what the filter function uses to fill the blank in.

So now our code is really neat and short. In Java, to achieve the same, it would be:

Iterator<Integer> it = new ArrayList<Integer>(Arrays.asList(1, 2, 3, 4, 5)).iterator();
while (it.hasNext()) {
    Integer myInt = it.next();
    if (myInt <= 3) it.remove();
}

(Note that the condition is inverted: filter keeps the elements for which the predicate is true, so the Java version removes the elements that are not greater than 3.)

Reference: Scala function literals from our JCG partner Alex Staveley at the Dublin's Tech Blog blog.
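As an aside, the gap has since narrowed on the Java side: with lambdas and streams, the same filter becomes a one-liner. A minimal sketch, assuming Java 8 or later (which postdates this post):

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class FilterExample {
    public static void main(String[] args) {
        // Keep only the elements greater than 3, like Scala's filter(_ > 3)
        List<Integer> result = Arrays.asList(1, 2, 3, 4, 5).stream()
                .filter(i -> i > 3)
                .collect(Collectors.toList());
        System.out.println(result); // prints [4, 5]
    }
}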

Data Warehouse Design Approaches

In our previous posts we learned about Data Warehousing objects, different kinds of Data Warehouse schemas, and Data Warehouse basics. Now it is time to learn how to build or design a Data Warehouse. Designing or building a Data Warehouse can be done following either one of two approaches, notably known as:

- The Top-Down Approach
- The Bottom-Up Approach

These approaches are associated with the two pioneers of Data Warehousing, Ralph Kimball and Bill Inmon.

The Top-Down Approach

This approach was proposed by Bill Inmon, who stated: 'The data warehouse is one part of the overall business intelligence system. An enterprise has one data warehouse, and data marts source their information from the data warehouse. In the data warehouse, information is stored in 3rd normal form.' In short, Bill Inmon advocated a 'dependent data mart structure'. The Top-Down model works as follows:

- The data is extracted from different (or the same) data sources. This data is loaded into the staging area, validated and consolidated to ensure the required level of accuracy, and then pushed to the Enterprise Data Warehouse (EDW).
- Detailed data is regularly extracted from the EDW and temporarily hosted in the staging area for aggregation and summarization, and then extracted and loaded into the Data Warehouse.
- Once the aggregation and summarization of the data is completed, the data marts extract the data and apply fresh transformations on it. This is done so that the data is in sync with the structures defined for the data mart.

The Bottom-Up Approach

This approach was proposed by Ralph Kimball, who stated: 'The Data Warehouse is the conglomerate of all data marts within the enterprise. Information is always stored in the dimensional model.'

Ralph Kimball designed the data warehouse with the data marts connected to it with a bus structure. The bus structure contains all the common elements that are used by the data marts, such as conformed dimensions, measures, etc. Basically, the Kimball model reverses the Inmon model, i.e. data marts are directly loaded with the data from the source systems, and then the ETL process is used to load the Data Warehouse. Here are the steps:

- The data flow in the bottom-up approach starts with the extraction of data from operational databases into the staging area, where it is processed and loaded into the EDW.
- The data in the EDW is refreshed or replaced by the fresh data being loaded. After the EDW is refreshed, the current data is once again extracted into the staging area, and transformations are applied to fit the data mart structure.
- The data is then extracted from the data mart to the staging area, aggregated, summarized and so on, loaded into the EDW, and then made available to the end user for analysis.

Reference: Data Warehouse Design Approaches from our JCG partner Farhan Khwaja at the Code 2 Learn blog.

Spring Dynamic Language Support with Groovy

Groovy is a dynamic, object-oriented programming language running on the JVM. It uses a Java-like syntax, can be embedded in Java, and is compiled to byte-code. Java code can be called from Groovy, and vice versa. Some of Groovy's features are meta and functional programming, dynamic typing (with the def keyword), closures, GroovyBeans, Groovlets, integration with the Bean Scripting Framework (BSF), and generics, annotation and collection support.

This article explains fundamental Spring Dynamic Language Support for Groovy in the following ways:

- By using Java syntax and a Spring stereotype,
- By using Groovy syntax and a Spring stereotype,
- By using the inline-script feature,
- By using Spring Groovy language support (lang:groovy).

Used technologies:

- JDK 1.7.0_09
- Spring 3.2.0
- Groovy 2.0.4
- Maven 3.0.4

STEP 1 : CREATE MAVEN PROJECT

A Maven project is created as below. (It can be created by using Maven or an IDE plug-in.)

STEP 2 : LIBRARIES

Firstly, the dependencies are added to Maven's pom.xml.

<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <spring.version>3.2.0.RELEASE</spring.version>
</properties>

<!-- Spring 3 dependencies -->
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-core</artifactId>
    <version>${spring.version}</version>
</dependency>

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>${spring.version}</version>
</dependency>

<dependency>
    <groupId>org.codehaus.groovy</groupId>
    <artifactId>groovy-all</artifactId>
    <version>2.0.4</version>
</dependency>

The maven-compiler-plugin (Maven plugin) is used to compile the project with JDK 1.7:

<plugin>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>2.3.2</version>
    <configuration>
        <compilerId>groovy-eclipse-compiler</compilerId>
        <verbose>true</verbose>
        <source>1.7</source>
        <target>1.7</target>
        <encoding>${project.build.sourceEncoding}</encoding>
    </configuration>
    <dependencies>
        <dependency>
            <groupId>org.codehaus.groovy</groupId>
            <artifactId>groovy-eclipse-compiler</artifactId>
            <version>2.6.0-01</version>
        </dependency>
    </dependencies>
</plugin>

The maven-shade-plugin (Maven plugin) can be used to create a runnable jar:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.0</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <createDependencyReducedPom>false</createDependencyReducedPom>
                <configuration>
                    <source>1.7</source>
                    <target>1.7</target>
                </configuration>
                <transformers>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                        <mainClass>com.onlinetechvision.exe.Application</mainClass>
                    </transformer>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
                        <resource>META-INF/spring.handlers</resource>
                    </transformer>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
                        <resource>META-INF/spring.schemas</resource>
                    </transformer>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>

STEP 3 : CREATE Employee CLASS

The Employee Bean is created.

package com.onlinetechvision.employee;

/**
 * Employee Bean
 *
 * @author onlinetechvision.com
 * @since 24 Dec 2012
 * @version 1.0.0
 */
public class Employee {

    private String id;
    private String name;
    private String surname;

    public Employee(String id, String name, String surname) {
        this.id = id;
        this.name = name;
        this.surname = surname;
    }

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getSurname() { return surname; }
    public void setSurname(String surname) { this.surname = surname; }

    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        result = prime * result + ((id == null) ? 0 : id.hashCode());
        result = prime * result + ((name == null) ? 0 : name.hashCode());
        result = prime * result + ((surname == null) ? 0 : surname.hashCode());
        return result;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (obj == null) return false;
        if (getClass() != obj.getClass()) return false;
        Employee other = (Employee) obj;
        if (id == null) {
            if (other.id != null) return false;
        } else if (!id.equals(other.id)) return false;
        if (name == null) {
            if (other.name != null) return false;
        } else if (!name.equals(other.name)) return false;
        if (surname == null) {
            if (other.surname != null) return false;
        } else if (!surname.equals(other.surname)) return false;
        return true;
    }

    @Override
    public String toString() {
        return "Employee [id=" + id + ", name=" + name + ", surname=" + surname + "]";
    }
}

METHOD 1 : USING JAVA SYNTAX

STEP 4 : CREATE IGroovyEmployeeCacheService INTERFACE

The IGroovyEmployeeCacheService Interface is created to expose the Groovy cache functionality.

package com.onlinetechvision.groovy.srv

import com.onlinetechvision.employee.Employee

/**
 * IGroovyEmployeeCacheService Interface exposes cache functionality.
 *
 * @author onlinetechvision.com
 * @since 24 Dec 2012
 * @version 1.0.0
 */
interface IGroovyEmployeeCacheService {

    /** Adds an employee entry to the cache. */
    void addToEmployeeCache(Employee employee);

    /** Gets an employee entry from the cache. */
    Employee getFromEmployeeCache(String id);

    /** Removes an employee entry from the cache. */
    void removeFromEmployeeCache(Employee employee);
}

STEP 5 : CREATE GroovyEmployeeCacheService IMPL

The GroovyEmployeeCacheService Class is created by implementing the IGroovyEmployeeCacheService Interface.

package com.onlinetechvision.groovy.srv

import com.onlinetechvision.employee.Employee;
import org.springframework.stereotype.Service;

/**
 * GroovyEmployeeCacheService Class is the implementation of IGroovyEmployeeCacheService Interface.
 *
 * @author onlinetechvision.com
 * @since 24 Dec 2012
 * @version 1.0.0
 */
@Service
class GroovyEmployeeCacheService implements IGroovyEmployeeCacheService {

    private Map<String, Employee> cache = new HashMap();

    /** Adds an employee entry to the cache. */
    public void addToEmployeeCache(Employee employee) {
        getCache().put(employee.getId(), employee);
        println print(employee, 'added to cache...');
    }

    /** Gets an employee entry from the cache. */
    public Employee getFromEmployeeCache(String id) {
        Employee employee = getCache().get(id);
        println print(employee, 'gotten from cache...');
        return employee;
    }

    /** Removes an employee entry from the cache. */
    public void removeFromEmployeeCache(Employee employee) {
        getCache().remove(employee.getId());
        println print(employee, 'removed from cache...');
        println 'Groovy Cache Entries :' + getCache();
    }

    public Map<String, Employee> getCache() {
        return cache;
    }

    public void setCache(Map<String, Employee> map) {
        cache = map;
    }

    /** Builds the operation information text. */
    private String print(Employee employee, String desc) {
        StringBuilder strBldr = new StringBuilder();
        strBldr.append(employee)
        strBldr.append(' ');
        strBldr.append(desc);
        return strBldr.toString();
    }
}

STEP 6 : CREATE IEmployeeService INTERFACE

The IEmployeeService Interface is created for the Spring service layer; it shows how to integrate the Spring and Groovy service layers.

package com.onlinetechvision.spring.srv;

import com.onlinetechvision.employee.Employee;

/**
 * IEmployeeService Interface is created to represent the Spring service layer.
 *
 * @author onlinetechvision.com
 * @since 24 Dec 2012
 * @version 1.0.0
 */
public interface IEmployeeService {

    /** Adds an Employee entry to the cache. */
    void addToGroovyEmployeeCache(Employee employee);

    /** Gets an Employee entry from the cache. */
    Employee getFromGroovyEmployeeCache(String id);

    /** Removes an Employee entry from the cache. */
    void removeFromGroovyEmployeeCache(Employee employee);
}

STEP 7 : CREATE EmployeeService IMPL

The EmployeeService Class is created by implementing the IEmployeeService Interface.

package com.onlinetechvision.spring.srv;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import com.onlinetechvision.employee.Employee;
import com.onlinetechvision.groovy.srv.IGroovyEmployeeCacheService;

/**
 * EmployeeService Class is the implementation of the IEmployeeService interface.
 *
 * @author onlinetechvision.com
 * @since 24 Dec 2012
 * @version 1.0.0
 */
@Service
public class EmployeeService implements IEmployeeService {

    @Autowired
    private IGroovyEmployeeCacheService groovyEmployeeCacheService;

    /** Adds an Employee entry to the cache. */
    public void addToGroovyEmployeeCache(Employee employee) {
        getGroovyEmployeeCacheService().addToEmployeeCache(employee);
    }

    /** Gets an Employee entry from the cache. */
    public Employee getFromGroovyEmployeeCache(String id) {
        return getGroovyEmployeeCacheService().getFromEmployeeCache(id);
    }

    /** Removes an Employee entry from the cache. */
    public void removeFromGroovyEmployeeCache(Employee employee) {
        getGroovyEmployeeCacheService().removeFromEmployeeCache(employee);
    }

    public IGroovyEmployeeCacheService getGroovyEmployeeCacheService() {
        return groovyEmployeeCacheService;
    }

    public void setGroovyEmployeeCacheService(IGroovyEmployeeCacheService groovyEmployeeCacheService) {
        this.groovyEmployeeCacheService = groovyEmployeeCacheService;
    }
}

STEP 8 : CREATE applicationContext.xml

The Spring configuration file, applicationContext.xml, is created.

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                           http://www.springframework.org/schema/context
                           http://www.springframework.org/schema/context/spring-context-3.0.xsd">

    <context:component-scan base-package="com.onlinetechvision.spring.srv, com.onlinetechvision.groovy.srv"/>

</beans>

STEP 9 : CREATE Application CLASS

The Application Class is created to run the application.

package com.onlinetechvision.exe;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

import com.onlinetechvision.spring.srv.EmployeeService;
import com.onlinetechvision.spring.srv.IEmployeeService;
import com.onlinetechvision.employee.Employee;

/**
 * Application Class starts the application.
 *
 * @author onlinetechvision.com
 * @since 24 Dec 2012
 * @version 1.0.0
 */
public class Application {

    public static void main(String[] args) {
        ApplicationContext context = new ClassPathXmlApplicationContext("applicationContext.xml");

        IEmployeeService employeeService = (IEmployeeService) context.getBean(EmployeeService.class);

        Employee firstEmployee = new Employee("1", "Jake", "Gyllenhaal");
        Employee secondEmployee = new Employee("2", "Woody", "Harrelson");

        employeeService.addToGroovyEmployeeCache(firstEmployee);
        employeeService.getFromGroovyEmployeeCache(firstEmployee.getId());
        employeeService.removeFromGroovyEmployeeCache(firstEmployee);

        employeeService.addToGroovyEmployeeCache(secondEmployee);
        employeeService.getFromGroovyEmployeeCache(secondEmployee.getId());
    }
}

STEP 10 : BUILD PROJECT

After the OTV_Spring_Groovy project is built, OTV_Spring_Groovy-0.0.1-SNAPSHOT.jar is created.

STEP 11 : RUN PROJECT

When the created OTV_Spring_Groovy-0.0.1-SNAPSHOT.jar file is run, the output logs are as follows:

Employee [id=1, name=Jake, surname=Gyllenhaal] added to cache...
Employee [id=1, name=Jake, surname=Gyllenhaal] gotten from cache...
Employee [id=1, name=Jake, surname=Gyllenhaal] removed from cache...
Groovy Cache Entries :[:]
Employee [id=2, name=Woody, surname=Harrelson] added to cache...
Employee [id=2, name=Woody, surname=Harrelson] gotten from cache...

So far, the first way has been explained. Let us take a look at the other ways.

METHOD 2 : USING GROOVY SYNTAX

The IGroovyEmployeeCacheService Interface and GroovyEmployeeCacheService Impl can also be written using Groovy syntax, as follows:

STEP 12.1 : CREATE IGroovyEmployeeCacheService INTERFACE

The IGroovyEmployeeCacheService Interface is created by using Groovy syntax.

package com.onlinetechvision.groovy.srv

import com.onlinetechvision.employee.Employee

/**
 * IGroovyEmployeeCacheService Interface exposes cache functionality.
 *
 * @author onlinetechvision.com
 * @since 24 Dec 2012
 * @version 1.0.0
 */
interface IGroovyEmployeeCacheService {

    /** Adds an employee entry to the cache. */
    def addToEmployeeCache(Employee employee);

    /** Gets an employee entry from the cache. */
    def getFromEmployeeCache(String id);

    /** Removes an employee entry from the cache. */
    def removeFromEmployeeCache(Employee employee);
}

STEP 12.2 : CREATE GroovyEmployeeCacheService IMPL

The GroovyEmployeeCacheService Class is created by using Groovy syntax.

package com.onlinetechvision.groovy.srv

import com.onlinetechvision.employee.Employee;
import org.springframework.stereotype.Service;

/**
 * GroovyEmployeeCacheService Class is the implementation of IGroovyEmployeeCacheService Interface.
 *
 * @author onlinetechvision.com
 * @since 24 Dec 2012
 * @version 1.0.0
 */
@Service
class GroovyEmployeeCacheService implements IGroovyEmployeeCacheService {

    def cache = new HashMap();

    /** Adds an employee entry to the cache. */
    def addToEmployeeCache(Employee employee) {
        getCache().put(employee.getId(), employee);
        println print(employee, 'added to cache...');
    }

    /** Gets an employee entry from the cache. */
    def getFromEmployeeCache(String id) {
        Employee employee = getCache().get(id);
        println print(employee, 'gotten from cache...');
        return employee;
    }

    /** Removes an employee entry from the cache. */
    def removeFromEmployeeCache(Employee employee) {
        getCache().remove(employee.getId());
        println print(employee, 'removed from cache...');
        println 'Groovy Cache Entries :' + getCache();
    }

    def getCache() {
        return cache;
    }

    def setCache(Map<String, Employee> map) {
        cache = map;
    }

    /** Builds the operation information text. */
    def print(Employee employee, String desc) {
        StringBuilder strBldr = new StringBuilder();
        strBldr.append(employee)
        strBldr.append(' ');
        strBldr.append(desc);
    }
}

METHOD 3 : USING THE INLINE-SCRIPT FEATURE

The GroovyEmployeeCacheService Impl can also be defined by using the inline-script feature, as follows:

STEP 13.1 : DEFINE GroovyEmployeeCacheService IMPL via applicationContext.xml

The GroovyEmployeeCacheService Impl Class can be defined in applicationContext.xml.

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:lang="http://www.springframework.org/schema/lang"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                           http://www.springframework.org/schema/context
                           http://www.springframework.org/schema/context/spring-context-3.0.xsd
                           http://www.springframework.org/schema/lang
                           http://www.springframework.org/schema/lang/spring-lang-3.0.xsd">

    <context:component-scan base-package="com.onlinetechvision.spring.srv"/>

    <lang:groovy id="groovyEmployeeCacheService">
        <lang:inline-script>
            package com.onlinetechvision.groovy.srv

            import com.onlinetechvision.employee.Employee;
            import org.springframework.stereotype.Service;

            class GroovyEmployeeCacheService implements IGroovyEmployeeCacheService {

                def cache = new HashMap();

                def addToEmployeeCache(Employee employee) {
                    getCache().put(employee.getId(), employee);
                    println print(employee, 'added to cache...');
                }

                def getFromEmployeeCache(String id) {
                    Employee employee = getCache().get(id);
                    println print(employee, 'gotten from cache...');
                    return employee;
                }

                def removeFromEmployeeCache(Employee employee) {
                    getCache().remove(employee.getId());
                    println print(employee, 'removed from cache...');
                    println 'Groovy Cache Entries :' + getCache();
                }

                def getCache() {
                    return cache;
                }

                def setCache(Map map) {
                    cache = map;
                }

                def print(Employee employee, String desc) {
                    StringBuilder strBldr = new StringBuilder();
                    strBldr.append(employee)
                    strBldr.append(' ');
                    strBldr.append(desc);
                }
            }
        </lang:inline-script>
    </lang:groovy>

</beans>

METHOD 4 : USING SPRING GROOVY LANGUAGE SUPPORT

The GroovyEmployeeCacheService Impl can also be registered with the Spring application context without using a stereotype (@Service), as follows:

STEP 14.1 : DEFINE GroovyEmployeeCacheService IMPL via applicationContext.xml

The GroovyEmployeeCacheService Impl Class can be defined in applicationContext.xml.

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:lang="http://www.springframework.org/schema/lang"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                           http://www.springframework.org/schema/context
                           http://www.springframework.org/schema/context/spring-context-3.0.xsd
                           http://www.springframework.org/schema/lang
                           http://www.springframework.org/schema/lang/spring-lang-3.0.xsd">

    <context:component-scan base-package="com.onlinetechvision.spring.srv"/>

    <lang:groovy id="groovyEmployeeCacheService"
                 script-source="classpath:com/onlinetechvision/groovy/srv/GroovyEmployeeCacheService.groovy"/>

</beans>

STEP 15 : DOWNLOAD

https://github.com/erenavsarogullari/OTV_Spring_Groovy

RESOURCES :

- Groovy User Guide
- Spring Dynamic Language Support

Reference: Spring Dynamic Language Support with Groovy from our JCG partner Eren Avsarogullari at the Online Technology Vision blog.

Testing Spring Data MongoDB Applications with NoSQLUnit

Spring Data MongoDB is the project within the Spring Data umbrella which provides an extension to the Spring programming model for writing applications that use MongoDB as the database. To write tests using NoSQLUnit for Spring Data MongoDB applications, you need nothing special, apart from considering that Spring Data MongoDB uses a special property called _class for storing type information alongside the document. The _class property stores the fully qualified classname inside the document for the top-level document, as well as for every value that is a complex type.

Type mapping

MappingMongoConverter is used as the default type mapping implementation, but you can customize it further using @TypeAlias or by implementing the TypeInformationMapper interface.

Application

Starfleet has asked us to develop an application for storing all logs of starship crew members in their systems. To implement this requirement we are going to use a MongoDB database as the backend system and Spring Data MongoDB in the persistence layer. Log documents have the following JSON format:

Example of Log Document

{
    "_class" : "com.lordofthejars.nosqlunit.springdata.mongodb.log.Log",
    "_id" : 1,
    "owner" : "Captain",
    "stardate" : { "century" : 4, "season" : 3, "sequence" : 125, "day" : 8 },
    "messages" : [
        "We have entered a spectacular binary star system in the Kavis Alpha sector on a most critical mission of astrophysical research. Our eminent guest, Dr. Paul Stubbs, will attempt to study the decay of neutronium expelled at relativistic speeds from a massive stellar explosion which will occur here in a matter of hours.",
        "Our computer core has clearly been tampered with and yet there is no sign of a breach of security on board. We have engines back and will attempt to complete our mission. But without a reliable computer, Dr. Stubbs' experiment is in serious jeopardy."
    ]
}

This document is modeled as two Java classes, one for the whole document and another one for the stardate part.

Stardate class

@Document
public class Stardate {

    private int century;
    private int season;
    private int sequence;
    private int day;

    public static final Stardate createStardate(int century, int season, int sequence, int day) {
        Stardate stardate = new Stardate();
        stardate.setCentury(century);
        stardate.setSeason(season);
        stardate.setSequence(sequence);
        stardate.setDay(day);
        return stardate;
    }

    //Getters and Setters
}

Log class

@Document
public class Log {

    @Id
    private int logId;

    private String owner;
    private Stardate stardate;

    private List<String> messages = new ArrayList<String>();

    //Getters and Setters
}

Apart from the model classes, we also need a DAO class implementing the CRUD operations, and a Spring application context file.
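Incidentally, the @TypeAlias customization mentioned above is a one-line change on the document class. A minimal sketch, assuming a Spring Data version that ships @TypeAlias (note that this changes the value stored in _class, so test datasets must use the alias too):

import org.springframework.data.annotation.TypeAlias;
import org.springframework.data.mongodb.core.mapping.Document;

@TypeAlias("log") // stored in _class instead of the fully qualified classname
@Document
public class Log {
    // ... same fields as the Log class above ...
}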
MongoLogManager class

@Repository
public class MongoLogManager implements LogManager {

    private MongoTemplate mongoTemplate;

    public void create(Log log) {
        this.mongoTemplate.insert(log);
    }

    public List<Log> findAll() {
        return this.mongoTemplate.findAll(Log.class);
    }

    @Autowired
    public void setMongoTemplate(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }
}

application-context file

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
                           http://www.springframework.org/schema/context
                           http://www.springframework.org/schema/context/spring-context-3.1.xsd">

    <context:component-scan base-package="com.lordofthejars.nosqlunit.springdata.mongodb"/>
    <context:annotation-config/>

</beans>

For this example we have used the MongoTemplate class for accessing MongoDB, so as not to create an overcomplicated example, but in a bigger project I recommend using the Spring Data repository approach, implementing the CrudRepository interface in manager classes (a sketch of this follows below).

Testing

As mentioned previously, you don't have to do anything special beyond using the _class property correctly. Let's see the dataset used to test the findAll method by seeding the log collection of the logs database.

all-logs file

{
    "log": [
        { "_class" : "com.lordofthejars.nosqlunit.springdata.mongodb.log.Log", "_id" : 1, "owner" : "Captain",
          "stardate" : { "century" : 4, "season" : 3, "sequence" : 125, "day" : 8 },
          "messages" : [ "We have entered a spectacular binary star system in the Kavis Alpha sector on a most critical mission of astrophysical research. Our eminent guest, Dr. Paul Stubbs, will attempt to study the decay of neutronium expelled at relativistic speeds from a massive stellar explosion which will occur here in a matter of hours.", "Our computer core has clearly been tampered with and yet there is no sign of a breach of security on board. We have engines back and will attempt to complete our mission. But without a reliable computer, Dr. Stubbs' experiment is in serious jeopardy." ] },
        { "_class" : "com.lordofthejars.nosqlunit.springdata.mongodb.log.Log", "_id" : 2, "owner" : "Captain",
          "stardate" : { "century" : 4, "season" : 3, "sequence" : 152, "day" : 4 },
          "messages" : [ "We are cautiously entering the Delta Rana star system three days after receiving a distress call from the Federation colony on its fourth planet. The garbled transmission reported the colony under attack from an unidentified spacecraft. Our mission is one of rescue and, if necessary, confrontation with a hostile force." ] }
        ...
    ]
}

See that the _class property is set to the fully qualified name of the Log class. The next step is configuring a MongoTemplate for test execution.

LocalhostMongoAppConfig

@Configuration
@Profile("test")
public class LocalhostMongoAppConfig {

    private static final String DATABASE_NAME = "logs";

    public @Bean Mongo mongo() throws UnknownHostException, MongoException {
        Mongo mongo = new Mongo("localhost");
        return mongo;
    }

    public @Bean MongoTemplate mongoTemplate() throws UnknownHostException, MongoException {
        MongoTemplate mongoTemplate = new MongoTemplate(mongo(), DATABASE_NAME);
        return mongoTemplate;
    }
}

Notice that this MongoTemplate object will be instantiated only when the test profile is active.
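As an aside, the CrudRepository approach recommended above can be as small as this. A minimal sketch; LogRepository is a hypothetical name, and the interface must be picked up by Spring Data's repository scanning (for example, the mongo repositories namespace element) for an implementation to be generated:

import org.springframework.data.repository.CrudRepository;

// Spring Data MongoDB generates the implementation at runtime;
// save, findAll, delete and friends are inherited from CrudRepository.
public interface LogRepository extends CrudRepository<Log, Integer> {
}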
And now we can write the JUnit test case:

WhenAlmiralWantsToReadLogs

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = "classpath:com/lordofthejars/nosqlunit/springdata/mongodb/log/application-context-test.xml")
@ActiveProfiles("test")
@UsingDataSet(locations = "all-logs.json", loadStrategy = LoadStrategyEnum.CLEAN_INSERT)
public class WhenAlmiralWantsToReadLogs {

    @ClassRule
    public static ManagedMongoDb managedMongoDb = newManagedMongoDbRule()
            .mongodPath("/Users/alexsotobueno/Applications/mongodb-osx-x86_64-2.0.5")
            .build();

    @Rule
    public MongoDbRule mongoDbRule = newMongoDbRule().defaultManagedMongoDb("logs");

    @Autowired
    private LogManager logManager;

    @Test
    public void all_entries_should_be_loaded() {
        List<Log> allLogs = logManager.findAll();
        assertThat(allLogs, hasSize(3));
    }
}

There are some important points to note in the previous class:

- Because NoSQLUnit uses JUnit Rules, you can use @RunWith(SpringJUnit4ClassRunner) freely.
- Using @ActiveProfiles we are loading the test configuration instead of the production one.
- You can use Spring annotations like @Autowired without any problem.

Conclusions

There is not much difference between writing tests for applications that use Spring Data MongoDB and those that don't. Just keep in mind to define the _class property correctly.

Reference: Testing Spring Data MongoDB Applications with NoSQLUnit from our JCG partner Alex Soto at the One Jar To Rule Them All blog.

Optimizing Neo4j Cypher queries

Last week, I spent a good number of hours trying to optimize around 20 Cypher queries that were performing disastrously (36866ms to 155575ms) with data from a live system. After some trial and error, and a lot of input from Michael, I was able to figure out generally what needed to be done to the queries to make them perform better. At the end of this, the worst performing query dropped to 521ms on a cold graph with 1GB heap space (and that query has optional paths; not quite sure how to improve that), and the remaining queries were all under 50ms: quite a large improvement on the original numbers.

Hoping that this might help someone else, here is what I did (mostly guesswork and largely unscientific); perhaps Michael can help explain the internals and correct me on some assumptions.

The first thing I did was make sure that every Cypher query uses parameters; as stated in the Neo4j documentation, this helps with the caching of execution plans. Second, I came across a post on the Neo4j mailing list where Michael mentioned NOT re-instantiating the ExecutionEngine, so that the aforementioned parameterized queries actually do get cached. This might seem obvious to many, but it was a fact that easily slipped by, given that I have a class called QueryExecutor that contains a method to execute a query with a map of parameters, and the method created a new ExecutionEngine for every query. Once this method was written and used many, many times over, it was very easy to forget about. However, this was a very important factor in the overall performance (a mention in the docs would be extremely helpful), and it explained why my queries generally took the same time to execute even when parameterized. Changing this to use a cached ExecutionEngine saw the bottom half of my query time worksheet drop off: 0 to 1 ms after being cached. Excellent progress.

Now, on to each query, starting with the worst. I decided to optimize on my local machine, with only 1GB heap space allocated, and on a cold graph. So I ignore the improvement in query execution after caching; I feel that this is a better way to ensure progress: if the first query hit isn't improving, then you really haven't optimized it. That way, if it works really well with limited hardware, I am more confident of it working better in production. Apart from timing the query in the code, I optimized using the webadmin console. The indicator that a query was awful is that it just would not return and the console would hang. Optimizing so that it didn't hang was a major improvement in itself. Highly unscientific, but I recommend it.

My first query averaged about 76558ms (the time was obtained by putting a start and end time around the engine's execute method). After the first optimization, it was down to 466ms.

Here is the original query: https://gist.github.com/4436272

And this is the optimized one: https://gist.github.com/4436281

There was no need to do this huge match only to filter out results based on the alertDate property on a, so I reduced the match to return the smallest set of data that could be filtered first, i.e. the path to a. If you were to execute the bad query up to the first match, you would see something like 20K rows returned. After filtering, they are a mere 450-odd rows. So as you can imagine, the second query quickly trims down the number of results you are potentially working with. The second change was something I learnt from Michael on the day, when I asked whether it makes sense to do a giant match or keep pruning with subqueries.
His response was: 'The question is, are you using match to actually expand the result set by describing patterns, or do you just want to make certain that some relationships are there (or not), aka FILTERING. If the latter, you might want to use the path-expression syntax in a where clause: WHERE a-[:FOO]->b or WHERE NOT(a-[:FOO]->b)'

This took a little getting used to, as the way I wrote the MATCH clause was exactly how I'd think of it in my head, but now I am able to differentiate between a match for required results versus a match to filter. In the query above, I need (ir) in my results, so there is no need to include (a)-[:alert_for_inspection]->(i) in the match; I can just use it in the WHERE to make sure that a does indeed relate to i.

Here is another example: https://gist.github.com/4436293

Straight away, you can see that we filter the cm relations by date; there is no need for us to first go and match all sorts of stuff if they don't fall in the date range. So this part of the query can be rewritten to:

start c=node:companies(id={id})
match c<-[parent_company*0..]-(n)
with n
match n-[:defines_business_process]->(bp)-[:has_cycle]->(cm)
where cm.measureDate>={measureStartDate} and cm.measureDate<={measureEndDate}

After that, the next filter is on a; the same principle applies:

with cm
match (cm)-[:cycle_metric]->m-[:metric_activity]->ma-[:metric_unit]->(u)-[:alert_for_unit]-(a)
where a.alertDate=cm.measureDate and a.fromEntityType={type}

This further prunes our result set. Finally, add the connections to r (for our results), making sure that paths which do not lead to r but are necessary go into the WHERE clause:

with a,ma
match (r)<-[:for_inspection_result]-a-[:alert_for_inspection]->i
where (i)<-[:metric_inspection]-(ma)
return a.id as alertId, r.value as resultValue

Here is the full query: https://gist.github.com/4436328

Original time: 33360ms; after optimization: 246ms. At least for me, most of my queries fell into this pattern, so by day 2 I was able to refactor them very quickly. The interesting thing is that, after this, I still felt the response was sluggish, but the query times printed in my log were tiny. After some elimination, I found that my code actually stuck for a very long time after the query had executed (via executionEngine.execute), but within the first iteration over the result set. I assume that the results are not necessarily collected during the execute() method, but lazily fetched upon iteration of the result set; I do not know the Cypher internals, so I might be completely wrong. But timing the iteration itself serves to point out even more badly written queries.

The other bits and pieces: ORDER BY adds a lot of time. If you can do without it, this is the first thing you should drop. DISTINCT also adds time, but in many of my cases it was difficult to drop. If you want to check the absence of optional paths, where I typically do MATCH (u)-[r?:has_a]-(a) WHERE NOT (r is null), instead re-write it as MATCH (u)-[other_stuff]-.. WHERE NOT(u-[:has_a]-a) and it performs much better. However, where I have optional paths like MATCH X-[o?:optional]-Y WHERE (o is present, match Y to A and B) OR (o is absent, match X to c and d), I was unable to simplify, and these queries still take some time compared to others without optional paths. Finally, the problem was discovered so late because test data is never quite close to live data.
The structure of the graph played a big role: some nodes were heavily connected, others not so much, and the queries involving those heavily connected nodes were the ones that hurt us most. Try to use production-quality data as far as possible for performance testing, or try to create test data as similar to it as possible. So, to summarize:

- Always parameterize your queries.
- Cache the ExecutionEngine.
- Figure out what you're filtering on; try to apply that filter as early as possible, with the fewest possible matches involved, so that your result set gets progressively smaller as you move further into the query. Keep measuring both the timing and the results returned in each subquery, so you can decide what goes first when the filter is not obvious.
- Examine your MATCH and RETURN clauses: include in the MATCH only those pieces that are required in RETURN. The remaining pieces, which are there to filter the results, can go into the WHERE.
- If you don't need ORDER BY, drop it yesterday.
- If you don't need DISTINCT, get rid of it too.
- Checking the absence/presence of optional paths can be moved from the MATCH into the WHERE if you don't need fancy filtering based on it.
- Time not only the execute() on the query, but also the time to iterate through the results.
- If your webadmin console hangs, you have done a Bad Thing. Drop various parts of your query to figure out the offending one.
- Try to use live data as far as possible.
- Test on a cold graph with miserly resources; you feel so much better when you see it zip past you in production!

Reference: Optimizing Neo4j Cypher queries from our JCG partner Aldrin & Luanne Misquitta at the Thought Bytes blog.
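To make the first two summary points concrete, here is a minimal sketch of the fix described above: a query executor that holds on to a single ExecutionEngine and always passes values as parameters. The class and method names are mine, and it assumes the org.neo4j.cypher.javacompat API of the Neo4j versions this post was written against:

import java.util.Map;

import org.neo4j.cypher.javacompat.ExecutionEngine;
import org.neo4j.cypher.javacompat.ExecutionResult;
import org.neo4j.graphdb.GraphDatabaseService;

public class QueryExecutor {

    // One engine for the lifetime of the application, so that execution
    // plans of parameterized queries are actually cached and reused.
    private final ExecutionEngine engine;

    public QueryExecutor(GraphDatabaseService graphDb) {
        this.engine = new ExecutionEngine(graphDb);
    }

    public ExecutionResult execute(String query, Map<String, Object> params) {
        // Values go in as parameters, never concatenated into the query
        // string, so the same plan is shared across invocations.
        return engine.execute(query, params);
    }
}

Remember from the article that the expensive work may happen while iterating the returned result, not inside execute() itself, so time both.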

SLF4J Logging in Eclipse Plugins

Developing with Maven and pure Java libraries all the time, I never thought it could be a problem to issue a few log statements when developing an Eclipse plugin. But it looks like, in the mind of an Eclipse developer, everything is always inside the Eclipse environment and nothing is outside the Eclipse universe. If you search for the above headline using Google, one of the first articles you'll find is one about the 'platform logging facility'. But what about 3rd-party libraries? They cannot use an Eclipse-based logging framework.

In my libraries I use the SLF4J API and leave it up to the user to decide what logging implementation (Log4J, Logback, JDK) he or she wants to use. And that's exactly what I want to do in Eclipse. It was hard to figure out exactly how to do it, but here are the pieces of that puzzle.

Phase 1: Development

This describes the steps during the development phase of your custom plugin.

Step 1: Get your libraries into a P2 repository

Everything you want to use in Eclipse has to be installed from a P2 repository. But most of the libraries I use are in a Maven repository. As far as I know, there is no such thing as a main P2 repository similar to 'Maven Central', and all the libraries I found in P2 repositories were pretty old. So you have to create one yourself. Luckily there is a Maven plugin called p2-maven-plugin that converts all your Maven JARs into a single P2 repository. You can upload the result to a folder of your website or simply install it from your local hard drive. For this example you'll need the following libraries:

- org.slf4j:slf4j-api:1.6.6
- org.slf4j:slf4j-log4j12:1.6.6
- log4j:log4j:1.2.17
- org.ops4j.pax.logging:pax-logging-api:1.7.0
- org.ops4j.pax.logging:pax-logging-service:1.7.0
- org.ops4j.pax.confman:pax-confman-propsloader:0.2.2

The format 'groupId:artifactId:version' is as used by the p2-maven-plugin. To skip this step, you could also use http://www.fuin.org/p2-repository/.

Step 2: Install the SLF4J API in the Eclipse IDE

- Select 'Help / Install New Software…'.
- Add the P2 repository URL and install the 'slf4j-api'. You could directly use the folder from Step 1 with a file URL like this: 'file:/pathtoyour/p2-repository/'.
- Add the freshly installed 'slf4j.api' to your MANIFEST.MF.
- Start using SLF4J logs in your code as usual.

Phase 2: Production

This describes the tasks a user of your custom plugin has to complete to start logging with Log4J. The following assumes that your custom plugin is already installed.

Step 1: Install the log libraries in the Eclipse IDE

- Select 'Help / Install New Software…'.
- Install the 'Equinox Target Components' from the Eclipse update site.
- Add the P2 repository URL and install the following plugins: Apache Log4j, OPS4J Pax ConfMan - Properties Loader, OPS4J Pax Logging - API, and OPS4J Pax Logging - Service.

Step 2: Configure PAX Logging

- Set the location of your log configuration in the 'eclipse.ini' as a VM argument:

-vmargs -Xms40m -Xmx512m -Dbundles.configuration.location=<config-dir>

- Create a folder named 'services' in the above 'config-dir'.
- Create Log4J properties named 'org.ops4j.pax.logging.properties' in 'services':

log4j.rootLogger=INFO, FILE
log4j.appender.FILE=org.apache.log4j.FileAppender
log4j.appender.FILE.File=<path-to-your-log>/example.log
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%d{yyyy/MM/dd HH:mm:ss,SSS} [%t] %-5p %c %x - %m%n
log4j.logger.your.package=DEBUG

Step 3: Activate PAX Logging

- Open the 'Console' view.
- Select the 'Host OSGi Console'.
- Start the following bundles:

start org.eclipse.equinox.cm
start org.ops4j.pax.logging.pax-logging-api
start org.ops4j.pax.logging.pax-logging-service
start org.ops4j.pax.configmanager

Now you should be able to see your log statements in the configured 'example.log' file.

Step 4: Changing the configuration

If you want to change the configuration in 'org.ops4j.pax.logging.properties', simply restart the PAX ConfigManager in the OSGi console:

stop org.ops4j.pax.configmanager
start org.ops4j.pax.configmanager

Happy logging!

Reference: SLF4J Logging in Eclipse Plugins from our JCG partner Michael Schnell at the A Java Developer's Life blog.
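For completeness, 'using SLF4J logs in your code as usual' (Phase 1, Step 2) is nothing more than coding against the SLF4J API; the binding chosen at runtime (here, Log4J via Pax Logging) does the rest. A minimal sketch (the class name is mine):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MyPluginComponent {

    // Plugin code only ever sees the SLF4J API; the Pax Logging bundles
    // route these calls to the Log4J configuration shown above.
    private static final Logger LOG = LoggerFactory.getLogger(MyPluginComponent.class);

    public void doWork() {
        LOG.debug("Starting work");
        try {
            // ... actual plugin logic ...
        } catch (RuntimeException ex) {
            LOG.error("Work failed", ex);
        }
    }
}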

Be Careful With Cache Managers

If you are using Spring and JPA, it is very likely that you utilize ehcache (or another cache provider). And you do that in two separate scenarios: the JPA 2nd level cache and Spring method caching. When you configure your application, you normally set the 2nd level cache provider of your JPA provider (hibernate, in my case) and you also configure Spring with the 'cache' namespace. Everything looks OK and you continue with the project.

But there's a caveat. If you follow the most straightforward way, you get two separate cache managers which load the same cache configuration file. This is not bad per se, but it is something to think about: do you really need two cache managers, and the problems that may arise from this? Probably you don't. So you have to get rid of the redundant manager. To do that, you need to set your Spring cache manager as shared:

<bean id="ehCacheManager" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean">
    <property name="shared" value="true" />
</bean>

This means that Spring won't create a new instance of a cache manager, but will reuse the one already created by hibernate. Now, there's something to think about here: you might expect it to depend on the order of bean creation, i.e. whether the JPA factory bean or the cache manager factory bean is created first. Luckily, this doesn't matter for the end result, because SingletonEhCacheRegionFactory reuses an existing cache manager instance if it finds one. So, now you have made your cache manager a JVM singleton.

But then there's another problem that you may encounter if you have multiple applications deployed and you are using JMX. The cache manager registers itself as a JMX bean. But when you have singletons, multiple applications will try to register the same cache manager multiple times, and that will fail. The result will be a couple of exceptions in the log and the inability to control the cache managers of multiple modules. A side effect of the same problem gets in the way if you use something like Terracotta (where cache manager identity matters). Luckily, you have an easy fix for that. Just add one property to the bean definition shown above:

<property name="cacheManagerName" value="${module.name}" />

${module.name} is a property resolved with a PropertyPlaceholderConfigurer and is configurable per webapp, so each webapp can have a different module name. That way the cache manager will be accessible under the specified name via JMX.

Overall, be careful with your cache managers. Even if you are using a different cache, JPA or DI provider, you should verify the scenarios described above.

Reference: Be Careful With Cache Managers from our JCG partner Bozhidar Bozhanov at the Bozho's tech blog blog.
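For reference, the Hibernate side of this setup is what makes the sharing possible: the 2nd level cache must be pointed at the singleton region factory. A sketch of the relevant properties, assuming Hibernate 3.3+ with ehcache-core 2.x (exact property values vary between versions, so check your own stack):

hibernate.cache.use_second_level_cache=true
hibernate.cache.region.factory_class=net.sf.ehcache.hibernate.SingletonEhCacheRegionFactory

Using the singleton variant (rather than EhCacheRegionFactory) is what lets Hibernate reuse an existing cache manager instance instead of creating its own.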

Inferred exceptions in Java

It's always nice to borrow and steal concepts and ideas from other languages. Scala's Option is one idea I really like, so I wrote an implementation in Java. It wraps an object which may or may not be null, and provides some methods to work with it in a more kinda-sorta functional way. For example, the isDefined method adds an object-oriented way of checking if a value is null. It is then used in other places, such as the getOrElse method, which basically says 'give me what you're wrapping, or a fallback if it's null'.

public T getOrElse(T fallback) {
    return isDefined() ? get() : fallback;
}

In practice, this would replace traditional Java, such as

public void foo() {
    String s = dao.getValue();
    if (s == null) {
        s = "bar";
    }
    System.out.println(s);
}

with the more concise and OO

public void foo() {
    Option<String> s = dao.getValue();
    System.out.println(s.getOrElse("bar"));
}

However, what if I want to do something other than get a fallback value, say, throw an exception? More to the point, what if I want to throw a specific type of exception: one that is both specific to the call site and not hard-coded into Option? This requires a spot of cunning, and a splash of type inference. Because this is Java, we can start with a new factory: ExceptionFactory. This is a basic implementation that only creates exceptions constructed with a message, but you can of course expand the code as required.

public interface ExceptionFactory<E extends Exception> {
    E create(String message);
}

Notice the <E extends Exception>: this is the key to how this works. Using the factory, we can now add a new method to Option:

public <E extends Exception> T getOrThrow(ExceptionFactory<E> exceptionFactory, String message) throws E {
    if (isDefined()) {
        return get();
    } else {
        throw exceptionFactory.create(message);
    }
}

Again, notice the throws E: this is inferred from the exception factory. And that, believe it or not, is 90% of what it takes. The one irritation is the need to have exception factories. If you can stomach this, you're all set. Let's define a couple of custom exceptions to see this in action.

ExceptionA:

public class ExceptionA extends Exception {
    public ExceptionA(String message) {
        super(message);
    }

    public static ExceptionFactory<ExceptionA> factory() {
        return new ExceptionFactory<ExceptionA>() {
            @Override
            public ExceptionA create(String message) {
                return new ExceptionA(message);
            }
        };
    }
}

And the suspiciously similar ExceptionB:

public class ExceptionB extends Exception {
    public ExceptionB(String message) {
        super(message);
    }

    public static ExceptionFactory<ExceptionB> factory() {
        return new ExceptionFactory<ExceptionB>() {
            @Override
            public ExceptionB create(String message) {
                return new ExceptionB(message);
            }
        };
    }
}

And finally, throw it all together:

public class GenericExceptionTest {

    @Test(expected = ExceptionA.class)
    public void exceptionA_throw() throws ExceptionA {
        Option.option(null).getOrThrow(ExceptionA.factory(), "Some message pertinent to the situation");
    }

    @Test
    public void exceptionA_noThrow() throws ExceptionA {
        String s = Option.option("foo").getOrThrow(ExceptionA.factory(), "Some message pertinent to the situation");
        Assert.assertEquals("foo", s);
    }

    @Test(expected = ExceptionB.class)
    public void exceptionB_throw() throws ExceptionB {
        Option.option(null).getOrThrow(ExceptionB.factory(), "Some message pertinent to the situation");
    }

    @Test
    public void exceptionB_noThrow() throws ExceptionB {
        String s = Option.option("foo").getOrThrow(ExceptionB.factory(), "Some message pertinent to the situation");
        Assert.assertEquals("foo", s);
    }
}

The important thing to notice is that the exception declared in each test method's signature is specific; it's not a common ancestor (Exception or Throwable). This means you can now use Options in your DAO layer, your service layer, wherever, and throw specific exceptions where and how you need.

Download source: You can get the source code and tests from here: genex

Sidenote

One other interesting thing that came out of writing this was the observation that it's possible to do this:

public void foo() {
    throw null;
}

public void bar() {
    try {
        foo();
    } catch (NullPointerException e) {
        ...
    }
}

It goes without saying that this is not a good idea.

Reference: Inferred exceptions in Java from our JCG partner Steve Chaloner at the Objectify blog.

Java Code Geeks Rebranded

Hello all, and happy new year! During the past few weeks you might have noticed some changes here at Java Code Geeks. We have recently finished rebranding and restyling our site! We have upgraded our infrastructure, introduced a new layout and adopted new logos. We believe the site now has a cleaner look and feels much smoother.

Classic Mistakes in Software Development and Maintenance

…the only difference between experienced and inexperienced developers is that the experienced ones realize when they're making mistakes.
Jeff Atwood, Escaping from Gilligan's Island

An important part of risk management, and of responsible management in general, is making sure that you aren't doing anything obviously stupid. Steve McConnell's list of Classic Mistakes is a place to start: a list of common, basic mistakes in developing software and in managing development work, mistakes that are made so often, by so many people, that we all need to be aware of them.

McConnell originally created this list in 1996 for his book Rapid Development (still one of the best books on managing software development). The original list of 36 mistakes was updated in 2008 to a total of 42 common mistakes, based on a survey of more than 500 developers and managers. The mistakes that have the highest impact, the mistakes that will most likely lead to failure, are:

- Unrealistic expectations
- Weak personnel
- Overly optimistic schedules
- Wishful thinking
- Shortchanged QA
- Inadequate design
- Lack of project sponsorship
- Confusing estimates with targets
- Excessive multi-tasking
- Lack of user involvement

Most of the mistakes listed have not changed since 1996 (and were probably well known long before that). Either they're fundamental, or as an industry we just aren't learning, or we don't care. Or we can't find the time or secure a mandate to do things right, because of the relentless focus on short-term results:

Stakeholders won't naturally take a long-term view: they tend to minimize the often extreme down-the-road headaches that result from the cutting of corners necessitated by the rush, rush, rush mentality. They'll drive the car without ever changing the oil.
Peter Kretzman, Software development's classic mistakes and the role of the CTO/CIO

The second most severe mistake that a development organization can make is to staff the team with weak personnel: hiring fast or cheap, rather than holding out for people who have more experience and better skills but who cost more. Although the impact of making this mistake is usually severe, it happens in only around half of projects; most companies aren't stupid enough to staff a development team with weak developers, at least not on a big, high-profile project.

Classic Mistakes in Software Maintenance

But a lot of companies staff maintenance teams this way, with noobs and maybe a couple of burned-out old-timers who are putting in their time and are willing to deal with the demands of maintenance until they retire.

You get stuck in maintenance only if you are not good enough to work on new projects. After spending millions of dollars and many developer-years of effort on creating an application, the project is entrusted to the care of the lowest of the low. Crazy!
Pete McBreen, Software Craftsmanship

Capers Jones (Geriatric Issues of Ageing Software 2007, Estimating Software Costs 2008) has found that staffing a maintenance team with inexperienced people destroys productivity and is one of the worst practices that any organization can follow:

Worst practice, and its effect on productivity:
- Not identifying and cleaning up error-prone code (the 20% of code that contains 80% of bugs): -50%
- Code with embedded data and hard-coded variables, which contributes to 'mass update' problems when this data changes: -45%
- Staffing maintenance teams with inexperienced people: -40%
- High-complexity code that is hard to understand and change (often the same code that is error-prone): -30%
- Lack of good tools for source code navigation and test coverage: -28%
- Inefficient or nonexistent change control methods: -27%

Many of these mistakes are due to not recognizing and not dealing with basic code quality and technical debt issues: figuring out what code is causing you the most trouble and cleaning it up. The rest are basic, obvious management issues. Keep the people who built the system, and who understand it and know how and why it works, working on it as long as you can. Make it worth their while to stay, give them meaningful things to work on, and make sure that they have good tools to work with. Find ways for them to work together efficiently: with each other, with other developers, with operations and with the customer.

These simple, stupid mistakes add up over time to huge costs, when you consider that maintenance makes up between 40% and 80% of total software costs. Like the classic mistakes in new development, mistakes in maintenance are obvious and fixable. We know we shouldn't do these things, we know what's going to happen, and yet we keep doing them, over and again, and we're surprised when things fail. Why?

Reference: Classic Mistakes in Software Development and Maintenance from our JCG partner Jim Bird at the Building Real Software blog.