
The Default Use Case

You should have a default use case (or a small set of them). No matter what are you making – end-user product, public API, protocol spec, etc. The default use case is the most common thing that your users will do with your product. The focus of your product. And after you define your default use case, you must make sure it is the most obvious and easy thing to do. Yes, this sounds like a very obvious and universal thing, but many products fail to recognize that. Here are a few examples of products that do it right:The default use case of google is searching. They make it pretty obvious – you have one text box and one button (well, two actually). It is the most straightforward thing to do with their interface. Of course that is not the only thing that you can do – you can customize your search, customize your language, you can advertise or visit their side-projects. But all these are options at the side of the screen. The default use case has all the focus The default use case of Amazon is purchasing. Do they make it obvious? Yes – you pick your items and you click the 1-click checkout button. Extremely simple. And the most obvious thing to do. Of course you can customize all the steps in your purchase, you can be a seller yourself or you can write reviews to items, but these are “side-options” that are available if you look for them. But you immediately see the default use case, and it is the easiest thing to do on the site With the Java collections framework you have two main steps of usage: fill the collection and then iterate the collection. Every novice programmer can create an ArrayList, fill it and then print its elements. And for many things this is all you do with a collection. Is it simple to do? Sure – collection.add() and the for-each loop. Are there other useful features? Tons of them. Synchronization, read-only views, search, different collection semantics, etc. But these options are usually not needed, so they are not part of the Collection interface, and they are not the first thing you learn about collections.To prove that this is not obvious, let me give two examples of products not focusing on their default use case:The other day I wanted to buy a ticket for a concert. The site that sold the tickets had 10 steps for purchasing a ticket, some of which included clicking on one of several links within a free text. The other links showed you more details about the concert hall, the price class of the ticket, and other information that is not required for the default use case. Which could be brought down to three steps – select the event, select a seat, pay. I bet they are losing tons of clients because they get lost in the UI. Because the site owners didn’t focus on their default use case at all. The Java DOM API. This is a good example by Joshua Bloch. What would you normally want to do with a DOM API? 1. Parse documents and store them in memory 2. Create documents in memory and write them. How to do the latter? Here is how. You need a Transformer factory, a transformer, and two adapters – a DOMSource and a StreamResult. This is the least straightforward thing to do, and I won’t tell you how much time I had to spend to write that snippet of code. The API should have instead provided that “printDocument(..)” method.It sounds simple, but it is very easy to overlook this important detail, and make a product that is tedious to use. So before you start, define your default use-case(s). And constantly verify that you are on track. 
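To make the DOM example concrete, here is roughly what "write a document out" looks like with the standard javax.xml.transform API – a minimal sketch of the boilerplate the article complains about, wrapped in the printDocument(..) helper the author argues the API should have provided (the helper and the DomUtils class name are illustrative, not part of the JDK):

import java.io.StringWriter;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;

public final class DomUtils {

    /** Serialize a DOM Document to a String - the "default use case" of writing a document. */
    public static String printDocument(Document doc) throws TransformerException {
        // A factory, a transformer and two adapters - just to write a document out.
        Transformer transformer = TransformerFactory.newInstance().newTransformer();
        transformer.setOutputProperty(OutputKeys.INDENT, "yes");

        StringWriter writer = new StringWriter();
        transformer.transform(new DOMSource(doc), new StreamResult(writer));
        return writer.toString();
    }
}

Four collaborating objects for the second most common thing you do with a DOM is exactly the kind of friction the default-use-case rule is meant to prevent.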
This is for product managers, architects and project owners. Am I stating the obvious? Yes, but note the generalization of “products”. Anything has a default use case – not only software. Even a cafeteria has one, and if you have to go through six steps to order a simple coffee, they will probably lose you as a customer. So whatever you do, be sure to focus on your default use case. Reference: The Default Use Case from our JCG partner Bozhidar Bozhanov at the Bozho’s tech blog. Related Articles :Technical debt & the Boiling Frog How to Fail With Drools or Any Other Tool/Framework/Library Diminishing Returns in software development and maintenance Writing Code that Doesn’t Suck Dealing with technical debt Java Tutorials and Android Tutorials list...

Spring Pitfalls: Proxying

Being a Spring framework user and enthusiast for many years I came across several misunderstandings and problems with this stack. Also there are places where abstractions leak terribly and to effectively and safely take advantage of all the features developers need to be aware of them. That is why I am starting a Spring pitfalls series. In the first part we will take a closer look at how proxying works. Bean proxying is an essential and one of the most important infrastructure features provided by Spring. It is so important and low-level that for most of the time we don’t even realize that it exists. However transactions, aspect-oriented programming, advanced scoping, @Async support and various other domestic use-cases wouldn’t be possible without it. So what is proxying? Here is an example: when you inject DAO into service, Spring takes DAO instances and injects it directly. That’s it. However sometimes Spring needs to be aware of each and every call made by service (and any other bean) to DAO. For instance if DAO is marked transactional it needs to start a transaction before call and commit or rolls back afterwards. Of course you can do this manually, but this is tedious, error-prone and mixes concerns. That’s why we use declarative transactions on the first place. So how does Spring implement this interception mechanism? There are three methods from simplest to most advanced ones. I won’t discuss their advantages and disadvantages yet, we will see them soon on a concrete examples. Java dynamic proxies Simplest solution. If DAO implements any interface, Spring will create a Java dynamic proxy implementing that interface(s) and inject it instead of the real class. The real one still exists and the proxy has reference to it, but to the outside world – the proxy is the bean. Now every time you call methods on your DAO, Spring can intercept them, add some AOP magic and call the original method. CGLIB generated classes The downside of Java dynamic proxies is a requirement on the bean to implement at least one interface. CGLIB works around this limitation by dynamically subclassing the original bean and adding interception logic directly by overriding every possible method. Think of it as subclassing the original class and calling super version amongst other things: class DAO { def findBy(id: Int) = //... } class DAO$EnhancerByCGLIB extends DAO { override def findBy(id: Int) = { startTransaction try { val result = super.findBy(id) commitTransaction() result } catch { case e => rollbackTransaction() throw e } } }However, this pseudocode does not illustrate how it works in reality – which introduces yet another problem, stay tuned. AspectJ weaving This is the most invasive but also the most reliable and intuitive solution from the developer perspective. In this mode interception is applied directly to your class bytecode which means the class your JVM runs is not the same as the one you wrote. AspectJ weaver adds interception logic by directly modifying your bytecode of your class, either during build – compile time weaving (CTW) or when loading a class – load time weaving (LTW). 
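Going back to the first and simplest mechanism, here is a minimal sketch in plain Java (no Spring) of how a JDK dynamic proxy can wrap an interface-based bean and add transaction-like interception around every call. This is only conceptually what Spring does for interface-based beans – the CustomerDao names and the println "transaction" are made up for illustration:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface CustomerDao {
    String findCustomer(int id);
}

class JdbcCustomerDao implements CustomerDao {
    public String findCustomer(int id) {
        return "customer-" + id;
    }
}

public class DynamicProxyDemo {

    /** Wraps the target in a JDK dynamic proxy that intercepts every interface method call. */
    static CustomerDao transactionalProxy(final CustomerDao target) {
        return (CustomerDao) Proxy.newProxyInstance(
                CustomerDao.class.getClassLoader(),
                new Class<?>[] { CustomerDao.class },
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                        System.out.println("begin transaction");          // "AOP magic" before the call
                        try {
                            Object result = method.invoke(target, args);  // delegate to the real bean
                            System.out.println("commit");
                            return result;
                        } catch (Throwable t) {
                            System.out.println("rollback");               // roll back and propagate
                            throw t;
                        }
                    }
                });
    }

    public static void main(String[] args) {
        CustomerDao dao = transactionalProxy(new JdbcCustomerDao());
        System.out.println(dao.findCustomer(1)); // every call goes through the interceptor
    }
}

Note that the object injected everywhere is the proxy, not the target – which is precisely why the self-invocation problems described below occur.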
If you are curious how AspectJ magic is implemented under the hood, here is a decompiled and simplified .class file compiled with AspectJ weaving beforehand: public void inInterfaceTransactional() { try { AnnotationTransactionAspect.aspectOf().ajc$before$1$2a73e96c(this, ajc$tjp_2); throwIfNotInTransaction(); } catch(Throwable throwable) { AnnotationTransactionAspect.aspectOf().ajc$afterThrowing$2$2a73e96c(this, throwable); throw throwable; } AnnotationTransactionAspect.aspectOf().ajc$afterReturning$3$2a73e96c(this); }With load time weaving the same transformation occurs at runtime, when the class is loaded. As you can see there is nothing disturbing here, in fact this is exactly how you would program the transactions manually. Side note: do you remember the times when viruses were appending their code into executable files or dynamically injecting themselves when executable was loaded by the operating system? Knowing proxy techniques is important to understand how proxying works and how it affects your code. Let us stick with declarative transaction demarcation example, here is our battlefield: trait FooService { def inInterfaceTransactional() def inInterfaceNotTransactional(); } @Service class DefaultFooService extends FooService { private def throwIfNotInTransaction() { assume(TransactionSynchronizationManager.isActualTransactionActive) } def publicNotInInterfaceAndNotTransactional() { inInterfaceTransactional() publicNotInInterfaceButTransactional() privateMethod(); } @Transactional def publicNotInInterfaceButTransactional() { throwIfNotInTransaction() } @Transactional private def privateMethod() { throwIfNotInTransaction() } @Transactional override def inInterfaceTransactional() { throwIfNotInTransaction() } override def inInterfaceNotTransactional() { inInterfaceTransactional() publicNotInInterfaceButTransactional() privateMethod(); } }Handy throwIfNotInTransaction() method… throws exception when not invoked within a transaction. Who would have thought? This method is called from various places and different configurations. If you examine carefully how methods are invoked – this should all work. However our developers’ life tend to be brutal. First obstacle was unexpected: ScalaTest does not support Spring integration testing via dedicated runner. Luckily this can be easily ported with a simple trait (handles dependency injection to test cases and application context caching): trait SpringRule extends AbstractSuite { this: Suite => abstract override def run(testName: Option[String], reporter: Reporter, stopper: Stopper, filter: Filter, configMap: Map[String, Any], distributor: Option[Distributor], tracker: Tracker) { new TestContextManager(this.getClass).prepareTestInstance(this) super.run(testName, reporter, stopper, filter, configMap, distributor, tracker) } }Note that we are not starting and rolling back transactions like the original testing framework. Not only because it would interfere with our demo but also because I find transactional tests harmful – but more on that in the future. Back to our example, here is a smoke test. The complete source code can be downloaded here from proxy-problem branch. 
Don’t complain about the lack of assertions – here we are only testing that exceptions are not thrown: @RunWith(classOf[JUnitRunner]) @ContextConfiguration class DefaultFooServiceTest extends FunSuite with ShouldMatchers with SpringRule{ @Resource private val fooService: FooService = null test("calling method from interface should apply transactional aspect") { fooService.inInterfaceTransactional() } test("calling non-transactional method from interface should start transaction for all called methods") { fooService.inInterfaceNotTransactional() } }Surprisingly, the test fails. Well, if you’ve been reading my articles for a while you shouldn’t be surprised: Spring AOP riddle and Spring AOP riddle demystified. Actually, the Spring reference documentation explains this in great detail, also check out this SO question. In short – non transactional method calls transactional one but bypassing the transactional proxy. Even though it seems obvious that when inInterfaceNotTransactional() calls inInterfaceTransactional() the transaction should start – it does not. The abstraction leaks. By the way also check out fascinating Transaction strategies: Understanding transaction pitfalls article for more. Remember our example showing how CGLIB works? Also knowing how polymorphism works it seems like using class based proxies should help. inInterfaceNotTransactional() now calls inInterfaceTransactional() overriden by CGLIB/Spring, which in turns calls the original classes. Not a chance! This is the real implementation in pseudo-code: class DAO$EnhancerByCGLIB extends DAO { val target: DAO = ... override def findBy(id: Int) = { startTransaction try { val result = target.findBy(id) commitTransaction() result } catch { case e => rollbackTransaction() throw e } } }Instead of subclassing and instantiating subclassed bean Spring first creates the original bean and then creates a subclass which wraps the original one (somewhat Decorator pattern) in one of the post processors. This means that – again – the self call inside bean bypasses AOP proxy around our class. Of course using CGLIB changes how are bean behaves in few other ways. For instance we can now inject concrete class rather than an interface, in fact the interface is not even needed and CGLIB proxying is required in this circumstances. There are also drawbacks – constructor injection is no longer possible, see SPR-3150, which is a shame. So what about some more thorough tests? @RunWith(classOf[JUnitRunner]) @ContextConfiguration class DefaultFooServiceTest extends FunSuite with ShouldMatchers with SpringRule { @Resource private val fooService: DefaultFooService = null test("calling method from interface should apply transactional aspect") { fooService.inInterfaceTransactional() } test("calling non-transactional method from interface should start transaction for all called methods") { fooService.inInterfaceNotTransactional() } test("calling transactional method not belonging to interface should start transaction for all called methods") { fooService.publicNotInInterfaceButTransactional() } test("calling non-transactional method not belonging to interface should start transaction for all called methods") { fooService.publicNotInInterfaceAndNotTransactional() } }Please pick tests that will fail (pick exactly two). Can you explain why? Again common sense would suggest that everything should pass, but that’s not the case. You can play around yourself, see class-based-proxy branch. We are not here to expose problems but to overcome them. 
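As a side note, one commonly cited partial workaround for the self-invocation problem – short of the AspectJ weaving discussed next – is to route the internal call through the current proxy via Spring's AopContext. It only works if proxy exposure is switched on (the exposeProxy flag), it cannot help with private methods, and many consider it a design smell. The sketch below is a hypothetical Java re-rendering of the article's Scala FooService, not code from its repository:

import org.springframework.aop.framework.AopContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

interface FooService {
    void inInterfaceTransactional();
    void inInterfaceNotTransactional();
}

@Service
public class SelfInvokingFooService implements FooService {

    @Override
    public void inInterfaceNotTransactional() {
        // A plain this.inInterfaceTransactional() would bypass the proxy - no transaction.
        // Re-entering the bean through the exposed proxy applies the transactional advice.
        ((FooService) AopContext.currentProxy()).inInterfaceTransactional();
    }

    @Override
    @Transactional
    public void inInterfaceTransactional() {
        // runs inside a transaction when reached through the proxy
    }
}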
Unfortunately our tangled service class can only be fixed using heavy artillery – true AspectJ weaving. Both compile- and load-time weaving make the tests pass; see the aspectj-ctw and aspectj-ltw branches respectively. You should now be asking yourself several questions, amongst others: which approach should I take (or: do I really need to use AspectJ?) and why should I even bother? I would say that in most cases simple Spring proxying will suffice. But you absolutely have to be aware of how the propagation works and when it doesn’t. Otherwise bad things happen: commits and rollbacks occurring in unexpected places, spanning unexpected amounts of data, ORM dirty checking not working, invisible records – believe me, these things happen in the wild. And remember that the topics we have covered here apply to all AOP aspects, not only transactions. Reference: Spring pitfalls: proxying from our JCG partner Tomasz Nurkiewicz at the NoBlogDefFound Blog. Related Articles :Spring Declarative Transactions Example The evolution of Spring dependency injection techniques Domain Driven Design with Spring and AspectJ Spring 3 Testing with JUnit 4 – ContextConfiguration and AbstractTransactionalJUnit4SpringContextTests Aspect Oriented Programming with Spring AOP Java Tutorials and Android Tutorials list...

Unit Testing Using Mocks – Testing Techniques 5

My last blog was the fourth in a series of blogs on approaches to testing code, demonstrating how to create a unit test that isolates the object under test using a stub object. Today’s blog looks at what is sometimes regarded as an opposing technique: unit testing with mock objects. Again, I’m using my simple scenario of retrieving an address from a database:… and testing the AddressService class: @Component public class AddressService {private static final Logger logger = LoggerFactory.getLogger(AddressService.class);private AddressDao addressDao;/** * Given an id, retrieve an address. Apply phony business rules. * * @param id * The id of the address object. */ public Address findAddress(int id) {logger.info("In Address Service with id: " + id); Address address = addressDao.findAddress(id);address = businessMethod(address);logger.info("Leaving Address Service with id: " + id); return address; }private Address businessMethod(Address address) {logger.info("in business method");// Apply the Special Case Pattern (See MartinFowler.com) if (isNull(address)) { address = Address.INVALID_ADDRESS; }// Do some jiggery-pokery here....return address; }private boolean isNull(Object obj) { return obj == null; }@Autowired @Qualifier("addressDao") void setAddressDao(AddressDao addressDao) { this.addressDao = addressDao; } }…by replacing he data access object with a mock object. Before continuing, it would be a good idea to define what exactly a mock object is and how it differs from a stub. If you read my last blog, you’ll remember that I let Martin Fowler define a stub object as: “Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what’s programmed in for the test.” …which is taken from his essay Mocks Aren’t Stubs. So, how do mock object differ to stubs? When you hear people talk about mock objects, they often mention that they’re mocking behaviour or mocking roles, but what does that mean? The answer lies in the way a unit test and a mock object work together to test your object. The mock object scenario goes like this:A mock object is defined in the test. The mock object is injected into your object under test The test specifies which methods on the mock object will be called, plus the arguments and return values. This is known as ‘setting expectations’. The test then runs. The test then asks the mock to verify that all the method calls specified in step three were called correctly. If they were then the test passes. If they weren’t then the test fails.Therefore, mocking behaviour or mocking roles really means checking that your object under test calls methods on a mock object correctly and failing the test if it doesn’t; hence, you’re asserting on the correctness of method calls and the execution path through your code, rather than, in the case of a regular unit test, the return value of the method under test. Although there are several professional mocking frameworks available, for this example I first decided to produce my own AddressDao mock, which fulfils the above requirements. After all, how hard can it be? 
public class HomeMadeMockDao implements AddressDao {/** The return value for the findAddress method */ private Address expectedReturn;/** The expected arg value for the findAddress method */ private int expectedId;/** The actual arg value passed in when the test runs */ private int actualId;/** used to verify that the findAddress method has been called */ private boolean called;/** * Set and expectation: the return value for the findAddress method */ public void setExpectationReturnValue(Address expectedReturn) { this.expectedReturn = expectedReturn; }public void setExpectationInputArg(int expectedId) { this.expectedId = expectedId; }/** * Verify that the expectations have been met */ public void verify() {assertTrue(called); assertEquals("Invalid arg. Expected: " + expectedId + " actual: " + expectedId, expectedId, actualId); }/** * The mock method - this is what we're mocking. * * @see com.captaindebug.address.AddressDao#findAddress(int) */ @Override public Address findAddress(int id) {called = true; actualId = id; return expectedReturn; } }The unit test code that supports this mock is: public class MockingAddressServiceWithHomeMadeMockTest {/** The object to test */ private AddressService instance;/** * We've written a mock,,, */ private HomeMadeMockDao mockDao;@Before public void setUp() throws Exception { /* Create the object to test and the mock */ instance = new AddressService(); mockDao = new HomeMadeMockDao(); /* Inject the mock dependency */ instance.setAddressDao(mockDao); }/** * Test method for * {@link com.captaindebug.address.AddressService#findAddress(int)}. */ @Test public void testFindAddressWithEasyMock() {/* Setup the test data - stuff that's specific to this test */ final int id = 1; Address expectedAddress = new Address(id, "15 My Street", "My Town", "POSTCODE", "My Country");/* Set the Mock Expectations */ mockDao.setExpectationInputArg(id); mockDao.setExpectationReturnValue(expectedAddress);/* Run the test */ instance.findAddress(id);/* Verify that the mock's expectations were met */ mockDao.verify(); } }Okay, although this demonstrates the steps required to carry out a unit test using a mock object, it’s fairly rough and ready, and very specific to the AddressDao/AddressService scenario. To prove that it’s already been done better, the following example uses easyMock as a mocking framework. The unit test code in this more professional case is: @RunWith(UnitilsJUnit4TestClassRunner.class) public class MockingAddressServiceWithEasyMockTest {/** The object to test */ private AddressService instance;/** * EasyMock creates the mock object */ @Mock private AddressDao mockDao;/** * @throws java.lang.Exception */ @Before public void setUp() throws Exception { /* Create the object to test */ instance = new AddressService(); }/** * Test method for * {@link com.captaindebug.address.AddressService#findAddress(int)}. */ @Test public void testFindAddressWithEasyMock() {/* Inject the mock dependency */ instance.setAddressDao(mockDao); /* Setup the test data - stuff that's specific to this test */ final int id = 1; Address expectedAddress = new Address(id, "15 My Street", "My Town", "POSTCODE", "My Country"); /* Set the expectations */ expect(mockDao.findAddress(id)).andReturn(expectedAddress); replay();/* Run the test */ instance.findAddress(id);/* Verify that the mock's expectations were met */ verify(); } }…which i hope you’ll agree is more progressional than my quick attempt at writing a mock. 
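For comparison, Mockito is another popular mocking framework that is not used in the article; the sketch below shows roughly what the equivalent test would look like with it, reusing the article's AddressService, AddressDao and Address classes:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Before;
import org.junit.Test;

public class MockingAddressServiceWithMockitoTest {

    /** The object to test */
    private AddressService instance;

    /** Mockito creates the mock object */
    private AddressDao mockDao;

    @Before
    public void setUp() {
        /* Create the object to test, the mock, and inject the dependency */
        instance = new AddressService();
        mockDao = mock(AddressDao.class);
        instance.setAddressDao(mockDao);
    }

    @Test
    public void testFindAddressWithMockito() {
        /* Setup the test data - stuff that's specific to this test */
        final int id = 1;
        Address expectedAddress = new Address(id, "15 My Street", "My Town", "POSTCODE", "My Country");

        /* Set the expectation */
        when(mockDao.findAddress(id)).thenReturn(expectedAddress);

        /* Run the test */
        instance.findAddress(id);

        /* Verify that the DAO was called with the expected argument */
        verify(mockDao).findAddress(id);
    }
}

Unlike EasyMock there is no replay() step, and unverified calls do not fail the test by default, so Mockito behaves much like the 'non-strict' mocks discussed below.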
The main criticism levelled at using mock objects is that they closely couple the unit test code to the implementation of the production code. This is because the code that sets the expectations closely tracks the execution path of the production code. This means that subsequent refactoring of the production code can break a multitude of tests even though the class still fulfills its interface contract. This gives rise to the assertion that mock tests are fairly brittle and that you’ll spend time fixing them unnecessarily, which from experience I agree with – although using ‘non-strict’ mocks, which don’t care about the order in which method expectations are called, alleviates the problem to a degree. On the other hand, once you know how to use a framework like EasyMock, producing unit tests that isolate your object under test can be done very quickly and efficiently. In self-critiquing this example code, I’d like to point out that I think that using a mock object is overkill in this scenario; plus, you could also easily argue that I’m using a mock as a stub. Several years ago, when I first came across EasyMock, I used mocks everywhere, but recently I’ve come to prefer manually writing stubs for application boundary classes, such as DAOs, and objects that merely return data. This is because stub based tests are arguably a lot less brittle than mock based tests, especially when all you need to do is access data. Why use mocks? Mocks are good at testing an application written using the ‘tell don’t ask’ technique, where you want to verify that a method with a void return type is called. Reference: Unit Testing Using Mocks – Testing Techniques 5 from our JCG partner Roger Hughes at the Captain Debug blog Related Articles :Testing Techniques – Not Writing Tests The Misuse of End To End Tests – Testing Techniques 2 What Should you Unit Test? – Testing Techniques 3 Regular Unit Tests and Stubs – Testing Techniques 4 Creating Stubs for Legacy Code – Testing Techniques 6 More on Creating Stubs for Legacy Code – Testing Techniques 7 Why You Should Write Unit Tests – Testing Techniques 8 Some Definitions – Testing Techniques 9 Using FindBugs to produce substantially less buggy code Developing and Testing in the Cloud...

When to replace Unit Tests with Integration Test

It’s been a while that I have been thinking about integration vs unit testing – lots of googling, questions on Stack Overflow and looking in many books. In this post I would like to share with you the information I have found and the decision I arrived at. I think if you put this question to any Java developer, you will get an answer supporting unit tests instantaneously. They will tell you that if you cannot unit test it, the unit of code you have written is too large and you have to break it down into multiple testable units. I have found these questions on Stack Overflow – Integration Testing best practices, Integration vs Unit Testing, JUnit: splitting integration tests and unit tests, What is an integration test exactly? – useful in getting a good understanding of both terminologies. Unit testing is widely followed in the Java domain, with most projects now also enforcing code coverage and unit testing. Integration testing is a relatively new and misunderstood concept. Even though it is practiced much less than unit testing, it has various uses. Let’s take a simple user story and revisit both: the user enters an INPUT in a UI form and, when the submit button is clicked, the PROCESSED output should be shown on the next page. Since the results should be editable, both INPUT and OUTPUT should be saved in the database. Technical Design Let’s assume that our application will grow rapidly in the future and design it now with that in mind. As shown in the above image it is a standard 4-tier design with view, service and DAO layers, plus an application layer which contains the logic of how to convert the input into output. To implement the story we wrote five methods (one in the service, two DAO methods to save the input and output, one containing the business logic and one method in the view layer to prepare the input). Writing some unit test cases If we were to unit test the story, we would need to write five tests. As the different layers are dependent on each other, we might need to use mock objects for testing. But apart from the one method in the application layer that does the actual processing, is there any need to unit test the other parts? For example, take the method: public void saveInput(Input input){ Session session = sessionFactory.getCurrentSession(); session.save(input); }When you unit test this, you will typically use a mock object for the sessionFactory and the code will always work. Hence I don’t see much point in writing a unit test here. If you observe carefully, apart from the application layer, all the other methods are similar to the one we have just discussed. What can we achieve with integration tests? Read here for a heads-up on integration testing. As we have seen, most of the unit tests for this story were not effective. But we can’t skip testing, as we want to measure code coverage and make our code self-testing. According to Martin Fowler in his article about Continuous Integration, you should write code that can test the other code. I feel that good integration tests can do this for you. Let’s write a simple integration test for this situation: @Configurable(autowire = Autowire.BY_NAME) @RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration(locations = {"classpath:applicationContext.xml"}) public class SampleIntTest { @Autowired private SamService samService; @Test public void testAppProcessing(){ Input input = new Input(); //prepare input for the testing. Output output = samService.processInputs(input); //validate output against the given input.
if(!isValidate(input,output)){ Assert.fail("Validation failed!"); } } }I have skipped the view layer here as it is not so relevant. We are expecting the INPUT from the UI in an Input bean, and that is what we will have in the service layer. With these few lines of code you can exercise the full functionality of the application from the service layer down. It is preferable to use an in-memory database like H2 for integration tests (a minimal configuration sketch is shown below). If the table structure is too complicated that may not be possible; in that case you can use a test instance of the database and prepare a script that deletes all the data, running it as part of the test so that the DB state is restored. This is important because on the next run the same data will be saved again. Another advantage of integration tests is that if you change the implementation, the tests need not change because they are only concerned with the input and output. This is useful when refactoring the code without changing the test code. Also, you can schedule these tests to measure the stability of the application, and any regression issues can be caught early. As we have seen, integration tests are easy to write and are useful, but we need to make sure they do not replace unit testing completely. Whenever there is a rule-based system (like the LogicalProcesser in our case) unit testing should be mandatory, because you can’t cover all the scenarios with integration tests alone. So, as always, JEE is about making a choice and sticking to it. For the last few months we have been practicing this in our teams and it is going really well. Currently we have made integration tests mandatory and unit tests optional. Always review the code coverage and make sure you get good coverage in the core of the system (the LogicalProcesser in our case). Reference: When to replace Unit Tests with Integration Test from our JCG partner Manu PK at the “The Object Oriented Life” Blog. Related Articles :Regular Unit Tests and Stubs – Testing Techniques 4 Code coverage with unit & integration tests Java RESTful API integration testing Spring 3 Testing with JUnit 4 – ContextConfiguration and AbstractTransactionalJUnit4SpringContextTests Mock Static Methods with PowerMock Java Tutorials and Android Tutorials list...
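Regarding the in-memory database suggestion above: with Spring 3 an embedded H2 instance can be built programmatically (or declared with the jdbc namespace in the test application context). A minimal sketch, assuming a hypothetical schema.sql script on the test classpath; the factory class name is made up:

import javax.sql.DataSource;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;

public class TestDataSourceFactory {

    /** Builds a throwaway in-memory H2 database for integration tests. */
    public static DataSource inMemoryDataSource() {
        return new EmbeddedDatabaseBuilder()
                .setType(EmbeddedDatabaseType.H2)
                .addScript("classpath:schema.sql") // create the tables the test needs
                .build();
    }
}

Because the database is created fresh for each test run, there is no leftover state to clean up between runs.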

SOLID – Open Closed Principle

Open Closed Principle (OCP) states that, Software entities (Classes, modules, functions) should be OPEN for EXTENSION, CLOSED for MODIFICATION. Lets try to reflect on the above statement- software entities once written shouldn’t be modified to add new functionality, instead one has to extend the same to add new functionality. In other words you don’t touch the existing modules thereby not disturbing the existing functionality, instead you extend the modules to implement the new requirement. So your code is less rigid and fragile and also extensible. OCP term was coined by Bertnard Meyer. How can we confirm to OCP principle? Its simple – Allow the modules (classes) to depend on the abstractions, there by new features can be added by creating new extensions of these abstractions. Let me try to explain with an example: Suppose you are writing a module to approve personal loans and before doing that you want to validate the personal information, code wise we can depict the situation as: public class LoanApprovalHandler { public void approveLoan(PersonalValidator validator) { if ( validator.isValid()) { //Process the loan. } } } public class PersonalLoanValidator { public boolean isValid() { //Validation logic } }So far so good. As you all know the requirements are never the same and now its required to approve vehicle loans, consumer goods loans and what not. So one approach to solve this requirement is to: public class LoanApprovalHandler { public void approvePersonalLoan (PersonalLoanValidator validator) { if ( validator.isValid()) { //Process the loan. } } public void approveVehicleLoan (VehicleLoanValidator validator ) { if ( validator.isValid()) { //Process the loan. } } // Method for approving other loans. } public class PersonalLoanValidator { public boolean isValid() { //Validation logic } } public class VehicleLoanValidator { public boolean isValid() { //Validation logic } }We have edited the existing class to accomodate the new requirements- in the process we ended up changing the name of the existing method and also adding new methods for different types of loan approval. This clearly violates the OCP. Lets try to implement the requirement in a different way: /** * Abstract Validator class * Extended to add different * validators for different loan type */ public abstract class Validator { public boolean isValid(); } /** * Personal loan validator */ public class PersonalLoanValidator extends Validator { public boolean isValid() { //Validation logic. } } /* * Similarly any new type of validation can * be accommodated by creating a new subclass * of Validator */Now using the above validators we can write a LoanApprovalHandler to use the Validator abstraction. public class LoanApprovalHandler { public void approveLoan(Validator validator) { if ( validator.isValid()) { //Process the loan. } } }So to accommodate any type of loan validators we would just have create a subclass of Validator and then pass it to the approveLoan method. That way the class is CLOSED for modification but OPEN for extension. Another example: I was thinking of another hypothetical situation where the use of OCP principle can be of use. The situation is some thing like: “We maintain a list of students with their marks, unique identification(uid) and also name. 
Then we provide an option to get the percentage in the form of uid-percentage name value pairs.” class Student { String name; double percentage; int uid; public Student(String name, double percentage, int uid) { this.name = name; this.percentage = percentage; this.uid = uid; } }We collect the student list into a generic class: class StudentBatch { private List<Student> studentList; public StudentBatch() { studentList = new ArrayList<Student>(); } public void getSutdentMarkMap(Hashtable<Integer, Double> studentMarkMap) { if (studentMarkMap == null) { //Error } else { for (Student student : studentList) { studentMarkMap.put(student.uid, student.percentage); } } } /** * @param studentList the studentList to set */ public void setStudentList(List<Student> studentList) { this.studentList = studentList; } }Suppose we need to maintain the order of elements in the Map by their insertion order, so we would have to write a new method to get the map in the insertion order and for that we would be using LinkedHashMap. Instead if the method- getStudentMarkMap() was dependent on the Map interface and not the Hashtable concrete implementation, we could have avoided changing the StudentBatch class and instead pass in an instance of LinkedHashMap. public void getSutdentMarkMap(Map<Integer, Double> studentMarkMap) { if (studentMarkMap == null) { //Error } else { for (Student student : studentList) { studentMarkMap.put(student.uid, student.percentage); } } }PS: I know that Hashtable is an obsolete collection and not encouraged to be use. But I thought this would make another useful example for OCP principle. Some ways to keep your code closer to confirming OCP: Making all member variables private so that the other parts of the code access them via the methods (getters) and not directly. Avoiding typecasts at runtime- This makes the code fragile and dependent on the classes under consideration, which means any new class might require editing the method to accommodate the cast for the new class. Really good article written by Robert Martin on OCP. Reference: SOLID- Open Closed Principle from our JCG partner Mohamed Sanaulla at the “Experiences Unlimited” blog. Related Articles :SOLID – Single Responsibility Principle Are frameworks making developers dumb? Not doing Code Reviews? What’s your excuse? Why Automated Tests Boost Your Development Speed Using FindBugs to produce substantially less buggy code Things Every Programmer Should Know Java Tutorials and Android Tutorials list...
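Coming back to the loan example above, here is a small hypothetical usage sketch that makes the "open for extension" point concrete (it assumes the Validator base class from the first example declares isValid() as an abstract method; ConsumerGoodsLoanValidator and LoanClient are made-up names):

/** A new loan type is supported purely by extension - no existing class is modified. */
public class ConsumerGoodsLoanValidator extends Validator {
    public boolean isValid() {
        // consumer-goods-specific validation logic goes here
        return true;
    }
}

class LoanClient {
    public static void main(String[] args) {
        LoanApprovalHandler handler = new LoanApprovalHandler();
        // approveLoan() depends only on the Validator abstraction,
        // so it accepts any current or future subclass unchanged.
        handler.approveLoan(new PersonalLoanValidator());
        handler.approveLoan(new ConsumerGoodsLoanValidator());
    }
}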

Technical debt & the Boiling Frog

I hope everybody among my readers is familiar with the concept of technical debt: if you do a quick hack to implement a feature it might be faster in the short run, but you have to pay interest on the technical debt in the form of higher development and maintenance effort. If you don’t pay back your technical debt by refactoring your code, sooner or later your quick hack will have turned into a quick and highly expensive hack. This metaphor works well to communicate the need for refactorings – provided at least one person has realized the need for them. But in various cases nobody in the project realizes that there is a problem until the team faces a huge debt which seems impossible to pay back. I see two possible reasons: Being blind: There are different levels of cleanliness that people consider clean enough. In interviews I have repeatedly talked to people who consider a 20-line method a good thing and don’t have a problem with nested control structures five levels deep. Those developers wouldn’t notice anything smelly when looking at code which I would consider CMD (Code of Mass Destruction). This problem is most of the time fairly easy to fix by teaching and coaching. But there is a more insidious possible reason: Being a frog in boiling water: There is a theory that a frog sitting in cold water doesn’t jump out of the water if you heat it really slowly, until it dies. Wikipedia isn’t decisive on whether this is actually true, but I definitely see this effect in software development. It looks like this: at some point you find something in your code that looks like it has an easy abstraction. So you build a little class encapsulating that behavior and use that class instead. It works great, so that little class gets used a lot. Some time later the class gets a new feature to handle a variation of the original case. And it goes on like this, until one day what was a simple abstraction has turned into a complex library, possibly even a framework. And now that framework is a problem on its own. It’s getting hard to understand how to use it. And you realize you are a poor little frog sitting in boiling water with no idea how you got there. Hint: it’s a good idea to jump even when it is a little late. Why does this happen? Just as the frog has a problem sensing the small change in temperature and realizing it is getting into trouble, the developer doesn’t see that he is hurting the quality of the code base until it is too late. Again: why? Let’s make the example a little more specific. You have blocks of 10 lines of simple repetitive code in N places in your code base. You replace them with a call to a simple class of 40 lines of code, so you save (9*N – 40) lines. At the call sites your code gets significantly simpler; of course the class is a little more complex, but that’s ok. Now, while implementing a new feature, you are about to create another of those 10-line blocks. Obviously you want to use the helper class. But it’s not fit for the job. You need to add a feature to it. That’s ok. You also have to add something to the public API to turn that feature on or off. Maybe it’s a new constructor, a new method or an additional parameter. That’s not ok. Until you changed the API of your class the changes were local to the helper class and its usage at the new site. But when you changed the API, you added complexity to all the call sites of your class. Whenever you now call that helper you have to think a little more about the correct API to use. This unfortunately isn’t easy to see.
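A hypothetical illustration of that slow temperature rise – the helper below started as a trivial one-purpose class, and each "small" new option leaked into the public API that every call site now has to think about (the CsvLine classes are invented for this sketch, not taken from any real project):

import java.util.List;

// Version 1: a tiny helper that replaced ten repetitive lines at each call site.
class CsvLine {
    String write(List<String> values) {
        StringBuilder sb = new StringBuilder();
        for (String value : values) {
            if (sb.length() > 0) sb.append(',');
            sb.append(value);
        }
        return sb.toString();
    }
}

// A few "small" features later: every variation surfaced as a constructor argument
// or method parameter, so every caller now juggles separators, quoting and nulls.
class CsvLineV5 {
    private final char separator;
    private final boolean quoteValues;
    private final String nullPlaceholder;

    CsvLineV5(char separator, boolean quoteValues, String nullPlaceholder) {
        this.separator = separator;
        this.quoteValues = quoteValues;
        this.nullPlaceholder = nullPlaceholder;
    }

    String write(List<String> values, boolean appendNewline) {
        StringBuilder sb = new StringBuilder();
        for (String value : values) {
            if (sb.length() > 0) sb.append(separator);
            String v = (value == null) ? nullPlaceholder : value;
            sb.append(quoteValues ? "\"" + v + "\"" : v);
        }
        if (appendNewline) sb.append('\n');
        return sb.toString();
    }
}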
So it can easily happen that your API slowly becomes so complex that using it is more painful than just writing down the 10 lines it replaces. Reference: The Boiling Frog from our JCG partner Jens Schauder at the Schauderhaft Blog. Related Articles :Dealing with technical debt Services, practices & tools that should exist in any software development house Measuring Code Complexity How many bugs do you have in your code? Java Tutorials and Android Tutorials list...

What is NoSQL ?

NoSQL is a term used to refer to a class of database systems that differ from traditional relational database management systems (RDBMS) in many ways. RDBMSs are accessed using SQL, hence the term NoSQL implies not accessed by SQL – more specifically not an RDBMS, or more accurately, not relational. Some key characteristics of NoSQL databases are :They are distributed, can scale horizontally and can handle data volumes of the order of several terabytes or petabytes, with low latency. They have less rigid schemas than a traditional RDBMS. They have weaker transactional guarantees. As suggested by the name, these databases do not support SQL. Many NoSQL databases model data as rows with column families, key value pairs or documentsTo understand what non-relational means, it might be useful to recap what relational means. Theoretically, relational databases comply with Codd’s 12 rules of the relational model. More simply, in an RDBMS a table is a relation and a database has a set of such relations. A table has rows and columns. Each table has constraints and the database enforces the constraints to ensure the integrity of data. Each row in a table is identified by a primary key and tables are related using foreign keys. You eliminate duplicate data during the process of normalization, by moving columns into separate tables but keeping the relation using foreign keys. Getting data out of multiple tables requires joining the tables using the foreign keys. This relational model has been useful in modeling most real world problems and has been in widespread use for the last 20 years. In addition, RDBMS vendors have gone to great lengths to ensure that RDBMSs do a great job in maintaining the ACID (atomicity, consistency, isolation, durability) transactional properties of the data stored. Recovery from unexpected failures is supported. This has led to relational databases becoming the de facto standard for storing enterprise data. If RDBMSs are so good, why does anyone need NoSQL databases? Even the largest enterprises have users only in the order of thousands and data requirements in the order of a few terabytes. But when your application is on the internet, where you are dealing with millions of users and data in the order of petabytes, things start to slow down with an RDBMS. The basic operations with any database are read and write. Reads can be scaled by replicating data to multiple machines and load balancing read requests. However this does not work for writes, because data consistency needs to be maintained. Writes can be scaled only by partitioning the data, but this affects reads, as distributed joins can be slow and hard to implement. Additionally, to maintain ACID properties, databases need to lock data at the cost of performance. The Googles, Facebooks and Twitters of the world have found that relaxing the constraints of RDBMSs and distributing data gives them better performance for use cases that involveLarge datasets of the order of petabytes. Typically this needs to be stored using multiple machines. The application does a lot of writes. Reads require low latency. Data is semi-structured. You need to be able to scale without hitting a bottleneck. The application knows what it is looking for. Ad hoc queries are not required.What are the NoSQL solutions out there? There are a few different types. 1. Key Value Stores They allow clients to read and write values using a key. Amazon’s Dynamo is an example of a key value store.
get(key) returns an object or list of objects put(key,object) store the object as a blob Dynamo use hashing to partition data across hosts that store the data. To ensure high availability, each write is replicated across several hosts. Hosts are equal and there is no master. The advantage of Dynamo is that the key value model is simple and it is highly available for writes. 2. Document stores The key value pairs that make up the data are encapsulated as a document. Apache CouchDB is an example of a document store. In CouchDB , documents have fields. Each field has a key and value. A document could be "firstname " : " John ", "lastname " : "Doe" , "street " : "1 main st", "city " : "New york"In CouchDB, distribution and replication is peer to peer. Client interface is RESTful HTTP, that integrated well with existing HTTP loadbalancing solutions. 3. Column based stores Read and write is done using columns rather than rows. The best known examples are Google’s BigTable and the likes of HBase and Cassandra that were inspired by BigTable. The BigTable paper says that BigTable is a sparse, distributed, persistent, multidimensional sorted Map. While that sentence seems complicated, reading each word individually gives clarity.sparse – some cells can be empty distributed – data is partitioned across many hosts persistent – stored to disk multidimensional – more than 1 dimension Map – key and value sorted – maps are generally not sorted but this one isThis sample might help you visualize a BigTable map { row1:{ user:{ name: john id : 123 }, post: { title:This is a post text : xyxyxyxx } } row2:{ user:{ name: joe id : 124 }, post: { title:This is a post text : xyxyxyxx } } row3:{ user:{ name: jill id : 125 }, post: { title:This is a post text : xyxyxyxx } }}The outermost keys row1,row2, row3 are analogues to rows. user and post are what are called column families. The column family user has columns name and id. post has columns title and text. Columnfamily:column is how you refer to a column. For eg user:id or post:text. In Hbase, when you create the table, the column families need to be specified. But columns can be added on the fly. HBase provides high availability and scalability using a master slave architecture. Do I needs a NoSQL store ? You do not need a NoSQL store ifAll your data fits into 1 machine and does not need to be partitioned. You are doing OLTP which required the ACID transaction properties and data consistency that RDBMSs are good at. You need ad hoc querying using a language like SQL. You have complicated relationships between the entities in your applications. Decoupling data from application is important to you.You might want to start considering NoSQL stores ifYour data has grown so large that it can no longer be handled without partitioning. Your RDBMS can no longer handle the load. You need very high write performance and low latency reads. Your data is not very structured. You can have no single point of failure. You can tolerate some data inconsistency.Bottomline is that NoSql stores are a new and complex technology. There are many choices and no standards. There are specific use cases for which NoSql is a good fit. But RDBMS does just fine for most vanilla use cases. Reference: What is NoSQL ? from our JCG partner Manoj Khangaonkar at The Khangaonkar Report. Related Articles :Cassandra vs MongoDB vs CouchDB vs Redis vs Riak vs HBase comparison SQL or NOSQL: That is the question? Using MongoDB with Morphia Java SE 7, 8, 9 – Moving Java Forward...
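As a concrete illustration of the column-family model above, here is a sketch using the classic HBase Java client. It assumes a running HBase installation and an already created table named "blog" with the "user" and "post" column families from the example; nothing here comes from the article itself:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseColumnFamilyExample {

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "blog");

        // Write one row: a cell is addressed as columnfamily:qualifier
        Put put = new Put(Bytes.toBytes("row1"));
        put.add(Bytes.toBytes("user"), Bytes.toBytes("name"), Bytes.toBytes("john"));
        put.add(Bytes.toBytes("post"), Bytes.toBytes("title"), Bytes.toBytes("This is a post"));
        table.put(put);

        // Read it back
        Result result = table.get(new Get(Bytes.toBytes("row1")));
        String name = Bytes.toString(result.getValue(Bytes.toBytes("user"), Bytes.toBytes("name")));
        System.out.println("user:name = " + name);

        table.close();
    }
}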

Understanding the Vertical Slice

One of the biggest challenges in breaking down backlogs is knowing how to split up the work from a backlog into right sized pieces. I’ve already talked about the concept that smaller is better, but I we haven’t really addressed the decision of how to actually divide a backlog up to make it smaller.The default path Most developers trying to break down a backlog into smaller chunks will automatically head down the path of using a “horizontal slice.”This is how we tend to think. What do I mean by a horizontal slice? A horizontal slice is basically a slice through the feature or backlog that horizontally divides up the architecture. Most things are built this way. If you were to build a house, you would probably start by slicing up the project horizontally. You would first pour the foundation. Then put up the walls. Then put on the roof and many more steps, leaving the finishing work for last. This same thinking usually gets applied to breaking up backlogs in Agile development. It would seem pretty silly to build a house where you finished one room completely at a time. Agile software development is different There is a distinct difference though, between developing software in an Agile way and building a house. The big difference is that in Agile software development, true Agile development, you don’t know exactly what you are going to build until you are done building it. With a house this is rarely the case. With a house, you have some blueprints that you have drawn up ahead of time. You know exactly where each wall will be and where each outlet will be. You may have even built houses before that are very similar. When building software, unless you are taking a waterfall approach and planning everything upfront, you don’t know what you are really building until you are done. Before you object to this statement, consider this: This is the point of Agile development. Agile means responding to change. Building a house, you do not expect the customer to say: “Hmm, yeah, I don’t really like that wall there.” “Actually, I am thinking we are going to need 5 bedrooms now.” In software development, you are expecting statements analogous to the above! So what is vertical slicing? Simply put, building one room at a time. But it’s not functional! Who wants a house one room at a time?!? Correct! It is not functional as a house, but we can pour more foundation, change how we are going to do the rest of the rooms and even knock down the walls and start over without incurring a huge cost. The point in building our software “one room at a time,” is that we are giving the customer a chance to see the product as it is being built in a way that matters to them and enables them to test it out. Sure they aren’t going to be able to live in it until it is all done. But, they will have the ability to step into a room and envision it with all their furniture in there. Customers don’t care about foundations and framed in walls. As a developer, you might be able to look at some foundation and framed in walls and envision what the house will look like, but the customer can’t and worse yet, it can’t be tested. Vertical slicing in software development is taking a backlog that might have some database component, some business logic and a user interface and breaking it down into small stepwise progressions where each step cuts through every slice. 
The idea is that instead of breaking a backlog up into the following:Implement the database layer for A, B and C Implement the business logic layer for A, B and C Implement the user interface for A, B and CThe backlog is broken up into something like:Implement A from end to end Implement B from end to end Implement C from end to endSounds easy enough, why the debate? Because it is NOT easy. I’m not going to lie to you. It is MUCH easier to slice up a backlog horizontally. As developers we tend to think about the horizontal slicing when we plan out the implementation of a backlog. We tend to want to implement things by building one layer at a time. Thinking about how to break apart a backlog into vertical slices requires us to step outside the understanding of the code and implementation and instead think about the backlog in small pieces of working functionality. There is almost always some progression of functionality that can be found for a large backlog. What I mean by this is that there are almost always smaller steps or evolutions in functionality that can be created in order to produce and end result in software development. Sometimes the steps that are required to break up a backlog vertically are going to result in a bit of waste. Sometimes you are going to purposely create a basic user interface that you know you are going to redo parts of as you implement more vertical slices. This is OK! It is better to plan small amounts of rework than to build up an entire feature one horizontal slice at a time and have to rework huge parts of the feature that weren’t planned for. So what is the benefit? You might be thinking to yourself that this sounds like more work without much benefit. So why would I bother to break up a backlog vertically? Is it really that important? I’ve already hinted at some of the benefits of slicing things vertically. The true impetus behind vertical slicing is the very cornerstone of Agile methodology. It is about delivering working functionality as soon as possible. We aren’t going to cover the whole reasoning behind this idea in Agile development. I am assuming that you already subscribe to the idea that delivering working functionality as soon as possible is important and valuable. Based on that premise alone, you can see that horizontal slicing is in direct violation to one of Agile methodology’s core tenants. It is interesting to me how many people are huge proponents of breaking entire systems up into functional pieces that are delivered one piece at a time, but are so opposed to doing it at the micro scale when dealing with individual backlog items. If you are opposed to what I am saying about vertical slicing, you really have to ask yourself whether or not you truly subscribe to the same idea applied at the larger level, because there really isn’t a difference. Reference: Understanding the Vertical Slice from our JCG partner John Sonmez at the Making the Complex Simple blog. Related Articles :Breaking Down an Agile process Backlog Even Backlogs Need Grooming Can we replace requirement specification with better understanding? Agile software development recommendations for users and new adopters Save money from Agile Development 9 Tips on Surviving the Wild West Development Process Not doing Code Reviews? What’s your excuse?...

Regular Unit Tests and Stubs – Testing Techniques 4

My last blog was the third in a series of blogs on approaches to testing code and discussing what you do and don’t have to test. It’s based around my simple scenario of retrieving an address from a database using a very common pattern:…and I proffered the idea that any class that doesn’t contain any logic doesn’t really need unit testing. In this I included my data access object, DAO, preferring instead to integration test this class to ensure it worked in collaboration with the database. Today’s blog covers writing a regular or classical unit test that enforces test subject isolation using stub objects. The code we’ll be testing is, again, the AddressService: @Component public class AddressService {private static final Logger logger = LoggerFactory.getLogger(AddressService.class);private AddressDao addressDao;/** * Given an id, retrieve an address. Apply phony business rules. * * @param id * The id of the address object. */ public Address findAddress(int id) {logger.info("In Address Service with id: " + id); Address address = addressDao.findAddress(id);address = businessMethod(address);logger.info("Leaving Address Service with id: " + id); return address; }private Address businessMethod(Address address) {logger.info("in business method");// Apply the Special Case Pattern (See MartinFowler.com) if (isNull(address)) { address = Address.INVALID_ADDRESS; }// Do some jiggery-pokery here....return address; }private boolean isNull(Object obj) { return obj == null; }@Autowired @Qualifier("addressDao") void setAddressDao(AddressDao addressDao) { this.addressDao = addressDao; } }Michael Feather’s book Working Effectively with Legacy Code states that a test is not a unit test if:It talks to a database. It communicates across a network. It touches the file system. You have to do special things to your environment (such as editing configuration files) to run it.To uphold these rules, you need to isolate your object under test from the rest of your system, and that’s where stub objects come in. Stub objects are objects that are injected into your object and are used to replace real objects in test situations. Martin Fowler defines stubs, in his essay Mocks Aren’t Stubs as: “Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what’s programmed in for the test. Stubs may also record information about calls, such as an email gateway stub that remembers the messages it ‘sent’, or maybe only how many messages it ‘sent’”. Picking a word to describe stubs is very difficult, I could choose dummy or fake, but there are types of replacement object that are known as dummies or fakes – also described by Martin Fowler:Dummy objects are passed around but never actually used. Usually they are just used to fill parameter lists. Fake objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an in memory database is a good example).However, I have seen other definitions of the term fake object,for example Roy Osherove in is book The Art Of Unit Testing defines a fakes object as:A fake is a generic term that can be used to describe either a stub or a mock object…because the both look like the real object.…so I, like many others, tend to call all replacement objects either mocks or stubs as there is a difference between the two, but more on that later. 
In testing the AddressService, we need to replace the real data access object with a stub data access object, which in this case looks something like this:

public class StubAddressDao implements AddressDao {

    private final Address address;

    public StubAddressDao(Address address) {
        this.address = address;
    }

    /**
     * @see com.captaindebug.address.AddressDao#findAddress(int)
     */
    @Override
    public Address findAddress(int id) {
        return address;
    }
}

Note the simplicity of the stub code. It should be easily readable and maintainable, and it should NOT contain any logic or need a unit test of its own. Once the stub code has been written, next follows the unit test:

public class ClassicAddressServiceWithStubTest {

    private AddressService instance;

    @Before
    public void setUp() throws Exception {
        /* Create the object to test */
        /* Setup data that's used by ALL tests in this class */
        instance = new AddressService();
    }

    /**
     * Test method for
     * {@link com.captaindebug.address.AddressService#findAddress(int)}.
     */
    @Test
    public void testFindAddressWithStub() {

        /* Setup the test data - stuff that's specific to this test */
        Address expectedAddress = new Address(1, "15 My Street", "My Town",
                "POSTCODE", "My Country");
        instance.setAddressDao(new StubAddressDao(expectedAddress));

        /* Run the test */
        Address result = instance.findAddress(1);

        /* Assert the results */
        assertEquals(expectedAddress.getId(), result.getId());
        assertEquals(expectedAddress.getStreet(), result.getStreet());
        assertEquals(expectedAddress.getTown(), result.getTown());
        assertEquals(expectedAddress.getPostCode(), result.getPostCode());
        assertEquals(expectedAddress.getCountry(), result.getCountry());
    }

    @After
    public void tearDown() {
        /*
         * Clear up to ensure all tests in the class are isolated from each
         * other.
         */
    }
}

Note that in writing a unit test, we’re aiming for clarity. A mistake often made is to regard test code as inferior to production code, with the result that it’s often messier and less legible. Roy Osherove in The Art of Unit Testing puts forward the idea that test code should be more readable than production code. Clear tests should follow these basic, linear steps:

  • Create the object under test. In the code above this is done in the setUp() method, as I’m using the same object under test for all (one) tests.
  • Set up the test. This is done in the test method testFindAddressWithStub(), as the data used in a test is specific to that test.
  • Run the test.
  • Tear down the test. This ensures that tests are isolated from each other and can be run IN ANY ORDER.

Using a simplistic stub yields two benefits: the AddressService is isolated from the outside world, and the tests run quickly. How brittle is this kind of test? If your requirements change, then the test and the stub change – not so brittle after all? As a comparison, my next blog re-writes this test using EasyMock. Reference: Regular Unit Tests and Stubs – Testing Techniques 4 from our JCG partner Roger Hughes at the Captain Debug blog. Related Articles :Testing Techniques – Not Writing Tests The Misuse of End To End Tests – Testing Techniques 2 What Should you Unit Test? – Testing Techniques 3 Unit Testing Using Mocks – Testing Techniques 5 Creating Stubs for Legacy Code – Testing Techniques 6 More on Creating Stubs for Legacy Code – Testing Techniques 7 Why You Should Write Unit Tests – Testing Techniques 8 Some Definitions – Testing Techniques 9 Using FindBugs to produce substantially less buggy code Developing and Testing in the Cloud...
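As a small addendum of my own (it isn’t part of Roger’s original post), the same stubbing approach can also exercise the Special Case branch in businessMethod(): a StubAddressDao constructed with null simulates a missing database row, so the service should hand back Address.INVALID_ADDRESS. A rough sketch of such a test, assuming the classes shown above and an arbitrary id:

@Test
public void testFindAddressWithStubReturningNull() {

    /* A stub returning null simulates 'no row found' in the database */
    instance.setAddressDao(new StubAddressDao(null));

    /* Run the test with an arbitrary id */
    Address result = instance.findAddress(42);

    /* The Special Case Pattern should kick in */
    assertEquals(Address.INVALID_ADDRESS, result);
}

Dropped into ClassicAddressServiceWithStubTest, this keeps the same isolation and speed benefits while also covering the unhappy path.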
devoxx-logo

Devoxx 2011 Impressions

Devoxx 2011 is over, and it was awesome. Finally, after a weekend spent with my wife and kid (they hadn’t seen much of me during the past week), I’ve found the time to write down some thoughts. It was my sixth Devoxx, the first one being in 2006 – back when I was still a student and it was called Javapolis. So obviously I have some history with Devoxx, which allows me to say that (at least for me) this was one of the best editions ever. It may have to do with the fact that I didn’t have a talk this year (Frederik had that honour), so I could enjoy the conference to the fullest. That said, time for a wrap-up!

Java == boring?

From a first glance at the Devoxx schedule it was clear that Java was no longer the first-class citizen it used to be at Devoxx. Dynamic languages, Android and HTML5 were this year’s favourites. And, in my opinion, the organisers decided wisely. We’re past the time when every new Spring, Hibernate or JBoss version drives cool and innovative features. EE6 and its features are well known and understood by now. UI frameworks have come and gone (bye-bye, JSF!). If you think about it, what ‘big hot item’ has happened in the last year in Java land? … Exactly. And that in itself is not a bad thing. Java is as mainstream as mainstream can get. Solid, stable and here to stay for a long time (it almost sounds like Cobol). Sure, there are plenty of opportunities and plenty of work to do if you’re a Java developer. But we have to admit that we’re not among the hottest of the hot these days (and probably haven’t been since the advent of Ruby on Rails). Which is why some polyglotism and peeking at other technologies can’t hurt, if you ask me.

University days

I attended Devoxx during both the University and the Conference days. Personally, I found the Conference sessions wildly more interesting than the University talks. Three-hour talks are just not my cup of tea; there is simply too much technical depth and information in them. I’d rather read a book at my own pace than try for three hours to frantically follow every slide of the talk. Lessons learned for next year. One exception: I really liked the Android Jumpstart talk by Lars Vogel. I have some iOS development experience, and seeing how straightforward Android development is was interesting, to say the least.

Keynote: next year not Oracle, pretty please?

I think everybody wholeheartedly agreed that the keynote(s) on Wednesday by Oracle were boring as hell. I had high expectations when the first Oracle guy started with the obligatory Oracle disclaimer but invited the audience to find the spelling mistakes he had added on purpose. It was downhill from there, though. Nothing new in the presentations, slides obviously originating from sales and marketing, frantically avoiding the word ‘Android’ (J2ME will be an important focus in the future… uuuh yeah, right!). I mean, these guys got a gold-plated chance to get on stage and show the tech audience that they rock as the new steward of Java. Anyway, enough words written on this topic. The fact that I saw several people sleeping in their seats speaks for itself.

Keynote: but Google can certainly return!

The Android keynote by Tim Bray on Thursday was luckily of a different kind. A great speaker (I’ve been following his blog since I discovered Ruby on Rails years ago), humorous, a new feature with examples here and there (I’m an iPhone guy, but hell, Android sure has some sweet stuff up its sleeves!) and a real call to action at the end (it got the whole room very, very silent).
I was in doubt about getting up early again for the keynote after the debacle of the day before, but it sure was worth it. On a side note: Stephan’s keynote (this year announcing Devoxx France!) was as good as ever. He can definitely return next year.

Sessions: too much choice!

The hardest part about Devoxx is the choice. With seven talks going on in parallel, one has to choose. And sometimes I had three sessions picked at the same time… damn! Luckily, all talks should be on Parleys by Christmas (and every Devoxx attendee gets access!). I saw plenty of sessions during Devoxx, but some of them really stood out:

Activiti + Vaadin, A match made in heaven

No surprise here. My fellow Activist Frederik did a great co-op presentation with the Vaadin guys. Both Activiti and Vaadin are of course enormously cool frameworks, so the combination surely gives some fireworks. There were some nice new features in the talk too, which they kept secret until the talk itself (like the annotations to match a form with a Vaadin view). Someone in the audience taped the whole talk and has put it on YouTube. Enjoy!

Java: The Good, the Bad, and the Ugly Parts by Joshua Bloch

As Joshua Bloch’s sessions tend to be the most popular ones (last year I couldn’t get in), I made sure to arrive in good time. The content of the talk was an overview of basically JDK 1.0: what was in there that helped Java get to where it is today, and what might have hindered it. As you would expect from him, there were some nice puzzlers thrown in between. One of the best speakers out there, and he really knows his stuff.

PhoneGap for Hybrid App Development by Brian LeRoux

I only knew the idea behind PhoneGap, so this session was excellent for soaking up some information. Brian is an awesome speaker, with a typical start-up attitude, the obligatory cursing and quotes to remember. Although his demos (except one) didn’t work because his Mac couldn’t see his Android device, followed by cursing at adb and all that is holy, I understood the goal and value of PhoneGap. I was really impressed by PhoneGap Build, a cloud environment where you upload your HTML and JavaScript and it gets compiled for every platform you wish (even BlackBerry). That demo went well, and after uploading his example HTML he could just scan the QR code on his screen and the app got installed on his phone. Sweet! People who work with me in real life know that I’m a fan of accusing others of yak shaving. Brian’s slide on ‘serious business time’ couldn’t have said it better.

Meh! It’s only cross site scripting, what’s the big deal? by Cambell Murray

I had expected a technical talk, but instead we got juicy examples and stories about (anonymous, of course) customers and security bugs that led to huge problems. A really great speaker; it was as if he was just at a bar, talking over a beer. It was a clear eye-opener that there is a world outside our comfortable Java code where hackers and criminal entities are extremely inventive and do things we as ‘normal’ developers would never have thought of.

Rules for Good UI Design by Joe Nuxoll

This session was placed in one of the smaller rooms, which quickly proved to be a mistake. In no time the room was completely packed, and people sat everywhere they could. This is a really good sign that developers are understanding the need for UI design. Joe is a UI designer for the Tesla S interface who used to work for Apple (which might have helped the session’s popularity), and the examples were of course chosen from that background.
The first part of the talk was a tad boring (but I had already read quite a bit on the topic); the second part was full of examples and tips. The audience size definitely proves that Java developers are evolving too.

WWW: World Wide Wait? A Performance Comparison of Java Web Frameworks by Stijn Van den Enden (and others)

I know Stijn personally, so choosing his session was a no-brainer. Again, the topic lured a lot of people, and the room sold out very fast. The talk took Matt Raible’s session from last year’s Devoxx as a starting point, but brought real numbers and, more importantly, real math to the table. Five frameworks (GWT, Wicket, two JSF implementations and Vaadin) were put to the performance test. The performance test architecture was thoroughly explained, as were the measurement calculations, their validation and the potential pitfalls. Very professionally presented. The big winner of this test was GWT, albeit needing the most coding compared to the others. My framework of choice, Vaadin, did very well, and in a chat with Stijn later on he also said that Vaadin’s rapid development makes it a framework to consider. JSF was the big loser of the day (at least the MyFaces implementation; Mojarra did better) and proved to scale much, much worse than the others. I had a French (or was it Spanish?) project lead sitting next to me who had expected validation for ‘choosing the standard’. His sighs and painful yelps with every new graph shown on screen almost made me feel compassion.

The networking

No, I’m not talking about the Wi-Fi (which was horrible), but about meeting people. As in every past edition, Devoxx is a great way to meet up with many people from the Java ecosystem. I met a lot of old friends and ex-colleagues, and got to know some new people. That alone is worth the trip to Antwerp.

Devoxx 2012

Devoxx 2011 was great in every way, and for me it is still the best conference out there – period. Looking forward to next year! Reference: Devoxx 2011 Impressions from our JCG partner Joram Barrez at the “Small Steps with Big Feet” Blog. Related Articles :Devoxx Day 1 DOAG 2011 vs. Devoxx – Value and Attraction Java SE 7, 8, 9 – Moving Java Forward Java EE Past, Present, & Cloud 7 Java Tutorials and Android Tutorials list...