


When to replace Unit Tests with Integration Test

It's been a while since I started thinking about integration versus unit testing. After a lot of googling, questions on Stack Overflow and reading through many books, I would like to share the information I found and the decision I arrived at. I think that if you put this question to any Java developer, you will instantly get an answer in favour of unit tests. They will tell you that if you cannot unit test a piece of code, the unit you have written is too large and you have to break it down into multiple testable units. I found these Stack Overflow questions useful in getting a good understanding of both terms:

Integration Testing best practices
Integration vs Unit Testing
JUnit: splitting integration tests and unit tests
What is an integration test exactly?

Unit testing is widely followed in the Java world, with most projects now also enforcing code coverage alongside it. Integration testing is a relatively new and often misunderstood concept; even though it is practiced much less than unit testing, it has various uses. Let's revisit both with a simple user story:

The user gives an INPUT in a UI form and, when the submit button is clicked, the PROCESSED output is shown on the next page. Since the results should be editable, both INPUT and OUTPUT should be saved in the database.

Technical Design

Let's assume that our application will grow rapidly in the future and design it with that in mind. The design is a standard four-tier layout with view, service and DAO layers, plus an application layer which contains the logic for converting the input into the output. To implement the story we wrote five methods: one in the service, two DAO methods to save input and output, one containing the business logic, and one method in the view layer to prepare the input.

Writing some unit test cases

If we were to unit test the story, we would need to write five tests. As the different layers are dependent on each other, we would need mock objects for testing. But apart from the one method in the application layer that does the actual processing, is there any need to unit test the other parts? Take this method, for example:

public void saveInput(Input input) {
    Session session = sessionFactory.getCurrentSession();
    session.save(input);
}

When you unit test this, you will typically use a mock of the sessionFactory and the code will always pass. Hence I don't see much point in writing a unit test here. If you look carefully, apart from the application layer, all the other methods are similar to this one.

What can we achieve with an integration test?

Read here as a heads up on integration testing. As we have seen, most of the unit tests for this story are not effective. But we can't skip testing altogether, because we want code coverage and we want our code to be self-testing. According to Martin Fowler's article on Continuous Integration, you should write code that can test the other code, and I feel that good integration tests can do this for you. Let's write a simple integration test for this situation:

@Configurable(autowire = Autowire.BY_NAME)
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {"classpath:applicationContext.xml"})
public class SampleIntTest {

    @Autowired
    private SamService samService;

    @Test
    public void testAppProcessing() {
        Input input = new Input();
        // prepare input for the testing
        Output output = samService.processInputs(input);
        // validate output against the given input
        if (!isValidate(input, output)) {
            Assert.fail("Validation failed!");
        }
    }
}

I have skipped the view layer here as it is not so relevant: we expect the INPUT from the UI in an Input bean, and that is what the service layer will receive. With these few lines of code you can exercise the full functionality of the application from the service layer down. It is preferable to use an in-memory database like H2 for integration tests.
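As a rough sketch of how that might be wired up (this is my illustration, not part of the original article; the factory class name and the H2 URL options are assumptions, and in a Spring project the same bean would normally be declared in the test application context so that @ContextConfiguration picks it up):

import javax.sql.DataSource;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

public class TestDataSourceFactory {

    // Returns an in-memory H2 DataSource so integration tests start
    // from a clean database on every run.
    public static DataSource createInMemoryDataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("org.h2.Driver");
        dataSource.setUrl("jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1");
        dataSource.setUsername("sa");
        dataSource.setPassword("");
        return dataSource;
    }
}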
If the table structure is too complicated for an in-memory database, you can use a test instance of the database instead and prepare a script that deletes all the data, running it as part of the test so that the database state is restored. This is important because the next run will save the same data again. Another advantage of integration tests is that if you change the implementation, the test does not need to change, because it is only concerned with the inputs and outputs. This makes it possible to refactor the code without touching the test code. You can also schedule these tests to measure the stability of the application, so that any regression issues are caught early.

As we have seen, integration tests are easy to write and useful, but we need to make sure they do not replace unit testing completely. Whenever there is a rule-based component (like the LogicalProcesser in our case), unit testing should be mandatory, because you can't cover all of its scenarios with integration tests. So, as always with JEE, it is about making a choice and sticking to it. For the last few months we have been practicing this in our teams and it is going really well: we made integration tests mandatory and unit tests optional. Always review the code coverage and make sure you get good coverage in the core of the system (the LogicalProcesser in our case).

Reference: When to replace Unit Tests with Integration Test from our JCG partner Manu PK at the "The Object Oriented Life" Blog.

Related Articles:
Regular Unit Tests and Stubs – Testing Techniques 4
Code coverage with unit & integration tests
Java RESTful API integration testing
Spring 3 Testing with JUnit 4 – ContextConfiguration and AbstractTransactionalJUnit4SpringContextTests
Mock Static Methods with PowerMock
Java Tutorials and Android Tutorials list...

SOLID – Open Closed Principle

The Open Closed Principle (OCP) states that software entities (classes, modules, functions) should be OPEN for EXTENSION and CLOSED for MODIFICATION. Let's reflect on that statement: software entities, once written, shouldn't be modified to add new functionality; instead, they should be extended to add it. In other words, you don't touch the existing modules, and therefore you don't disturb the existing functionality; you extend the modules to implement the new requirement. As a result your code is less rigid and fragile, and also extensible. The term OCP was coined by Bertrand Meyer.

How can we conform to the OCP? It's simple: let the modules (classes) depend on abstractions, so that new features can be added by creating new extensions of those abstractions. Let me explain with an example. Suppose you are writing a module to approve personal loans, and before doing that you want to validate the personal information. In code, the situation looks like this:

public class LoanApprovalHandler {
    public void approveLoan(PersonalLoanValidator validator) {
        if (validator.isValid()) {
            // Process the loan.
        }
    }
}

public class PersonalLoanValidator {
    public boolean isValid() {
        // Validation logic
    }
}

So far so good. As you all know, requirements never stay the same, and now it is required to approve vehicle loans, consumer goods loans and what not. One approach to this requirement is:

public class LoanApprovalHandler {
    public void approvePersonalLoan(PersonalLoanValidator validator) {
        if (validator.isValid()) {
            // Process the loan.
        }
    }

    public void approveVehicleLoan(VehicleLoanValidator validator) {
        if (validator.isValid()) {
            // Process the loan.
        }
    }

    // Methods for approving other loans.
}

public class PersonalLoanValidator {
    public boolean isValid() {
        // Validation logic
    }
}

public class VehicleLoanValidator {
    public boolean isValid() {
        // Validation logic
    }
}

We have edited the existing class to accommodate the new requirements; in the process we ended up changing the name of the existing method and also adding new methods for the different types of loan approval. This clearly violates the OCP. Let's implement the requirement in a different way:

/**
 * Abstract Validator class.
 * Extended to add different validators for different loan types.
 */
public abstract class Validator {
    public abstract boolean isValid();
}

/**
 * Personal loan validator.
 */
public class PersonalLoanValidator extends Validator {
    public boolean isValid() {
        // Validation logic.
    }
}

/*
 * Similarly, any new type of validation can be accommodated
 * by creating a new subclass of Validator.
 */

Using the above validators we can write a LoanApprovalHandler that depends on the Validator abstraction:

public class LoanApprovalHandler {
    public void approveLoan(Validator validator) {
        if (validator.isValid()) {
            // Process the loan.
        }
    }
}

To accommodate any new type of loan validator we just have to create a subclass of Validator and pass it to the approveLoan method. That way the class is CLOSED for modification but OPEN for extension.
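For instance, adding vehicle loan approval now only means adding a new subclass. The VehicleLoanValidator and demo class below are my own sketch, not code from the original post; they just illustrate how an extension plugs in without touching LoanApprovalHandler:

public class VehicleLoanValidator extends Validator {
    @Override
    public boolean isValid() {
        // Vehicle-specific validation logic would go here.
        return true;
    }
}

public class LoanApprovalDemo {
    public static void main(String[] args) {
        // The handler itself is untouched; only a new Validator subclass was added.
        LoanApprovalHandler handler = new LoanApprovalHandler();
        handler.approveLoan(new VehicleLoanValidator());
    }
}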
Another example: I was thinking of another hypothetical situation where the OCP can be of use. We maintain a list of students with their marks, a unique identification (uid) and a name, and we provide an option to get the percentages in the form of uid-percentage name/value pairs.

class Student {
    String name;
    double percentage;
    int uid;

    public Student(String name, double percentage, int uid) {
        this.name = name;
        this.percentage = percentage;
        this.uid = uid;
    }
}

We collect the student list into a containing class:

class StudentBatch {
    private List<Student> studentList;

    public StudentBatch() {
        studentList = new ArrayList<Student>();
    }

    public void getStudentMarkMap(Hashtable<Integer, Double> studentMarkMap) {
        if (studentMarkMap == null) {
            // Error
        } else {
            for (Student student : studentList) {
                studentMarkMap.put(student.uid, student.percentage);
            }
        }
    }

    /**
     * @param studentList the studentList to set
     */
    public void setStudentList(List<Student> studentList) {
        this.studentList = studentList;
    }
}

Suppose we now need to maintain the entries of the map in their insertion order. We would have to write a new method that returns the map in insertion order, using a LinkedHashMap. If getStudentMarkMap() had instead depended on the Map interface and not on the concrete Hashtable implementation, we could have avoided changing the StudentBatch class and simply passed in an instance of LinkedHashMap:

public void getStudentMarkMap(Map<Integer, Double> studentMarkMap) {
    if (studentMarkMap == null) {
        // Error
    } else {
        for (Student student : studentList) {
            studentMarkMap.put(student.uid, student.percentage);
        }
    }
}

PS: I know that Hashtable is an obsolete collection and its use is discouraged, but I thought it would make another useful example for the OCP.

Some ways to keep your code closer to conforming to the OCP:

Make all member variables private, so that the other parts of the code access them via methods (getters) and not directly.
Avoid typecasts at runtime. They make the code fragile and dependent on the concrete classes under consideration, which means any new class might require editing the method to accommodate a cast for that class.

There is a really good article on the OCP written by Robert Martin.

Reference: SOLID – Open Closed Principle from our JCG partner Mohamed Sanaulla at the "Experiences Unlimited" blog.

Related Articles:
SOLID – Single Responsibility Principle
Are frameworks making developers dumb?
Not doing Code Reviews? What's your excuse?
Why Automated Tests Boost Your Development Speed
Using FindBugs to produce substantially less buggy code
Things Every Programmer Should Know
Java Tutorials and Android Tutorials list...

Technical debt & the Boiling Frog

I hope everybody among my readers is familiar with the concept of technical debt: if you do a quick hack to implement a feature, it might be faster in the short run, but you have to pay interest on the technical debt in the form of higher development and maintenance effort. If you don't pay back your technical debt by refactoring your code, sooner or later your quick hack will have turned into a quick and highly expensive hack. This metaphor works well for communicating the need for refactoring, provided at least one person has realized the need for it. But in various cases nobody in the project realizes that there is a problem until the team faces a huge debt which seems impossible to pay back. I see two possible reasons:

Being blind: people differ on what level of cleanliness they consider clean enough. In interviews I have repeatedly talked to people who consider a 20-line method a good thing and have no problem with nested control structures five levels deep. Those developers wouldn't notice anything smelly when looking at code which I would consider CMD (Code of Mass Destruction). This problem is most of the time fairly easy to fix by teaching and coaching. But there is a more insidious possible reason:

Being a frog in boiling water: there is a theory that a frog sitting in cold water doesn't jump out of the water if you heat it really slowly, until it dies. Wikipedia isn't decisive on whether this is actually true, but I definitely see this effect in software development. It looks like this: at some point you find something in your code that looks like it has an easy abstraction. So you build a little class encapsulating that behaviour and use that class instead. It works great, so the little class gets used a lot. Some time later the class gets a new feature to handle a variation of the original case. And it goes on like this, until one day what was a simple abstraction has turned into a complex library, possibly even a framework. And now that framework is a problem of its own. It is getting hard to understand how to use it. And you realize you are a poor little frog sitting in boiling water with no idea how you got there. Hint: it's a good idea to jump even when it is a little late.

Why does this happen? Just as the frog has a problem sensing the small change in temperature and realizing he is getting into trouble, the developer doesn't see that he is hurting the quality of the code base until it is too late. Again: why? Let's make the example a little more specific. You have blocks of 10 lines of simple, repetitive code in N places in your code base. You replace them with calls to a simple class of 40 lines of code, so you save (9*N - 40) lines. At the call sites your code gets significantly simpler; of course the class is a little more complex, but that's OK. Now, while implementing a new feature, you are about to create another of those 10-line blocks. Obviously you want to use the helper class. But it's not fit for the job: you need to add a feature to it. That's OK. You also have to add something to its public API to turn that feature on or off. Maybe it's a new constructor, a new method or an additional parameter. That's not OK. Until you changed the API of your class, the changes were local to the helper class and its usage at the new site. But when you changed the API, you added complexity to all the call sites of your class. Whenever you now call that helper, you have to think a little harder about the correct API to use. This, unfortunately, isn't easy to see.
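As a purely hypothetical sketch (the original post names no concrete classes), the drift can be pictured like this: the first version of a small helper has one obvious way to call it, while the second version leaks the new feature into the public API as a constructor flag that every call site now has to think about.

// Version 1 of a hypothetical helper: one obvious way to use it.
public class LineFormatter {
    public String format(String[] fields) {
        StringBuilder line = new StringBuilder();
        for (String field : fields) {
            if (line.length() > 0) {
                line.append(';');
            }
            line.append(field);
        }
        return line.toString();
    }
}

// Version 2: the new feature shows up as a flag in the public API,
// so every existing call site now has to decide whether to quote fields.
public class QuotingLineFormatter {
    private final boolean quoteFields;

    public QuotingLineFormatter(boolean quoteFields) {
        this.quoteFields = quoteFields;
    }

    public String format(String[] fields) {
        StringBuilder line = new StringBuilder();
        for (String field : fields) {
            if (line.length() > 0) {
                line.append(';');
            }
            line.append(quoteFields ? "\"" + field + "\"" : field);
        }
        return line.toString();
    }
}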
So it can easily happen that your API slowly becomes so complex that using it is more painful than just writing down the 10 lines it replaces.

Reference: The Boiling Frog from our JCG partner Jens Schauder at the Schauderhaft Blog.

Related Articles:
Dealing with technical debt
Services, practices & tools that should exist in any software development house
Measuring Code Complexity
How many bugs do you have in your code?
Java Tutorials and Android Tutorials list...

What is NoSQL?

NoSQL is a term used to refer to a class of database systems that differ from traditional relational database management systems (RDBMS) in many ways. RDBMSs are accessed using SQL, hence the term NoSQL implies not accessed by SQL; more specifically, not RDBMS, or more accurately, not relational. Some key characteristics of NoSQL databases are:

They are distributed, can scale horizontally and can handle data volumes of the order of several terabytes or petabytes, with low latency.
They have less rigid schemas than a traditional RDBMS.
They have weaker transactional guarantees.
As suggested by the name, these databases do not support SQL.
Many NoSQL databases model data as rows with column families, as key/value pairs or as documents.

To understand what non-relational means, it might be useful to recap what relational means. Theoretically, relational databases comply with Codd's 12 rules of the relational model. More simply, in an RDBMS a table is a relation and a database is a set of such relations. A table has rows and columns. Each table has constraints, and the database enforces them to ensure the integrity of the data. Each row in a table is identified by a primary key, and tables are related using foreign keys. You eliminate duplicate data during the process of normalization by moving columns into separate tables while keeping the relation via foreign keys. Getting data out of multiple tables then requires joining the tables using those foreign keys. This relational model has been useful for modelling most real-world problems and has been in widespread use for the last 20 years. In addition, RDBMS vendors have gone to great lengths to ensure that RDBMSs do a great job of maintaining the ACID (atomic, consistent, isolated, durable) transactional properties of the stored data, and recovery from unexpected failures is supported. This has led to relational databases becoming the de facto standard for storing enterprise data.

If RDBMSs are so good, why does anyone need NoSQL databases? Even the largest enterprises have users only in the order of thousands and data requirements in the order of a few terabytes. But when your application is on the internet, where you are dealing with millions of users and data in the order of petabytes, things start to slow down with an RDBMS. The basic operations on any database are reads and writes. Reads can be scaled by replicating data to multiple machines and load balancing read requests. However, this does not work for writes, because data consistency needs to be maintained; writes can be scaled only by partitioning the data. But that in turn affects reads, as distributed joins can be slow and hard to implement. Additionally, to maintain the ACID properties, databases need to lock data at the cost of performance. The Googles, Facebooks and Twitters of this world have found that relaxing the constraints of RDBMSs and distributing data gives them better performance for use cases that involve:

Large datasets of the order of petabytes, which typically need to be stored across multiple machines.
Applications that do a lot of writes.
Reads that require low latency.
Semi-structured data.
The need to scale without hitting a bottleneck.
An application that knows what it is looking for; ad hoc queries are not required.

What are the NoSQL solutions out there? There are a few different types.

1. Key/value stores

They allow clients to read and write values using a key. Amazon's Dynamo is an example of a key/value store:

get(key) returns an object or a list of objects
put(key, object) stores the object as a blob
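As a rough sketch of that read/write model (the interface and class names below are hypothetical, not Dynamo's API), a key/value store boils down to something like this:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical key/value store contract: the whole API is get and put.
interface KeyValueStore<K, V> {
    V get(K key);
    void put(K key, V value);
}

// Toy in-memory implementation; a real store like Dynamo would hash the key
// to choose which hosts hold the value and would replicate each write.
class InMemoryKeyValueStore<K, V> implements KeyValueStore<K, V> {
    private final Map<K, V> data = new ConcurrentHashMap<K, V>();

    public V get(K key) {
        return data.get(key);
    }

    public void put(K key, V value) {
        data.put(key, value);
    }
}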
Dynamo uses hashing to partition data across the hosts that store it. To ensure high availability, each write is replicated across several hosts. All hosts are equal and there is no master. The advantage of Dynamo is that the key/value model is simple and it is highly available for writes.

2. Document stores

The key/value pairs that make up the data are encapsulated as a document. Apache CouchDB is an example of a document store. In CouchDB, documents have fields; each field has a key and a value. A document could be:

"firstname" : "John",
"lastname" : "Doe",
"street" : "1 main st",
"city" : "New york"

In CouchDB, distribution and replication are peer to peer. The client interface is RESTful HTTP, which integrates well with existing HTTP load-balancing solutions.

3. Column-based stores

Reads and writes are done using columns rather than rows. The best-known examples are Google's BigTable and the stores it inspired, such as HBase and Cassandra. The BigTable paper says that BigTable is a sparse, distributed, persistent, multidimensional sorted map. While that sentence seems complicated, reading each word individually gives clarity:

sparse – some cells can be empty
distributed – data is partitioned across many hosts
persistent – stored to disk
multidimensional – more than one dimension
map – keys and values
sorted – maps are generally not sorted, but this one is

This sample might help you visualize a BigTable map:

{
  row1: {
    user: {
      name: john
      id: 123
    },
    post: {
      title: This is a post
      text: xyxyxyxx
    }
  }
  row2: {
    user: {
      name: joe
      id: 124
    },
    post: {
      title: This is a post
      text: xyxyxyxx
    }
  }
  row3: {
    user: {
      name: jill
      id: 125
    },
    post: {
      title: This is a post
      text: xyxyxyxx
    }
  }
}

The outermost keys row1, row2 and row3 are analogous to rows. user and post are what are called column families. The column family user has the columns name and id; post has the columns title and text. columnfamily:column is how you refer to a column, for example user:id or post:text. In HBase, the column families need to be specified when you create the table, but columns can be added on the fly. HBase provides high availability and scalability using a master/slave architecture.

Do I need a NoSQL store?

You do not need a NoSQL store if:

All your data fits into one machine and does not need to be partitioned.
You are doing OLTP, which requires the ACID transaction properties and data consistency that RDBMSs are good at.
You need ad hoc querying using a language like SQL.
You have complicated relationships between the entities in your application.
Decoupling data from the application is important to you.

You might want to start considering NoSQL stores if:

Your data has grown so large that it can no longer be handled without partitioning.
Your RDBMS can no longer handle the load.
You need very high write performance and low-latency reads.
Your data is not very structured.
You cannot afford a single point of failure.
You can tolerate some data inconsistency.

The bottom line is that NoSQL stores are a new and complex technology. There are many choices and no standards. There are specific use cases for which NoSQL is a good fit, but an RDBMS does just fine for most vanilla use cases.

Reference: What is NoSQL ? from our JCG partner Manoj Khangaonkar at The Khangaonkar Report.

Related Articles:
Cassandra vs MongoDB vs CouchDB vs Redis vs Riak vs HBase comparison
SQL or NOSQL: That is the question?
Using MongoDB with Morphia
Java SE 7, 8, 9 – Moving Java Forward...

Understanding the Vertical Slice

One of the biggest challenges in breaking down backlogs is knowing how to split the work from a backlog into right-sized pieces. I've already talked about the idea that smaller is better, but we haven't really addressed the decision of how to actually divide a backlog up to make it smaller.

The default path

Most developers trying to break down a backlog into smaller chunks will automatically head down the path of using a "horizontal slice", because this is how we tend to think. What do I mean by a horizontal slice? A horizontal slice is basically a slice through the feature or backlog that divides up the architecture horizontally. Most things are built this way. If you were to build a house, you would probably start by slicing up the project horizontally: you would first pour the foundation, then put up the walls, then put on the roof, and so on through many more steps, leaving the finishing work for last. This same thinking usually gets applied to breaking up backlogs in Agile development. It would seem pretty silly to build a house by finishing one room completely at a time.

Agile software development is different

There is a distinct difference, though, between developing software in an Agile way and building a house. The big difference is that in Agile software development, true Agile development, you don't know exactly what you are going to build until you are done building it. With a house this is rarely the case. With a house, you have blueprints drawn up ahead of time; you know exactly where each wall and each outlet will be, and you may even have built very similar houses before. When building software, unless you are taking a waterfall approach and planning everything upfront, you don't know what you are really building until you are done. Before you object to this statement, consider this: it is the whole point of Agile development. Agile means responding to change. Building a house, you do not expect the customer to say:

"Hmm, yeah, I don't really like that wall there."
"Actually, I am thinking we are going to need 5 bedrooms now."

In software development, you are expecting statements analogous to the above!

So what is vertical slicing? Simply put, building one room at a time.

But it's not functional! Who wants a house one room at a time?!?

Correct! It is not functional as a house, but we can pour more foundation, change how we are going to do the rest of the rooms and even knock down the walls and start over without incurring a huge cost. The point of building our software "one room at a time" is that we are giving the customer a chance to see the product as it is being built, in a way that matters to them and enables them to test it out. Sure, they aren't going to be able to live in it until it is all done, but they will have the ability to step into a room and envision it with all their furniture in there. Customers don't care about foundations and framed-in walls. As a developer you might be able to look at a foundation and framed-in walls and envision what the house will look like, but the customer can't, and worse yet, it can't be tested. Vertical slicing in software development means taking a backlog that might have a database component, some business logic and a user interface, and breaking it down into small stepwise progressions where each step cuts through every layer.
The idea is that instead of breaking a backlog up into:

Implement the database layer for A, B and C
Implement the business logic layer for A, B and C
Implement the user interface for A, B and C

the backlog is broken up into something like:

Implement A from end to end
Implement B from end to end
Implement C from end to end

Sounds easy enough, so why the debate? Because it is NOT easy. I'm not going to lie to you: it is MUCH easier to slice up a backlog horizontally. As developers we tend to think about horizontal slicing when we plan out the implementation of a backlog; we tend to want to implement things by building one layer at a time. Thinking about how to break apart a backlog into vertical slices requires us to step outside our understanding of the code and implementation and instead think about the backlog in small pieces of working functionality. There is almost always some progression of functionality that can be found for a large backlog. What I mean by this is that there are almost always smaller steps, or evolutions in functionality, that can be created in order to produce an end result. Sometimes the steps required to break up a backlog vertically will result in a bit of waste. Sometimes you will purposely create a basic user interface that you know you are going to redo parts of as you implement more vertical slices. This is OK! It is better to plan small amounts of rework than to build up an entire feature one horizontal slice at a time and have to rework huge parts of the feature that weren't planned for.

So what is the benefit? You might be thinking to yourself that this sounds like more work without much benefit. So why would I bother to break up a backlog vertically? Is it really that important? I've already hinted at some of the benefits of slicing things vertically. The true impetus behind vertical slicing is the very cornerstone of Agile methodology: delivering working functionality as soon as possible. We aren't going to cover the whole reasoning behind this idea in Agile development; I am assuming that you already subscribe to the idea that delivering working functionality as soon as possible is important and valuable. Based on that premise alone, you can see that horizontal slicing is in direct violation of one of Agile methodology's core tenets. It is interesting to me how many people are huge proponents of breaking entire systems up into functional pieces that are delivered one piece at a time, yet are so opposed to doing the same at the micro scale when dealing with individual backlog items. If you are opposed to what I am saying about vertical slicing, you really have to ask yourself whether or not you truly subscribe to the same idea applied at the larger level, because there really isn't a difference.

Reference: Understanding the Vertical Slice from our JCG partner John Sonmez at the Making the Complex Simple blog.

Related Articles:
Breaking Down an Agile process Backlog
Even Backlogs Need Grooming
Can we replace requirement specification with better understanding?
Agile software development recommendations for users and new adopters
Save money from Agile Development
9 Tips on Surviving the Wild West Development Process
Not doing Code Reviews? What's your excuse?...

Regular Unit Tests and Stubs – Testing Techniques 4

My last blog was the third in a series on approaches to testing code, discussing what you do and don't have to test. It's based around my simple scenario of retrieving an address from a database using a very common pattern, and I proffered the idea that any class that doesn't contain any logic doesn't really need unit testing. In this I included my data access object (DAO), preferring instead to integration test that class to ensure it works in collaboration with the database. Today's blog covers writing a regular, or classical, unit test that enforces isolation of the test subject using stub objects. The code we'll be testing is, again, the AddressService:

@Component
public class AddressService {

    private static final Logger logger = LoggerFactory.getLogger(AddressService.class);

    private AddressDao addressDao;

    /**
     * Given an id, retrieve an address. Apply phony business rules.
     *
     * @param id
     *            The id of the address object.
     */
    public Address findAddress(int id) {

        logger.info("In Address Service with id: " + id);
        Address address = addressDao.findAddress(id);

        address = businessMethod(address);

        logger.info("Leaving Address Service with id: " + id);
        return address;
    }

    private Address businessMethod(Address address) {

        logger.info("in business method");

        // Apply the Special Case Pattern (See MartinFowler.com)
        if (isNull(address)) {
            address = Address.INVALID_ADDRESS;
        }

        // Do some jiggery-pokery here....

        return address;
    }

    private boolean isNull(Object obj) {
        return obj == null;
    }

    @Autowired
    @Qualifier("addressDao")
    void setAddressDao(AddressDao addressDao) {
        this.addressDao = addressDao;
    }
}

Michael Feathers' book Working Effectively with Legacy Code states that a test is not a unit test if:

It talks to a database.
It communicates across a network.
It touches the file system.
You have to do special things to your environment (such as editing configuration files) to run it.

To uphold these rules, you need to isolate your object under test from the rest of your system, and that's where stub objects come in. Stub objects are objects that are injected into your object and used to replace real objects in test situations. Martin Fowler defines stubs in his essay Mocks Aren't Stubs as: "Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what's programmed in for the test. Stubs may also record information about calls, such as an email gateway stub that remembers the messages it 'sent', or maybe only how many messages it 'sent'."

Picking a word to describe stubs is very difficult. I could choose dummy or fake, but there are types of replacement object that are known as dummies or fakes, also described by Martin Fowler:

Dummy objects are passed around but never actually used. Usually they are just used to fill parameter lists.
Fake objects actually have working implementations, but usually take some shortcut which makes them not suitable for production (an in-memory database is a good example).

However, I have seen other definitions of the term fake object; for example, Roy Osherove in his book The Art Of Unit Testing defines a fake object as:

A fake is a generic term that can be used to describe either a stub or a mock object... because they both look like the real object.

So I, like many others, tend to call all replacement objects either mocks or stubs, as there is a difference between the two, but more on that later.
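To make the distinction a little more concrete before we get to the real stub, here is a tiny illustrative sketch (these interfaces and classes are mine, not from the article): the dummy only fills a parameter slot and is never exercised, while the stub feeds the test a canned answer.

// Illustrative collaborator interfaces (not part of the article's code base).
public interface AuditLog {
    void record(String message);
}

public interface TaxRateLookup {
    double rateFor(String countryCode);
}

// A dummy: passed in to satisfy a constructor, but never actually used.
public class DummyAuditLog implements AuditLog {
    @Override
    public void record(String message) {
        // deliberately empty - the test never calls this
    }
}

// A stub: provides a canned answer that drives the behaviour under test.
public class StubTaxRateLookup implements TaxRateLookup {
    @Override
    public double rateFor(String countryCode) {
        return 0.2; // canned answer, regardless of the country code
    }
}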
In testing the AddressService, we need to replace the real data access object with a stub data access object, which in this case looks something like this:

public class StubAddressDao implements AddressDao {

    private final Address address;

    public StubAddressDao(Address address) {
        this.address = address;
    }

    /**
     * @see com.captaindebug.address.AddressDao#findAddress(int)
     */
    @Override
    public Address findAddress(int id) {
        return address;
    }
}

Note the simplicity of the stub code. It should be easily readable and maintainable, should NOT contain any logic, and should not need a unit test of its own. Once the stub code has been written, next follows the unit test:

public class ClassicAddressServiceWithStubTest {

    private AddressService instance;

    @Before
    public void setUp() throws Exception {
        /* Create the object to test */
        /* Setup data that's used by ALL tests in this class */
        instance = new AddressService();
    }

    /**
     * Test method for
     * {@link com.captaindebug.address.AddressService#findAddress(int)}.
     */
    @Test
    public void testFindAddressWithStub() {

        /* Setup the test data - stuff that's specific to this test */
        Address expectedAddress = new Address(1, "15 My Street", "My Town",
                "POSTCODE", "My Country");
        instance.setAddressDao(new StubAddressDao(expectedAddress));

        /* Run the test */
        Address result = instance.findAddress(1);

        /* Assert the results */
        assertEquals(expectedAddress.getId(), result.getId());
        assertEquals(expectedAddress.getStreet(), result.getStreet());
        assertEquals(expectedAddress.getTown(), result.getTown());
        assertEquals(expectedAddress.getPostCode(), result.getPostCode());
        assertEquals(expectedAddress.getCountry(), result.getCountry());
    }

    @After
    public void tearDown() {
        /*
         * Clear up to ensure all tests in the class are isolated from each
         * other.
         */
    }
}

Note that in writing a unit test, we're aiming for clarity. A mistake often made is to regard test code as inferior to production code, with the result that it's often messier and less legible. Roy Osherove, in The Art of Unit Testing, puts forward the idea that test code should be more readable than production code. Clear tests should follow these basic, linear steps:

Create the object under test. In the code above this is done in the setUp() method, as I'm using the same object under test for all (one) tests.
Set up the test. This is done in the test method testFindAddressWithStub(), as the data used in a test is specific to that test.
Run the test.
Tear down the test. This ensures that tests are isolated from each other and can be run IN ANY ORDER.

Using a simplistic stub yields two benefits: the AddressService is isolated from the outside world, and the tests run quickly. How brittle is this kind of test? If your requirements change then the test and the stub change; not so brittle after all? As a comparison, my next blog re-writes this test using EasyMock.

Reference: Regular Unit Tests and Stubs – Testing Techniques 4 from our JCG partner Roger Hughes at the Captain Debug blog.

Related Articles:
Testing Techniques – Not Writing Tests
The Misuse of End To End Tests – Testing Techniques 2
What Should you Unit Test? – Testing Techniques 3
Unit Testing Using Mocks – Testing Techniques 5
Creating Stubs for Legacy Code – Testing Techniques 6
More on Creating Stubs for Legacy Code – Testing Techniques 7
Why You Should Write Unit Tests – Testing Techniques 8
Some Definitions – Testing Techniques 9
Using FindBugs to produce substantially less buggy code
Developing and Testing in the Cloud...

Devoxx 2011 Impressions

Devoxx 2011 is over, and it was awesome. Finally, after having to spend the weekend with wife and kid (they hadn't seen much of me the last week), I've found the time to write down some stuff. It was my sixth Devoxx, the first one being in 2006, back when I was still a student and it was called Javapolis. So I have some tradition with Devoxx, which allows me to say that (at least for me) it was one of the best editions ever. It may have to do with the fact that I didn't have a talk this year (Frederik had that honour), so I could enjoy the conference to the fullest. That said, time for a wrap-up!

Java == boring?

From a first glance at the Devoxx schedule it was clear that Java was not the first-class citizen it used to be at Devoxx. Dynamic languages, Android and HTML5 were this year's favourites. And, in my opinion, the organisers have decided wisely. We're past the time when every new Spring, Hibernate or JBoss version drives cool and innovative features. EE6 and its features are well told and understood by now. UI frameworks have come and gone (bye bye, JSF!). If you think about it, what 'big hot item' has happened in the last year in Java land? ... Exactly. And that in itself is not a bad thing. Java is as mainstream as mainstream can get: solid, stable and here to stay for a long time (almost sounds like Cobol). Sure, there are plenty of opportunities and work to do if you're a Java developer, but we have to admit that we're not amongst the hottest of the hottest these days (and probably haven't been since the advent of Ruby on Rails). Which is why some polyglotism and peeking at other technologies can't hurt, if you ask me.

University days

I attended Devoxx during both the Uni and the Conference days. Personally, I found the Conference sessions wildly more interesting than the Uni talks. Three-hour talks are just not my cup of tea: there is too much technical depth and information in them, and I'd rather read a book at my own pace than frantically try to follow every slide for three hours. Lessons learned for next year. One exception: I really liked the Android Jumpstart talk by Lars Vogel. I have some iOS development experience, and seeing how straightforward Android development is was interesting to say the least.

Keynote: next year not Oracle, pretty please?

I think everybody wholeheartedly agreed that the keynote(s) on Wednesday by Oracle were boring as hell. I had high expectations when the first Oracle guy started with the obligatory Oracle disclaimer but invited the audience to find the spelling mistakes he had added on purpose. But it was downhill from there. Nothing new in the presentations, slides obviously originating from sales and marketing, frantically avoiding the word 'Android' (J2ME will be an important focus in the future... uuuh, yeah, right!). I mean, these guys get a gold-plated chance to get on the stage and show the tech audience that they rock as the new steward of Java. Anyway, enough words written on this topic; the fact that I saw several people sleeping in their seats speaks for itself.

Keynote: but Google can certainly return!

The Android keynote by Tim Bray on Thursday was luckily of a different kind. A great speaker (I've been following his blog since I discovered Ruby on Rails years ago), humorous, a new feature with examples here and there (I'm an iPhone guy, but hell, Android sure has some sweet stuff up its sleeves!) and a real call to action at the end (it got the whole room very, very silent).
I was in two minds about getting up early again for the keynote after the debacle of the day before, but it sure was worth it. On a side note: Stephan's keynote (this year announcing Devoxx France!) was as good as ever. He can definitely return next year.

Sessions: too much choice!

The hardest part about Devoxx is the choice. With seven talks going on in parallel, one has to choose, and sometimes I had three sessions picked at the same time... damn! Luckily, all talks should be on Parleys by Christmas (and every Devoxx attendee gets access to them!). I saw plenty of sessions during Devoxx, but some of them really stood out:

Activiti + Vaadin, a match made in heaven

No surprise here. My fellow Activist Frederik did a great co-op presentation with the Vaadin guys. Both Activiti and Vaadin are of course enormously cool frameworks, so the combination surely produces some fireworks. There were some nice (new) features in the talk too, which they kept secret until the talk (like the annotations to match a form with a Vaadin view). Someone in the audience taped the whole talk and has put it on YouTube. Enjoy!

Java: The Good, the Bad, and the Ugly Parts by Joshua Bloch

As Joshua Bloch's sessions tend to be the most popular ones (last year I couldn't get in), I made sure to arrive in good time. The content of the talk was an overview of basically JDK 1.0, what was in there that helped Java get to where it is today, and what might have hindered it. As you would expect from him, some nice puzzlers were thrown in between. One of the best speakers out there, and he really knows his stuff.

PhoneGap for Hybrid App Development by Brian LeRoux

I only knew the idea behind PhoneGap, so this session was excellent for soaking up some information. Brian is an awesome speaker, with a typical start-up attitude and the obligatory cursing and quotes to remember. Although his demos (except one) didn't work due to his Mac not seeing his Android, followed by cursing adb and all that is holy, I understood the goal and value of PhoneGap. I was really impressed by PhoneGap Build, a cloud environment where you upload your HTML and JavaScript and it gets compiled for every platform you wish (even BlackBerry). That demo went well, and after uploading his example HTML he could just scan the QR code on his screen and the app got installed on his phone. Sweet! People who work with me in real life know that I'm a fan of blaming others for yak shaving. Brian's slide on 'serious business time' couldn't have said it better.

Meh! It's only cross site scripting, what's the big deal? by Cambell Murray

I had expected a technical talk, but instead we got juicy examples and stories about customers (anonymous of course) and the security bugs that led to huge problems. A really great speaker; it was as if he was just at a bar, talking over a beer. It was a clear eye-opener that there is a world outside our comfortable Java code where hackers and criminal entities are extremely inventive and do stuff that we as 'normal' developers would never have thought of.

Rules for Good UI Design by Joe Nuxoll

This session was placed in one of the smaller rooms, which quickly proved to be a mistake. In no time the room was completely packed and people sat everywhere they could. This is really a good sign that developers are understanding the need for UI design. Joe is a UI designer for the Tesla S interface who used to work for Apple (which might have helped the session's popularity), and the examples were of course chosen from that background.
The first part of the talk was a tad boring (but I had already read quite a bit on the topic); the second part was full of examples and tips. The audience size definitely proves that Java developers are evolving too.

WWW: World Wide Wait? A Performance Comparison of Java Web Frameworks by Stijn Van den Enden (and others)

I know Stijn personally, so choosing his session was a no-brainer. Again, the topic proved to lure a lot of people, and the room sold out very fast. The talk took Matt Raible's session at last year's Devoxx as its starting point, but added real numbers and, more importantly, real math to the table. Five frameworks (GWT, Wicket, JSF with two implementations, and Vaadin) were put to the performance test. The performance architecture was thoroughly explained, as were the measurement calculations/validations and potential pitfalls. Very professionally presented. The big winner of this test was GWT, albeit needing the most coding compared to the others. My framework of choice, Vaadin, did very well, and in a chat with Stijn later on he also said that the rapid development Vaadin offers makes it a framework to consider. JSF was the big loser of the day (at least the MyFaces implementation; Mojarra was better) and proved to scale much, much worse than the others. I had a French (or was it Spanish) project lead sitting next to me who had expected validation for 'choosing the standard'. His sighs and painful yelps with every new graph shown on screen almost made me feel compassion.

The networking

No, I'm not talking about the WiFi (which was horrible), but rather about meeting people. As in every past edition, Devoxx is a great way to meet up with many people from the Java ecosystem. I met a lot of old friends and ex-colleagues, and met some new people too. That alone is worth the trip to Antwerp.

Devoxx 2012

Devoxx 2011 was great in every way, and for me it is still the best conference out there – period. Looking forward to next year!

Reference: Devoxx 2011 Impressions from our JCG partner Joram Barrez at the "Small Steps with Big Feet" Blog.

Related Articles:
Devoxx Day 1
DOAG 2011 vs. Devoxx – Value and Attraction
Java SE 7, 8, 9 – Moving Java Forward
Java EE Past, Present, & Cloud 7
Java Tutorials and Android Tutorials list...

Best Of The Week – 2011 – W47

Hello guys,

Time for the "Best Of The Week" links for the week that just passed. Here are some links that drew Java Code Geeks' attention:

* Monitor and diagnose Java applications: An amazing collection of developerWorks articles on monitoring and diagnosing Java applications using both tools available in the JDK and third-party software. Also check out Monitoring OpenJDK from the CLI.

* Analysis Of Web API Versioning Options: An analysis of the various strategies for versioning Web APIs in the cloud. The fundamental principle is that you shouldn't break existing clients, because you don't know what they implement, and you don't control them.

* Java 7 Working With Directories: DirectoryStream, Filter and PathMatcher: This tutorial presents some new functionality in Java 7 regarding directories. With DirectoryStream, Filter and PathMatcher, developers are now able to easily perform complex directory-related operations. Also check out Manipulating Files in Java 7 and Java 7 Feature Overview.

* Why you have less than a second to deliver exceptional performance: This article discusses the performance of web sites (in terms of response times), highlights the importance of having exceptional performance and explains why it is quite difficult to achieve.

* Why do so many technical recruiters suck?: A real-life story indicating why today's technical recruiters are probably not the way to find good developers.

* Setting Up Measurement of Garbage Collection in Apache Tomcat: In this article, garbage collection measurement is set up for Apache Tomcat. First, some performance tuning basics are discussed, and then the author shows how to measure GC performance in Tomcat (and any JVM for that matter). Also check out Change Without Redeploying With Eclipse And Tomcat and Zero-downtime Deployment (and Rollback) in Tomcat.

* Continuously Improving as a Developer: Personally, one of the best articles on the subject of becoming a better developer. It includes not only the typical advice (read code, read more books, write code, use social media, etc.), but also more subtle things like working out to increase productivity and leveraging the Pareto rule (the 80/20 principle) when learning a new technology.

* A really simple but powerful rule engine: A great tutorial on how to create a simple and lightweight custom rule engine. First, some rule engine use cases are presented, and then the implementation of the engine follows, based on some general requirements.

* Is Implementing Continuous Delivery the Key to Success?: In this article, the author discusses Continuous Delivery and how it can affect product delivery and the overall software architecture. The goal is to create an efficient delivery mechanism that will allow developers to collect useful feedback in a structured and continual way.

* Android SDK: Using the Text to Speech Engine: This tutorial shows you how to use the embedded text-to-speech (TTS) engine that is provided with the Android SDK. Also check out Android Text-To-Speech Application.

* Using Gossip Protocols for Failure Detection, Monitoring, Messaging and Other Good Things: This article provides a soft introduction to gossip protocols, which offer a decentralized way to manage large clusters. Gossip protocols, which maintain relaxed consistency requirements amongst a very large group of nodes, may be used for failure detection, monitoring, as a form of messaging, etc.

That's all for this week. Stay tuned for more, here at Java Code Geeks.
Cheers,
Ilias

Related Articles:
Best Of The Week – 2011 – W46
Best Of The Week – 2011 – W45
Best Of The Week – 2011 – W44
Best Of The Week – 2011 – W43
Best Of The Week – 2011 – W42
Best Of The Week – 2011 – W41
Best Of The Week – 2011 – W40
Best Of The Week – 2011 – W39
Best Of The Week – 2011 – W38
Best Of The Week – 2011 – W37...

What Should you Unit Test? – Testing Techniques 3

I was in the office yesterday, talking about testing to one of my colleagues who was a little unconvinced by writing unit tests. One of his reasons was that some tests seem meaningless, which brings me on to the subject of what exactly you should unit test and what you don't need to bother with. Consider the simple immutable Name bean below, with a constructor and a bunch of getters. In this example I'm going to let the code speak for itself, as I hope it's obvious that any testing would be pointless.

public class Name {

    private final String firstName;
    private final String middleName;
    private final String surname;

    public Name(String christianName, String middleName, String surname) {
        this.firstName = christianName;
        this.middleName = middleName;
        this.surname = surname;
    }

    public String getFirstName() {
        return firstName;
    }

    public String getMiddleName() {
        return middleName;
    }

    public String getSurname() {
        return surname;
    }
}

...and just to underline the point, here is the pointless test code:

public class NameTest {

    private Name instance;

    @Before
    public void setUp() {
        instance = new Name("John", "Stephen", "Smith");
    }

    @Test
    public void testGetFirstName() {
        String result = instance.getFirstName();
        assertEquals("John", result);
    }

    @Test
    public void testGetMiddleName() {
        String result = instance.getMiddleName();
        assertEquals("Stephen", result);
    }

    @Test
    public void testGetSurname() {
        String result = instance.getSurname();
        assertEquals("Smith", result);
    }
}

The reason it's pointless to test this class is that the code doesn't contain any logic; however, the moment you add something like this to the Name class:

public String getFullName() {

    if (isValidString(firstName) && isValidString(middleName)
            && isValidString(surname)) {
        return firstName + " " + middleName + " " + surname;
    } else {
        throw new RuntimeException("Invalid Name Values");
    }
}

private boolean isValidString(String str) {
    return isNotNull(str) && str.length() > 0;
}

private boolean isNotNull(Object obj) {
    return obj != null;
}

...then the whole situation changes.
Adding some logic in the form of an if statement generates a whole bunch of tests:

@Test
public void testGetFullName_with_valid_input() {

    instance = new Name("John", "Stephen", "Smith");
    final String expected = "John Stephen Smith";

    String result = instance.getFullName();
    assertEquals(expected, result);
}

@Test(expected = RuntimeException.class)
public void testGetFullName_with_null_firstName() {

    instance = new Name(null, "Stephen", "Smith");
    instance.getFullName();
}

@Test(expected = RuntimeException.class)
public void testGetFullName_with_null_middleName() {

    instance = new Name("John", null, "Smith");
    instance.getFullName();
}

@Test(expected = RuntimeException.class)
public void testGetFullName_with_null_surname() {

    instance = new Name("John", "Stephen", null);
    instance.getFullName();
}

@Test(expected = RuntimeException.class)
public void testGetFullName_with_no_firstName() {

    instance = new Name("", "Stephen", "Smith");
    instance.getFullName();
}

@Test(expected = RuntimeException.class)
public void testGetFullName_with_no_middleName() {

    instance = new Name("John", "", "Smith");
    instance.getFullName();
}

@Test(expected = RuntimeException.class)
public void testGetFullName_with_no_surname() {

    instance = new Name("John", "Stephen", "");
    instance.getFullName();
}

So, I've just said that you shouldn't need to test objects that do not contain any logic statements, and in 'logic statements' I'd include if and switch, together with all the operators (+, -, *, and so on) and anything else that can change an object's state. Given this premise, I'd suggest that it's pointless writing a unit test for the address data access object (DAO) in the Address project I've been talking about in my last couple of blogs. The DAO is defined by the AddressDao interface and implemented by the JdbcAddress class:

public class JdbcAddress extends JdbcDaoSupport implements AddressDao {

    /**
     * This is an instance of the query object that'll sort out the results of
     * the SQL and produce whatever value objects are required.
     */
    private MyQueryClass query;

    /** This is the SQL with which to run this DAO */
    private static final String sql = "select * from addresses where id = ?";

    /**
     * A class that does the mapping of row data into a value object.
     */
    class MyQueryClass extends MappingSqlQuery<Address> {

        public MyQueryClass(DataSource dataSource, String sql) {
            super(dataSource, sql);
            this.declareParameter(new SqlParameter(Types.INTEGER));
        }

        /**
         * This is the implementation of the MappingSqlQuery abstract method.
         * This method creates and returns an instance of our value object
         * associated with the table / select statement.
         *
         * @param rs
         *            This is the current ResultSet
         * @param rowNum
         *            The rowNum
         * @throws SQLException
         *             This is taken care of by the Spring stuff...
         */
        @Override
        protected Address mapRow(ResultSet rs, int rowNum) throws SQLException {

            return new Address(rs.getInt("id"), rs.getString("street"),
                    rs.getString("town"), rs.getString("post_code"),
                    rs.getString("country"));
        }
    }

    /**
     * Override the JdbcDaoSupport method of this name, calling the super class
     * so that things get set up correctly, and then create the inner query
     * class.
     */
    @Override
    protected void initDao() throws Exception {
        super.initDao();
        query = new MyQueryClass(getDataSource(), sql);
    }

    /**
     * Return an address object based upon its id.
     */
    @Override
    public Address findAddress(int id) {
        return query.findObject(id);
    }
}

In the code above, the only method in the interface is:

@Override
public Address findAddress(int id) {
    return query.findObject(id);
}

...which is really a simple getter method. This seems okay to me, as there really should not be any business logic in a DAO; that belongs in the AddressService, which should have a plentiful supply of unit tests. You may want to make a decision on whether or not to write unit tests for MyQueryClass. To me this is a borderline case, so I look forward to any comments... I'm guessing that someone will disagree with this approach and say you should test the JdbcAddress object, and that's true: I'd personally write an integration test for it to make sure that the database I'm using is okay, that it understands my SQL and that the two entities (DAO and database) can talk to each other, but I won't bother unit testing it.
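Such an integration test might look like the sketch below. This is my illustration rather than the author's code: the Spring context file name and the assumption that the test database contains a known row with id 1 are both mine.

// Standard org.junit, org.springframework.test.context and
// static org.junit.Assert imports omitted for brevity, as in the listings above.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {"classpath:integrationContext.xml"}) // assumed file name
public class JdbcAddressIntegrationTest {

    @Autowired
    private AddressDao addressDao;

    @Test
    public void findAddressShouldMapARowFromTheRealDatabase() {
        // Assumes the test schema has been loaded with a known row of id 1.
        Address result = addressDao.findAddress(1);

        assertNotNull(result);
        assertEquals(1, result.getId());
    }
}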
To conclude, unit tests must be meaningful, and a good definition of 'meaningful' is that the object under test must contain some independent logic.

Reference: What Should you Unit Test? – Testing Techniques 3 from our JCG partner Roger Hughes at the Captain Debug blog.

Related Articles:
Testing Techniques – Not Writing Tests
The Misuse of End To End Tests – Testing Techniques 2
Regular Unit Tests and Stubs – Testing Techniques 4
Unit Testing Using Mocks – Testing Techniques 5
Creating Stubs for Legacy Code – Testing Techniques 6
More on Creating Stubs for Legacy Code – Testing Techniques 7
Why You Should Write Unit Tests – Testing Techniques 8
Some Definitions – Testing Techniques 9
Using FindBugs to produce substantially less buggy code
Developing and Testing in the Cloud...

How to Fail With Drools or Any Other Tool/Framework/Library

What I like most at conferences are reports of someone's failure to do or implement something, for they're the best sources of learning. And How to Fail with Drools (in Norwegian) by C. Dannevig of Know IT at JavaZone 2011 is one of them. I'd like to summarize what they learned and extend it to the introduction of a tool, framework, or library in general, based on my own painful experiences.

They decided to switch to the Drools rule management system (a.k.a. JBoss Rules) v.4 from their homegrown rules implementation to centralize all the rules code in one place, to get something simpler and easier to understand, and to improve the time to market by not requiring a redeploy when a rule is added. However, Drools turned out to be more of a burden than a help, for the following reasons:

Too little time and resources were provided for learning Drools, which has a rather steep learning curve due to being based on declarative programming and rules matching (some background), which is quite alien to the typical imperative/OO programmer.
Drools' poor support for development and operations: an IDE only for Eclipse, difficult debugging, no stack trace upon failure.
Their domain model was not well aligned with Drools and required a lot of effort to make it usable by the rules.
The users were used to and satisfied with the current system and wanted to keep the parts facing them, such as the rules management UI, instead of Drools' own UI, thus decreasing the value of the software (while increasing the overall complexity, we could add).

In the end they removed Drools and refactored their code to get all the rules into one place, using only plain old Java, which works pretty well for them.
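For what it's worth, here is a minimal sketch of what "all the rules in one place, using only plain old Java" can look like; the Rule, Order and DiscountRules names and the discount logic are invented for illustration and are not taken from the Know IT code base:

import java.util.Arrays;
import java.util.List;

public class DiscountRules {

    /** A rule is just a condition plus an action, kept in code next to all the other rules. */
    public interface Rule {
        boolean matches(Order order);
        void apply(Order order);
    }

    /** Minimal stand-in for the domain object the rules work on (hypothetical). */
    public static class Order {
        private final double total;
        private final boolean firstOrder;
        private double discountPercent;

        public Order(double total, boolean firstOrder) {
            this.total = total;
            this.firstOrder = firstOrder;
        }
        public double getTotal() { return total; }
        public boolean isFirstOrder() { return firstOrder; }
        public void addDiscountPercent(double percent) { discountPercent += percent; }
        public double getDiscountPercent() { return discountPercent; }
    }

    // All the pricing rules live in this one list; adding a rule is an ordinary, unit-testable code change.
    private final List<Rule> rules = Arrays.<Rule>asList(
            new Rule() {
                public boolean matches(Order order) { return order.getTotal() > 1000; }
                public void apply(Order order) { order.addDiscountPercent(5); }
            },
            new Rule() {
                public boolean matches(Order order) { return order.isFirstOrder(); }
                public void apply(Order order) { order.addDiscountPercent(10); }
            });

    /** Run every rule whose condition holds against the given order. */
    public void applyTo(Order order) {
        for (Rule rule : rules) {
            if (rule.matches(order)) {
                rule.apply(order);
            }
        }
    }
}

The trade-off is the one the talk describes: you give up changing rules at runtime without a redeploy, but you keep plain debugging, stack traces and ordinary unit tests.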
Lessons Learned from Introducing Tools, Frameworks, and Libraries

While the Know IT team encountered some issues specific to Drools, their experience has a lot in common with many other cases where a tool, a framework, or a library is introduced to solve some tasks and problems but turns out to be more of a problem itself. What can we learn from these failures to deliver the expected benefits for the expected cost? (Actually, such initiatives will often be labelled a success even though the benefits are smaller and the cost (often considerably) larger than planned.)

Always think twice, or three or four times, before introducing a [heavyweight] tool or framework, especially if it requires a new and radically different way of thinking or working. Couldn't you solve it in a simpler way with plain old Java/Groovy/WhateverYouGot? Using an out-of-the-box solution sounds very *easy*, especially at sales meetings, but it is in fact usually pretty *complex*. And as Rich Hickey recently so well explained in his talk, we should strive to minimize complexity instead of prioritizing the relative and misleading easiness (in the sense of "easy to approach, to understand, to use"). I'm certain that many of us have experienced how an "I'll do it all for you, be happy and relax" tool turns into a major obstacle and source of pain; at least I have experienced that with Websphere ESB 6.0. (It required heavy tooling that only a few mastered, was in reality a version 1.0, and a lot of the promised functionality had to be implemented manually anyway, etc.)

We should never forget that introducing a new library, framework or tool has its cost, which we usually tend to underestimate. The cost has multiple dimensions:

Complexity – complexity is the single worst thing in IT projects; are you sure that increasing it will pay off? Complexity of infrastructure, of internal structure, …
Competence – the learning curve (which proved to be pretty high for Drools), how many people know it, and the availability of experts who can help in case of trouble
Development – does the tool somehow hinder development, testing or debugging, for example by making it slower or more difficult, or by requiring special tooling (especially if it isn't available)? (Think of J2EE vs. Spring)
Operations – what's the impact on observability of the application in production (high for Drools if it doesn't provide stack traces for failures), on troubleshooting, performance, the deployment process, …?
Defects and limitations – every tool has them, even seemingly mature ones (they were already on version 4 of Drools); you usually run into limitations quite late, it's difficult if not impossible to discover them up front, and it's hard to estimate how flexible the authors have made it (it's especially bad if the solution is closed source)
Longevity – will the tool be around in 1, 5, 10 years? What about backwards compatibility and support for migration to higher versions? (The company I worked for decided to stop supporting Websphere ESB in its infrastructure after one year and we had to migrate away from it – what wasted resources!)
Dependencies – what dependencies does it have, and don't they conflict with something else in the application or its environment? How will it be in 10 years?

And I'm sure I missed some dimensions. So be aware that the actual cost of using something is likely a few times higher than your initial estimate.

Another lesson is that support for development is a key characteristic of any tool, framework, or library. Any slowdown it introduces must be multiplied at least by 10^6, because all those slowdowns, spread over the team and the lifetime of the project, add up to a lot. I experienced that too many times: a framework that required a redeploy after every other change, an application which required us to manually click through a wizard to reach the page we wanted to test, slow execution of tests by an IDE.

The last thing to bear in mind is that you should be aware of whether a tool and the design behind it are well aligned with your business domain and processes (including the development process itself). If there are mismatches, you will need to pay for them; just think about OOP versus RDBMS (don't you know somebody who starts to shudder upon hearing "ORM"?).

Conclusion

Be aware that everything has its cost and make sure to account for it, and beware our tendency to be overly optimistic when estimating both benefits and cost (perhaps hire a seasoned pessimist or appoint a devil's advocate). Always consider first using the tools you already have, however boring that might sound. I don't mean that we should never introduce new stuff; I just want to make you more cautious about it. I've recently followed a few discussions on how "enterprise" applications get unnecessarily, and to their own harm, bloated with libraries and frameworks, and I agree that we should be more careful and try to keep things simple. The tool cost dimensions above may hopefully help you expose the less obvious constituents of the cost of new tools.

Reference: How to Fail With Drools or Any Other Tool/Framework/Library from our JCG partner Jakub Holy at the "Holy Java" Blog.

Related Articles:
Are frameworks making developers dumb?
Open Source Java Libraries and Frameworks – Benefits and Dangers
Those evil frameworks and their complexity
Java Tutorials and Android Tutorials list...