Fix That Code Immediately!

You are working on a fresh project and you see a bad piece of code somewhere. The wrong way to approach it is "nah, that's someone else's code, I'm not doing anything about it", "I don't have time to fix that – I have other tasks", or "I'll surely break things if I change this". The problem is that bad code accumulates. Even if the piece is a small one, it adds up over time and soon enough you have that "big legacy project that was written by some incompetent guys and no one wants to support". Someone once said that in six months all projects are "legacy", because they have a lot of accumulated bad code – or in other words, technical debt.

That's why you should fix it immediately. When you see some piece of crap, or something that's not exactly a great practice – fix it. Now. Or it will be too late, because other code will start depending on it, new code will follow the same practice (sometimes with copy/paste), and fixing it will be a nightmare. Let's address the wrong statements above:

- "nah, that's someone else's code, I'm not doing anything about it" – so what? You are on that project; you have the "right" to modify it. If the other person has written their code in a bad way, they may not even know it's bad – so they won't fix it. And I don't think they will get offended if you fix it. They might, but that's not your problem.
- "I don't have time to fix that – I have other tasks" – this is a task as well. You can raise an issue/ticket in your issue tracker that says "refactor X", and then log hours there. You can delay it until the next sprint (if agile). Problems with management insisting on making new things rather than fixing the old ones? Tell them to go read "Refactoring"… or Spolsky… or this blog. (It won't help, but anyway.)
- "I'll surely break things if I change this" – possibly, yes. Hm, wait, you have unit tests, right? And integration tests, and build-acceptance tests? If not – go ahead and fix that first. Then you won't be so afraid of breaking things.

Code reviews are also important in regard to this problem. If all code that gets committed is code-reviewed, the chance that a bad piece will go in unnoticed decreases. It still happens, but more rarely. The only problem with that approach is: how can you be certain a piece of code is bad? Well, here comes experience, knowledge of best practices and patterns. I can't give you a recipe for that. But you should have a couple of people in your team who are capable of identifying bad code. If there are none – get Code Complete (and Effective Java, if your language is Java).

So – fix that code immediately. It saves time and headaches, and makes you a bit more proud of the project, rather than saying "that's some piece of crap that some incompetent folks wrote, I was just doing some tasks on the side". Because you can't say that – if the project is crap, it's your fault as well.

Important notes:

- You should not change something just because you think it's bad. Show it to your peers and technical leads. If it is something more than a couple of lines – discuss it more extensively and make a story about it. But do that as soon as possible.
- The advice is not about code that is complex and hard to read because the problem is complex. if (specialCaseX) { //do magic } is probably there because of some complex business requirement. If you want to improve things, research that and add a comment.

Reference: Fix That Code Immediately! from our JCG partner Bozhidar Bozhanov at the Bozho's tech blog.

Some Definitions – Testing Techniques 9

I think that I’m coming to the end of my series of blogs on testing techniques, and it feels like it’s been along haul. One of the things that has become clearer to me is that approaches to testing are still in their infancy and as such are a definite source of contention or discussion amongst developers – and that’s a good thing. I suspect that we are at a point in the history of our profession where the discipline of writing tests is only just gaining ground and will one day be common place and taught as part of elementary programming classes (1). Today’s blog provides a summary of the of the terms used in my previous blogs in this series. Unit Tests I covered the definition of a unit test in part two of this series of blogs, giving the views of Shane Warden, who in his book The Art of Agile Development states that unit tests should run at a rate of “hundreds per second”; Michael Feathers, who in his book Working Effectively with Legacy Code states that a unit test is not a unit test if:It talks to a database. It communicates across a network. It touches the file system. You have to do special things to your environment (such as editing configuration files) to run it.I also quoted Roy Osherove, who in his book The Art Of Unit Testing sums up a good unit test as: “an automated piece of code that invokes the method or class being tested and then checks some assumptions about the logical behaviour of that method or class. A unit test is almost always written using a unit testing framework. It can be written easily and runs quickly. It’s fully automated, trustworthy, readable and maintainable”. Unit tests can be summed up using the FIRST acronym: Fast, Independent, Repeatable, Self Validating and Timely. When to Use Unit Tests Unit tests are the bread and butter of testing. If you employ Test Driven Development, TDD, then you write failing tests before you write your production code. If you don’t use TDD, then at least write your tests at the same time as your production code i.e. write a method and then write its tests. This technique doesn’t involve the paradigm shift that accompanies TDD, but it’s much better than writing tests after you’ve written all the code, which is usually considered tedious by developers. There should be one test for every scenario, which translated into plain English means every path through your code: both sides of every if statement and every case of a switch statement. In short, every project should have hundreds of Unit Tests and you should have the confidence that if you change a part of the code then you won’t break something. Stubs Stubs are used to isolate an object under test from the rest of the system. They are objects that are injected into your object to replace real objects in test situations. Martin Fowler defines stubs, in his essay Mocks Aren’t Stubs as: “Stubs provide canned answers to calls made during the test, usually not responding at all to anything outside what’s programmed in for the test. Stubs may also record information about calls, such as an email gateway stub that remembers the messages it ‘sent’, or maybe only how many messages it ‘sent’” …whilst a similar definition, taken from The Art of Unit Testing is: “A stub is a controllable replacement for an existing dependency (or collaborator) in the system. By using stub, you can test your code without using the dependency directly.” Mocks Mocks are replacement objects used to imitate, or mock, the behaviour or roles of production objects. 
Mocks

Mocks are replacement objects used to imitate, or mock, the behaviour or roles of production objects. This really means checking that an object under test calls methods on a mock object as expected, with the test failing if it doesn't; hence, you're asserting on the correctness of method calls and the execution path through your code rather than, as in the case of a regular unit test, on the return value of the method under test.

Integration Tests

Integration tests are the opposite of unit tests. The idea behind integration tests is to prove that your objects collaborate with each other and the system around them. To paraphrase Michael Feathers, integration tests CAN:

- talk to a database.
- communicate across a network.
- touch the file system.
- require you to do special things to your environment (such as editing configuration files) to run them.

Roy Osherove in The Art of Unit Testing states that an "integration test means testing two or more dependent software modules together as a group". For me this constrains the definition a little too much; after all, in testing objects in a single module you can access a database or file system whilst figuring out whether your objects can collaborate. In projects I've previously worked on, there's usually been a module specifically written to hold integration tests. This is because integration tests are less numerous than unit tests (perhaps at a ratio of 1:10) and, by virtue of the fact that they access their environment, are usually substantially slower. Corralling all integration tests into their own Maven module therefore means that they don't have to run every time you build a module, speeding up build and development time.

End to End Integration Tests

I've covered end to end tests in detail in the second blog in this series, so to summarise: they can be defined as a special case of integration tests in that the test commences at, or just behind, a system boundary and executes through all layers of the system. Where the system boundary is, or what counts as just behind it, is a matter for debate. In terms of a Spring MVC app, there's no reason why an end to end test shouldn't start at your controller code, ignoring the browser and dispatcher servlet. After all, I suspect that the guys at Spring have tested their code thoroughly, so why should you waste time testing it? Also, testing what the front end looks like is a whole different kettle of fish.

(1) I often suspect that testing techniques are not actually taught in colleges and universities – but I've no evidence of this. If there are any academics out there who can tell me that unit testing is taught, encouraged and an integral part of their Computer Science degree courses then I'd be happy to hear from them.

Reference: Some Definitions – Testing Techniques 9 from our JCG partner Roger Hughes at the Captain Debug's Blog.

Devops has made Release and Deployment Cool

Back 10 years or so when Extreme Programming came out, it began to change the way that programmers thought about testing. XP made software developers accountable for testing their own code. XPers gave programmers practices like Test-First Development and simple, free community-based automated testing tools like xUnit and FIT and Fitnesse. XP made testing cool. Programmers started to care about how to write good automated tests, about achieving high levels of test code coverage, and about optimizing feedback loops from testing and continuous integration. Instead of throwing code over the wall to a test team, programmers began to take responsibility for reviewing and testing their own code and making sure that it really worked. It's taken some time, but most of these ideas have gone mainstream, and the impact has been positive for software development and software quality.

Now Devops is doing the same thing with release and deployment. People are finding new ways to make it simpler and easier to release and deploy software, using better tools and getting developers and ops staff together to do this. And this is a good thing, because release and deployment, maybe even more than testing, has been neglected by developers. It's left to the end because it's the last thing that you have to do – on some big, serial life cycle projects, you can spend months designing and developing something before you get around to releasing the code.

Release and deployment is hard – it involves all sorts of finicky technical details and checks. To do it you have to understand how all the pieces of the system are laid out, and you need to understand the technology platform and operational environment for the system: how Ops needs the system to be set up and monitored, how they need it wired in, what tools they use and how these tools work. You have to work through operational dependencies, and compliance and governance requirements. You have to talk to different people in a different language, and learn and care about their wants and needs and pain points. It's hard to get all of this right, it's hard to test it, and you're under pressure to get the system out. Why not just give Ops the JARs and EARs and WARs and ZIPs (and your phone number in case anything goes wrong) and let them figure it out? We're back to throwing the code over a wall again.

Devops, by getting Developers and Operations staff working together, sharing technology and solving problems together, is changing this. It's making developers, and development managers like me, pay more attention to release and deployment (and post-deployment) requirements – not just getting it done. It gets developers and QA and Operations staff to think together about how to make release, deployment and configuration simpler and faster, and about what could go wrong – and then to make sure that it doesn't go wrong, for every release, not just when there is a problem or when Ops complains. Replacing checklists and instructions with automated steps. Adding post-release health checks. Building on Continuous Integration towards Continuous Delivery, making it easier, safer and less expensive to release to test as well as to production. This is all practical, concrete work, and a natural next step for teams that are trying to design and build software faster.
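To give a flavour of what replacing checklists with automated steps can look like, here is a minimal sketch of a post-release health check, assuming the deployed application exposes a hypothetical /health endpoint; the URL and class are invented purely for illustration:

import java.net.HttpURLConnection;
import java.net.URL;

// A minimal post-release smoke check: fail the release job fast if the
// freshly deployed application does not answer its (assumed) health URL.
public class PostReleaseHealthCheck {

  public static void main(String[] args) throws Exception {
    URL health = new URL("http://myapp.example.com/health"); // hypothetical endpoint
    HttpURLConnection conn = (HttpURLConnection) health.openConnection();
    conn.setConnectTimeout(5000);
    conn.setReadTimeout(5000);

    int status = conn.getResponseCode();
    if (status != 200) {
      System.err.println("Health check failed: HTTP " + status);
      System.exit(1); // a non-zero exit code fails the deployment pipeline
    }
    System.out.println("Deployment looks healthy");
  }
}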
One difference between the XP and Devops stories is that there's a lot more vendor support in Devops than there was in the early days of Agile development. Commercial vendors behind products like Chef and Puppet and UrbanCode (which has rebranded its Anthill Pro build and release toolset as the DevOps Platform) and ThoughtWorks Studios with Go, and even IBM and HP, are involved in Devops and pushing Devops ideas forward. This is good and bad. Good, because it means that there are tools that people can use and people who can help them understand how to use them, and there's somebody to help sponsor the conferences and other events that bring people together to explore Devops problems. Bad, because in order to understand and appreciate what's going on and what's really useful in Devops you have to wade through a growing amount of marketing noise. It's too soon to say whether the real thought leaders and evangelists will be drowned out by vendor product managers and consultants – much like the problem that the Agile development community faces today.

Reference: Devops has made Release and Deployment Cool from our JCG partner Jim Bird at the Building Real Software blog.

Scala’s version fragility make the Enterprise argument near impossible

I have been working with Scala for more than five years. In those five years, I've seen Scala evolve and seen the ecosystem and community evolve. An attribute of Scala is that the Scala compiler generates fragile byte-code. This means that all the code in an executable (JAR or WAR) must be compiled with the same library and compiler versions. If you're using Scala 2.9.1 in your project, you must compile against a version of Lift that's compiled against Scala 2.9.1, and all the Lift dependencies must also be compiled against that version of Scala. This is a reason that Lift has precious few Scala library dependencies. It's also a reason that Lift is sprawling… there are a lot of modules in Lift that need to be cross-compiled, and rather than having an external ecosystem of modules, we need to make them part of Lift to make sure they are all compiled against all the versions of Scala that Lift supports.

The by-product of this issue is that it's nearly impossible to test Lift or a Lift-based app against a pre-release of Scala. This is one of the many reasons that there's almost always a "critical bug fix" release of Scala a few weeks after any major release: the first chance that the broader community gets to use the ecosystem of Scala libraries and frameworks is after a release, once the cascading compile/test/release cycle has run its course.

A year+ back, during the 2.8 development cycle, some of us got together on the "Fresh Scala" project. Basically, the idea was to have continuous integration of the major libraries in Scala-land and weekly milestones, so that the community and the ecosystem could give better feedback on new versions of Scala. Due to a lack of volunteer time, we never got the CI system built, and here we are a year and a half later with the same Scala version fragility issues. But one thing has changed: Scala is no longer exclusively an academic and community effort. Scala the language and some associated libraries are now part of an enterprise-targeted suite from TypeSafe, a huge (20+ heads) venture-funded company that is supposed to be bringing Scala to the enterprise. But Scala's version fragility creates two huge costs for complex enterprise-type systems:

- All dependencies must be compiled against the exact same version of Scala and every referenced Scala library, which makes in-house builds amazingly complex or time-consuming when there are multiple teams.
- External libraries (like Lift) cannot have multiple layers of dependencies, because you need to match the exact version of every library and Scala version all the way down.

So, if you're in an organization with 2 or 3 development teams that are all generating internal Scala code, you can generally agree on the version of Scala and the various libraries that you're going to use. When a new version of Scala or the libraries is released, you can get a couple of people in a room and choose to upgrade (or not). But once you get past 3 teams, the coordination efforts become significant. Trying to get 10 different teams to coordinate versions of Scala and Lift and ScalaZ and Rogue and Dispatch, etc. is going to be a significant cost.

As a library builder, the version fragility issue means that our library is not nearly as good as it could be and we can't service our community the way we want. In terms of servicing the community, we cannot have predictable releases of Lift, because we try to time our releases such that Lift will be released soon after a new version of Scala is released.
Given that Scala release schedules are not predictable, and given that we need to wait for the dependencies (e.g., ScalaCheck and Specs) to be upgraded, we can't answer simple questions like "when will Lift support Scala 2.8.2?" More importantly, there are few external Lift modules, and they are generally available only for specific versions of Scala and Lift; there are always issues when there's a version mismatch. Modules like Rogue and Reactive Web are available for a small subset of Lift/Scala version combinations. There are dozens of folks who have smaller modules for Lift, but none of them are cross-compiled for the last year's worth of Scala and Lift releases. This basically means you can use Rogue (something I'd suggest) or you can use Reactive Web (something else I'd suggest), but you can't use them in the same project, because there's no guarantee that they are compiled against the same versions of Lift and Scala, nor is there any guarantee that they will be compiled against the versions of Scala and Lift that you want to use.

So, the Scala library and framework ecosystem is about as big as it can get, given the Scala version fragility issue. Because it's not possible to have multiple layers of library dependencies in Scala, there will either be monolithic projects like Lift or islands of projects like ScalaZ. And at this point, Scala does not have enough native libraries and frameworks to make it enterprise-worthy. Yes, you can use Java libraries, but as others have found, wrapping Java libraries in Scala is often not worth the cost/effort.

What have I done about the issue? Over the years, I've raised the issue privately and it has not been addressed. I have tried to organize a project to address part of the issue, but haven't had the volunteer uptake for the project. I have had discussions with Paul Phillips about what could be done, and Paul, Jevgeni and I even worked out a system that could be integrated into the Scala compiler to address the issue, back in 2010 when we were at the JVM Language Summit. The best answer so far is the Migration Manager, a closed source tool that may address some of the issues (I haven't tested it, because I'm not going to use any non-OSS software for compiling or testing Lift, to avoid any legal issues).

What we need:

- Something like Scala Fresh, so that there's continuous integration across the major (and not-so-major) libraries and frameworks in the Scala ecosystem.
- Something baked into the Scala compiler that makes the version fragility issue go away.

What you can do:

- Blog about the issue and raise awareness of the version fragility issue and how it has impacted you.
- Next time you're on a TypeSafe "what would you pay us money for?" call, tell them that a vibrant Scala ecosystem is critical to wider adoption and that the version fragility issue is significantly impacting both internal development efforts and ecosystem growth. In order for your company to consider buying anything commercial from TypeSafe, you need broader internal adoption of Scala, and that comes from reducing the transaction costs of using Scala – including eliminating the version fragility issue in the open source parts of Scala.

Why am I writing this post today? The version fragility issue has come up a bunch of times on the Lift list and the Lift committers list in the last week. It's a non-trivial piece of pain, and rather than being "go with the flow" about it, I figured I'd write about it and let folks know that it's pain – and it's pain that could go away.
Reference: Scala’s version fragility make the Enterprise argument near impossible from our JCG partner  David Pollak at the Good Stuff blog. Related Articles :Scala Tutorial – code blocks, coding style, closures, scala documentation project Scala use is less good than Java use for at least half of all Java projects Scala Tutorial – scripting, compiling, main methods, return values of functions Scala Tutorial – Tuples, Lists, methods on Lists and Strings Testing with Scala How Scala changed the way I think about my Java Code...

Why You Should Write Unit Tests – Testing Techniques 8

I’ve had lots of reaction to my recent blog on ‘What you Should Test’, some agreeing with me for varying reasons and others thinking that I’m totally dangerous for suggesting that certain classes may not need unit tests. Having dealt with What to test, today’s blog deals with Why you should write unit tests, and today’s example code is based upon a true story: only names, dates and facts have been changed. A client recently asked for a emergency release of some code to display a message on the screen, for legal reasons, on appropriate pages of their web site. The scenario was that a piece of information should be displayed on the screen if it existed in the database – a very simple scenario, which can be covered by a few simple lines of code. However, in the rush to write the code, the developer didn’t write any unit tests and the code contained a bug that wasn’t spotted until the patch reached UAT. You may ask what the bug was, and it was something that we’ve all done at some point in our careers: adding an unwanted semi-colon ‘;’ to the end of a line. I’m going to demonstrate a re-written, stunt double, version of the code using my AddressService scenario that I’ve used in my previous ‘Testing Techniques’ blogs as outlined by the UML diagram below:In this demonstration the functionality has changed, but the logic and sample code structure has essentially remained the same. In the AddressService world, the logic runs like this:Get an address from the database. If the address exists then format it and return the resulting string. If the addres does not exist then return null. If the formatting fails, don’t worry about it and return null.The re-written AddressService.findAddress(…) looks something like this: @Component public class AddressService {private static final Logger logger = LoggerFactory .getLogger(AddressService.class);private AddressDao addressDao;public String findAddressText(int id) {logger.info("In Address Service with id: " + id); Address address = addressDao.findAddress(id);String formattedAddress = null;if (address != null); try { formattedAddress = address.format(); } catch (AddressFormatException e) { // That's okay in this business case so ignore it }logger.info("Leaving Address Service with id: " + id); return formattedAddress; }@Autowired @Qualifier("addressDao") void setAddressDao(AddressDao addressDao) { this.addressDao = addressDao; } }Did you spot the bug? I didn’t when I reviewed the code… Just in case, I’ve annotated a screen shot below:The point of demonstrating a trivial bug, a simple mistake that anyone can make, is to highlight the importance of writing a few unit tests because unit tests would have spotted the problem and saved a whole load of time and expense. Of course, each organisation is different, but releasing the above code caused the following sequence of events:The application is deployed to Dev, Test, and UAT. The test team checked that the modified screen works okay and passes the change. Other screens are regression tested and found to be incorrect. All failing screens are noted. An urgent bug report is raised. The report passes through various management levels. The report gets passed to me (and I miss lunch) to investigate the possible problem. The report gets sent to three other members of the team to investigate (four pairs of eyes are better than one) The offending semi-colon is found and fixed. The code is retested in dev and checked in to source control. The application is built and deployed to Dev, Test, and UAT. 
11. The test team checks that the modified screen works okay and passes the change.
12. Other screens are regression tested and pass.
13. The emergency fix is passed.

The above chain of events obviously wastes a good number of man hours, costs a shed load of cash, unnecessarily raises people's stress levels, and tarnishes our reputation with the customer: all of which are very good reasons for writing unit tests. To prove the point, I've written the three missing unit tests, and it only took me an extra 15 minutes of development time, which, compared with the number of man hours wasted, seems a good use of developer time.

@RunWith(UnitilsJUnit4TestClassRunner.class)
public class WhyToTestAddressServiceTest {

  private AddressService instance;

  @Mock
  private AddressDao mockDao;

  @Mock
  private Address mockAddress;

  /**
   * @throws java.lang.Exception
   */
  @Before
  public void setUp() throws Exception {
    instance = new AddressService();
    instance.setAddressDao(mockDao);
  }

  /**
   * This test passes with the bug in the code
   *
   * Scenario: The Address object is found in the database and can return a
   * formatted address
   */
  @Test
  public void testFindAddressText_Address_Found() throws AddressFormatException {
    final int id = 1;
    expect(mockDao.findAddress(id)).andReturn(mockAddress);
    expect(mockAddress.format()).andReturn("This is an address");

    replay();
    instance.findAddressText(id);
    verify();
  }

  /**
   * This test fails with the bug in the code
   *
   * Scenario: The Address Object is not found and the method returns null
   */
  @Test
  public void testFindAddressText_Address_Not_Found() throws AddressFormatException {
    final int id = 1;
    expect(mockDao.findAddress(id)).andReturn(null);

    replay();
    instance.findAddressText(id);
    verify();
  }

  /**
   * This test passes with the bug in the code
   *
   * Scenario: The Address Object is found but the data is incomplete and so a
   * null is returned.
   */
  @Test
  public void testFindAddressText_Address_Found_But_Cant_Format() throws AddressFormatException {
    final int id = 1;
    expect(mockDao.findAddress(id)).andReturn(mockAddress);
    expect(mockAddress.format()).andThrow(new AddressFormatException());

    replay();
    instance.findAddressText(id);
    verify();
  }
}

And finally, at the risk of sounding smug, I have to confess that although in this case the bug wasn't mine, I have released similar bugs into the wild in the past – before I learnt to write unit tests…

The source code is available from GitHub at: git://github.com/roghughe/captaindebug.git

Reference: Why You Should Write Unit Tests – Testing Techniques 8 from our JCG partner Roger Hughes at the Captain Debug's Blog.

More on Creating Stubs for Legacy Code – Testing Techniques 7

In my last blog, I talked about dealing with the badly behaved, untestable (1) SitePropertiesManager class and how to create stubs by extracting an interface. But what happens when you don't have access to the source code of the legacy class because it's locked away inside a third party JAR file? The answer is one of those things that you really don't think about, but when you see it you realise that it's fairly obvious. To demonstrate this, I'm going to re-write the code from my last blog (2) that tests my simple AddressService. The scenario is the same: the AddressService has to load a site property and decide whether or not to return an address:

public Address findAddress(int id) {

  logger.info("In Address Service with id: " + id);

  Address address = Address.INVALID_ADDRESS;

  if (isAddressServiceEnabled()) {
    address = addressDao.findAddress(id);
    address = businessMethod(address);
  }

  logger.info("Leaving Address Service with id: " + id);
  return address;
}

private boolean isAddressServiceEnabled() {
  return new Boolean(propManager.findProperty("address.enabled"));
}

…except that I'm going to pretend that SitePropertiesManager is locked away inside a JAR file. All the points about making legacy code more testable that I raised previously still stand: you need to move to dependency injection using a SpringFactoryBean implementation and stop relying on the static factory method getInstance(). You also need a way of creating a stub that allows you to isolate your code from the database and file system that are happily used by our rogue class SitePropertiesManager. In this case, as the class is locked inside a JAR file, you can't simply extract an interface; you have to be slightly more cunning and use inheritance. Writing a stub using inheritance is pretty trivial and only takes a few lines of code, as demonstrated below:

public class StubSitePropertiesUsingInheritance extends SitePropertiesManager {

  private final Map<String, String> propMap = new HashMap<String, String>();

  public void setProperty(String key, String value) {
    propMap.put(key, value);
  }

  @Override
  public String findProperty(String propertyName) {
    return propMap.get(propertyName);
  }
}

The big idea here is that I can now polymorphically inject my stub instance into my AddressService class without it knowing that it's been fooled.
public class LegacyAddressServiceUsingInheritanceTest {

  private StubAddressDao addressDao;

  private StubSitePropertiesUsingInheritance stubProperties;

  private LegacyAddressService instance;

  @Before
  public void setUp() {
    instance = new LegacyAddressService();

    stubProperties = new StubSitePropertiesUsingInheritance();
    instance.setPropertiesManager(stubProperties);
  }

  @Test
  public void testAddressSiteProperties_AddressServiceDisabled() {

    /* Set up the AddressDao stub for this test */
    Address address = new Address(1, "15 My Street", "My Town", "POSTCODE", "My Country");
    addressDao = new StubAddressDao(address);
    instance.setAddressDao(addressDao);

    stubProperties.setProperty("address.enabled", "false");

    Address expected = Address.INVALID_ADDRESS;
    Address result = instance.findAddress(1);

    assertEquals(expected, result);
  }

  @Test
  public void testAddressSiteProperties_AddressServiceEnabled() {

    /* Set up the AddressDao stub for this test */
    Address address = new Address(1, "15 My Street", "My Town", "POSTCODE", "My Country");
    addressDao = new StubAddressDao(address);
    instance.setAddressDao(addressDao);

    stubProperties.setProperty("address.enabled", "true");

    Address result = instance.findAddress(1);

    assertEquals(address, result);
  }
}

You may well ask: why not always use inheritance? The answer is that the downside of this technique is that the test code is tightly coupled to the wild SitePropertiesManager class. That is not too much of a problem in this case and, being the pragmatic programmer, I guess it doesn't really matter, as having code that's neat, tested and reliable is better than having code that's loosely coupled but without unit tests.

(1) Designed without taking unit testing into consideration.
(2) The source code is available from GitHub at: git://github.com/roghughe/captaindebug.git

Reference: More on Creating Stubs for Legacy Code – Testing Techniques 7 from our JCG partner Roger Hughes at the Captain Debug's Blog.

Java Recursion basics

For those who don’t know what recursion is (and like a good laugh), click on this link: Google search: Recursion and click on the “did you mean…” item. Hopefully you’ve finally figured out that recursion is anything that refers to itself (if not, then you may be stuck browsing google forever trying to find out what recursion is!). A fairly common example of recursion is the Fibonacci numbers. The pattern for Fibonacci numbers is to add the 2 previous terms together for the next term, starting with one and one. Below is the recurrence relationship for Fibonacci numbers: F(1) = F(2) = 1 F(n) = F(n-1)+F(n-2) A recurrence relationship is any relationship where the original function refers to itself. So how do we find F(5)? F(5) = F(4) + F(3) F(5) = [F(3)+F(2)] + [F(2)+F(1)] F(5) = [F(2)+F(1)] + 1 + 1 + 1 F(5) = 1+1+1+1+1 F(5) = 5 Seem like a lot of work? Well, to the computer it’s fairly fast most of the time. Later on, I’ll tell you about Dynamic Programming so we can speed this up when you want to compute large Fibonacci numbers. So what are all the parts of a recursive function? First of all, what is a recursive function? A recursive function is any function that calls itself (either directly or indirectly). Here’s a simple example in Java: public static void doIt() { doIt(); }Of course, this will eventually cause a stack-over flow error, so it’s not recommended you try this code for real. All useful recursive functions have this general property: reduce the problem until it’s so easy the computer can solve it. To fulfill this, recursive functions must have:Base cases defined (cases where the solution is obvious, and can’t be reduced any further) Reduction step (place to simplify the given problem) Recursion call to itself (basically solve the simpler case)In the Fibonacci recursive function above, you can see that it was recursing until it was just adding up 1′s. This is because in the Fibonacci sequence 1 is the base case, so we just had to add up 1 some number of times to get F(n). In theory, all recursive functions can be written iteratively, and all iterative functions can be written recursively. However, in practice you’ll find that one or the other of these philosophies will work better depending on the case. Let’s look at the Factorial function recursively, and compare it to its iterative relatives. Factorial(N) = 1*2*3*…*N Basically, multiply all the integers from 1 to N to get the factorial of N. Implemented iteratively, your code would look something like this: public static int iterativeFactorial(int n) { int answer = 1; for (int i = 1; i < n; i++) { answer *= i; } return answer; }We can also write the recursive equivalent of this function: F(1) = 1 F(N) = F(N-1)*N can you see how this would yield the same results as the iterative factorial? Here’s the code to compute factorials recursively: public static int recursiveFactorial(int n) { if (n == 1) { return 1; } else { return n * recursiveFactorial(n-1); } }So, in terms of performance, how does recursion stack up against the iterative solution here? Sadly, the answer is quite poorly. Recursive function here requires lots of memory to store the method stack and keep track of all the variables in each recursive call, while iterative solutions only have to keep track of the current state. So why even bother with recursion? Because many times when recursion is used correctly it’s performance can out-strip that of iterative solutions, and recursive functions can also be easier to write (sometimes). 
Dynamic Programming

Dynamic programming is a form of recursion, but it's implemented iteratively. Remember our Fibonacci computation above?

F(5) = F(2) + F(1) + F(2) + F(2) + F(1)
F(5) = 3 * F(2) + 2 * F(1)

We have quite a few "over-computations" here. It was only necessary to compute F(2) once, and F(1) once. In this case, it wasn't too computationally tasking to compute these few terms, but there will be some situations where it will become almost impossible to recompute the solutions hundreds of times. So instead of re-computing, we store the answers away:

public static int dynamicFibonacci(int n) {
  int[] prevSolutions = new int[n];
  if (n == 1 || n == 2) {
    return 1;
  }
  prevSolutions[0] = 1;
  prevSolutions[1] = 1;
  for (int i = 2; i < prevSolutions.length; i++) {
    prevSolutions[i] = prevSolutions[i-1] + prevSolutions[i-2];
  }
  return prevSolutions[n-1];
}

So, take F(5) again. If we did it the recursive way, it would have been 8 calls to recursiveFibonacci. However, here we only computed F(1), F(2), F(3), F(4), and F(5) once each. This gain of 3 fewer calls may not seem like much, but what if we tried computing F(50)? dynamicFibonacci will only compute 50 numbers, but recursiveFibonacci could take over 1000 calls (of course, I haven't counted, so I don't know if it's over 1000 or not).

The last note on dynamic programming is that it only helps in cases where we have tons of overlap. Remember the recursiveFactorial function? If we called recursiveFactorial(50) and dynamicFactorial(50), they would take roughly the same time to run, because we're making the same number of computations. This is because there's no overlap whatsoever. This is also why sorting algorithms are a poor choice to implement with dynamic programming: if you analyze most sorting algorithms, they have almost no overlapping solutions, and thus are poor candidates for dynamic programming.

Here are some questions about recursion and dynamic programming:

1. Implement the recursiveFactorial method (and you thought I had forgotten to put this in there).
2. For the given recursive relationship, write a recursive method that will find F(N).
3. What does this recursive relationship mean in iterative terms? Write an iterative solution to this problem.
4. Is this recursive relationship a good candidate for dynamic programming? Why/why not?
5. Is there a better way to solve this problem than the iterative or recursive solutions? What is it (if there is one)? Hint: think of Carl Gauss.

For problems 2-5, use this recursive relationship:

F(1) = 1
F(N) = F(N-1) + N

Answers are to come…

Reference: Recursion from our JCG partners at the Java Programming Forums.

Java Examples & Code Snippets by Java Code Geeks – Official Launch

Hi all,

Here at Java Code Geeks we are striving to create the ultimate resource center for Java developers. In that direction, during the past few months we have made partnerships, we have set up a Java and Android tutorials page and we have created open source software. But we did not stop there. We are now proud to announce the official launch of our Java Examples & Code Snippets dedicated site. There you will find a wealth of Java snippets that will help you understand basic Java concepts, use the JDK API and kick start your applications by leveraging existing Java technologies. The main categories currently are:

- Java Basics
- Core Java
- Enterprise Java
- Desktop Java
- Android

Our goal was to make the snippets as easy to use as possible. For this reason, the vast majority of them can be used as standalone applications that showcase how to use a particular API. The snippets are ready to go: just copy and paste, run the application and see the results. We hope that this effort will be a great aid to the community, and we are really glad to have helped create it. We would be delighted if you helped spread the word, allowing more and more developers to come in contact with our content. Don't forget to share!

Java Examples & Code Snippets

Happy coding everyone!

Cheers,
The Java Code Geeks team

Musing on mis-usings: ‘Powerful use, Damaging misuse’

There’s an old phrase attributed to the former British Prime Minister Benjamin Disraeli which states there are three types of lies: “lies, damn lies and statistics”.  The insinuation here is that statistics are so easy to make up they are unreliable.  However, statistics are extensively used in empiracle science so surely they have some merit? In fact, they have a lot of merit.  But only when they are used corrrectly.  The problem is they are easy to misuse.  And when misused, misinformation happens which in turn does more more harm than good. There are strong parallels to this narrative in the world of software engineering. Object orientated languages introduced the notion of inheritance, a clever idea to promote code reuse. However, inheritance – when misused – can easily lead to complex hierarchies and can make it difficult to change objects. The misuse of inheritance can reek havoc and since all it takes to use inheritance (in Java) is to be able to spell the word “extends”, it’s very easy to reek such havoc if you don’t know what you are doing. A similar story can be told with polymorphism and with design patterns. We all know the case of someone hell bent on using a pattern and thinking more about the pattern than the problem they are trying to solve.  Even if they understand the difference between a Bridge and an Adapter it is still quite possible that some part of the architecture may be over engineered. Perhaps it’s worth bearing in mind that every single one of the GOF design pattern is already in JDK, so if you really want it in your architecture you don’t have to look very far – otherwise only use when it makes sense to use it. This ‘Powerful use, damaging misuse’ anti-pattern is ubiquitous in Java systems.  Servlet Filters are a very handy feature for manipulating requests and reponses, but that’s all they are meant to do.  There is nothing in the language to stop a developer treating the Filter as a classical object, adding public APIs and business logic to the Filter.  Of course the filter is never meant to be used this way and when they are trouble inevitably happens.  But the key point is that it’s easy for a developer to take such a powerful feature, misuse it and damage architectures.  ‘Powerful use, damaging misuse’ happens very easy with Aspects, even Exceptions (we have all seen cases where exceptions were thrown and it would have made more sense to just return a boolean) and with many other features.  When it is so easy to make mistakes, inevitably they will happen.  The Java compiler isn’t going to say – ‘wait a sec do you really understand this concept?’ and codestyle tools aren’t sophisticated enough to spot misuse of advanced concepts. In addition, no company has the time to get the most senior person to review every line of code.  And even the most Senior Engineer will make mistakes.  Now, much of what has been written here is obvious and has already been well documentated.   Powerful features generally have to be well understood to be properly used.  The question I think worth asking is if there is any powerful feature or engineering concept in a Java centric architecture which is not so easy to misuse? I suggest there is at least one, namely:  Encapsulation.  Firstly, let’s consider if encapsulation didn’t exist.  Everything would be public or global (as in Javascript).  As soon as access scope narrows, encapsulation is happening which is usually a good thing. Is it possible to make an architecture worse by encapsulating behaviour?  
Well it’s damn hard to think of a case where it could.  If you make a method private, it may be harder to unit test. But is it really?  It’s always easy to unit test the method which calls it, which will be in the same class and logical unit. There’s a lesson to be learnt here. As soon as you design anything which something else uses, whether it be a core component in your architecture, a utility library class or a REST API you are going to tell the world about, ask youself:How easy is it for people to misuse this? Is it at the risky levels of inheritance or the safer levels of encapsulation? What are the consequences of misuse? And what can you do to minimise misuse and its consequences? Aim to increase ‘powerful use’ and minimise ‘damaging misuse’! Reference: Musing on mis-usings: ‘Powerful use, Damaging misuse’ from our JCG partner Alex Staveley at the Dublin’s Tech Blog blog. Related Articles :Java 7 Feature Overview Java SE 7, 8, 9 – Moving Java Forward Recycling objects to improve performance Java Secret: Loading and unloading static fields Java Best Practices Series Laws of Software Design...

Multitenancy in Google AppEngine (GAE)

Multitenancy is a topic that has been discussed for many years, and there are many excellent references readily available, so I will just present a brief introduction. Multitenancy is a software architecture where a single instance of the software runs on a server, serving multiple client organizations (tenants). With a multitenant architecture, an application can be designed to virtually partition its data and configuration (business logic), so that each client organization works with a customized virtual application instance. It suits SaaS (Software as a Service) cloud computing very well; however, multitenant systems can be very complex to implement. The architect must be aware of security, access control, etc. Multitenancy can exist in several different flavors:

Multitenancy in Deployment

- Fully isolated business logic (dedicated server, customized business process)
- Virtualized application servers (dedicated application server, single VM per app server)
- Shared virtual servers (dedicated application server on shared VM)
- Shared application servers (threads and sessions)

This spectrum of installations runs from fully dedicated to fully shared infrastructure.

Multitenancy and Data

- Dedicated physical server (DB resides in isolated physical hosts)
- Shared virtualized host (separate DBs on virtual machines)
- Database on shared host (separate DB on same physical host)
- Dedicated schema within shared databases (same DB, dedicated schema/table)
- Shared tables (same DB and schema, segregated by keys – rows)

Before jumping into the APIs, it is important to understand how Google's internal data storage solution works. Introducing Google's BigTable technology: it is the storage solution for Google's own applications such as Search, Google Analytics, Gmail, AppEngine, etc.

BigTable is NOT:

- a database
- horizontally sharded data
- a distributed hash table

It IS: a sparse, distributed, persistent, multidimensional sorted map. In basic terms, it is a hash of hashes (a map of maps, or a dict of dicts). AppEngine data is in one "table" distributed across multiple computers. Every entity has a Key by which it is uniquely identified (Parent + Child + ID), but there is also metadata that tells which GAE application (appId) an Entity belongs to.

BigTable distributes its data in a format called tablets, which are basically slices of the data. These tablets live on different servers in the cloud. To index into a specific record (record and entity mean pretty much the same thing) you use a 64KB string, called a Key. This key has information about the specific row and column value you want to read from. It also contains a timestamp to allow for multiple versions of your data to be stored. In addition, records for a specific entity group are located contiguously, which facilitates scanning for records.

Now we can dive into how Google implements multitenancy. Implemented in release 1.3.6 of App Engine, the Namespace API (see resources) is designed to be very customizable, with hooks into your code that you can control, so you can set up multi-tenancy tailored to your application's needs. The API works with all of the relevant App Engine APIs (Datastore, Memcache, Blobstore, and Task Queues). In GAE terms, namespace == tenant.

At the storage level of the datastore, a namespace is just like an app-id: each namespace essentially looks to the datastore like another view into the application's data. Hence, queries cannot span namespaces (at least for now) and key ranges are different per namespace. Once an entity is created, its namespace does not change, so doing a namespace_manager.set(…) will have no effect on its key. Similarly, once a query is created, its namespace is set. The same goes for memcache_service() and all other GAE APIs. Hence it's important to know which objects have which namespaces. In my mind, since all of a GAE user's data lives in BigTable, it helps to visualize a GAE Key object as:

Application ID | Ancestor Keys | Kind Name | Key Name or ID

All these values provide an address to locate your application's data. Similarly, you can imagine the multitenant key as:

Application ID | Namespace | Ancestor Keys | Kind Name | Key Name or ID

Now let's briefly discuss the API (Python):

- get_namespace() – no arguments. Returns the current namespace, or an empty string if the namespace is unset.
- set_namespace(namespace) – a value of None unsets the default namespace value; otherwise the namespace must match ([0-9A-Za-z._-]{0,100}). Sets the namespace for the current HTTP request.
- validate_namespace(value, exception=BadValueError) – value is a string containing the namespace being evaluated. Raises the BadValueError exception if the namespace string does not match ([0-9A-Za-z._-]{0,100}).

Here is a quick example:

Datastore Example

tid = getTenant()

namespace = namespace_manager.get_namespace()
try:
    namespace_manager.set_namespace('tenant-' + str(tid))
    # Any datastore operations done here
    user = User('Luis', 'Atencio')
    user.put()
finally:
    # Restore the saved namespace
    namespace_manager.set_namespace(namespace)

The important thing to notice here is the pattern that GAE provides; the Java APIs follow the exact same pattern. The finally block is immensely important, as it restores the namespace to what it was originally (before the request). Omitting the finally block will cause the namespace to remain set for the duration of the request, which means that any API access, whether a datastore query or a Memcache retrieval, will use the namespace previously set.
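For the Java side, a minimal sketch of the same save-set-restore pattern using App Engine's NamespaceManager; the getTenant() call is a hypothetical tenant lookup, as in the Python example:

import com.google.appengine.api.NamespaceManager;

// ...
String tenantId = getTenant(); // hypothetical tenant lookup
String oldNamespace = NamespaceManager.get();
try {
  NamespaceManager.set("tenant-" + tenantId);
  // Any datastore or memcache operations done here see the tenant's namespace
} finally {
  // Restore the saved namespace, exactly as the Python example does
  NamespaceManager.set(oldNamespace);
}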
Furthermore, to query for all the namespaces created, GAE provides some metadata queries:

Metaqueries

from google.appengine.ext.db.metadata import Namespace

q = Namespace.all()
if start_ns:
    q.filter('__key__ >=', Namespace.key_for_namespace(start_ns))
if end_ns:
    q.filter('__key__ <=', Namespace.key_for_namespace(end_ns))

results = q.fetch(limit)
# Reduce the namespace objects into a list of namespace names
tenants = map(lambda ns: ns.namespace_name, results)
return tenants

Resources:

- http://www.youtube.com/watch?v=zRwPSFpLX8I&feature=related
- BigTable: https://docs.google.com/viewer?url=http%3A%2F%2Flabs.google.com%2Fpapers%2Fbigtable-osdi06.pdf
- http://www.youtube.com/watch?v=tx5gdoNpcZM
- http://msdn.microsoft.com/en-us/library/aa479086.aspx#mlttntda_topic2
- http://www.slideshare.net/pnicolas/cloudmultitenancy
- http://code.google.com/appengine/articles/life_of_write.html

Reference: Multitenancy in Google AppEngine (GAE) from our JCG partner Luis Atencio at the Reflective Thought blog.