Dealing with technical debt

We’re drowning in technical debt. We have a mountain to climb and don’t really know where to start. Sound familiar? For many of us working on legacy code bases this is the day-to-day reality. But what to do about it? How did we get here?

Technical debt is always the fault of those “other guys”. Those idiot developers that were here a few years ago. Morons. Obviously couldn’t code their way out of a game of life if their on-going existence depended on it. I hate to tell you, but: we are those other guys. The decisions we make today will look foolish tomorrow. We’ll have more information then, and a different perspective; we’ll know how the product and technology were going to evolve. We can’t know that today, so many of our decisions will turn out to be wrong.

Where to start

Classes are like best-selling novels – some are spectacularly more popular (and more debt-laden) than others. One class, one package, one module will be much worse than the others. There’ll be a handful of classes, packages etc. that are much worse than all the rest. How does this happen? Well, one class ends up with a bit of technical debt. Next time I come to change that class, I’m too lazy to fix the debt, so I just hack something together to get the new feature done. The time after that, there’s a pile of debt – only that guy’s too busy to fix it, so he adds in a couple of kludges and leaves it. Before you know it, this one class has become a ten-thousand-line monster that’s pure technical debt.

It’s like the broken-windows theory – if code is already crappy, it’s much easier to just make it a little more crappy. If the code’s clean, it’s a big step to add in a hack. So little by little, technical debt accumulates in areas that were already full of debt. I suspect technical debt in code follows a power law – most classes have a little bit of debt, but a few are really shitty, with one diabolical class in particular.

Where to start? Resist the temptation to make easy changes to relatively clean classes – start with the worst offender. It will be the hardest to fix, and it might take a long time – but you’ll get the best bang for your buck by fixing the most debt-heavy piece of crap code. If you can fix the worst offender, your debt will have to find somewhere else to hide.

The 80/20 rule

There’s a cliché that 80% of the cost of software is maintenance and only 20% is the initial build. Let’s imagine a team that has 1200 hours a month to spend on useful work. For those 1200 hours of useful work, we’ll spend four times as much over the lifetime of the software maintaining it – from the 80/20 rule. Although we completed 1200 hours of feature work this month, we committed ourselves to 4800 hours of maintenance over the lifetime of the code. That means next month we have to spend a little of those 4800 hours on maintenance, with the rest of the time spent on useful, feature-adding work. However, adding new features commits us to even more maintenance work. The following month, we’ve got nearly twice the code to maintain, so we spend nearly twice the amount of time maintaining it and even less time producing value-adding features. Month by month we spend more and more time dealing with the crap that was there before and less and less time adding new features. Does this sound familiar? This is what technical debt feels like. After a couple of years the pace has dropped; you’re spending half your time refactoring and fixing the junk that was there before. “If only we could get rid of this technical debt”, you cry.
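To make the compounding concrete, here is a toy model, not from the original article: the 1200-hour capacity and the four-to-one maintenance ratio come from the text above, but the 5%-per-month repayment rate is purely an assumed illustration.

public class DebtModel {
    public static void main(String[] args) {
        final double capacity = 1200.0;  // useful hours available per month
        double owedMaintenance = 0.0;    // outstanding lifetime-maintenance hours

        for (int month = 1; month <= 24; month++) {
            // Assumption: 5% of the outstanding maintenance pile falls due each month
            double maintenance = Math.min(capacity, owedMaintenance * 0.05);
            double features = capacity - maintenance;
            // Each feature hour commits the team to four hours of lifetime maintenance
            owedMaintenance += features * 4 - maintenance;
            System.out.printf("month %2d: %4.0fh features, %4.0fh maintenance%n",
                    month, features, maintenance);
        }
    }
}

Run it and the feature hours shrink month after month while the maintenance share grows – exactly the slow suffocation described above.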
What is technical debt?

We can all point to examples of crappy, debt-laden code we’ve seen. But what’s the impact of technical debt? Technical debt is simply an inability to quickly make changes to an existing system. This is the cost to the business of technical debt – what should be quick changes take an unpredictably long time. What do we do when we remove technical debt? We generalize and find more abstract solutions. We clarify and simplify. We remove duplication and unnecessary complexity. The net effect of reducing technical debt is to reduce inventory.

Perhaps the amount of code – our inventory – is a good approximation for the amount of technical debt in a system. If I’m confronted with a million lines of code and need to make a change, it will probably take a while. However, if I’m only confronted by 1000 lines of code, the change will be much quicker. And if I’m confronted by zero lines of code, there’s zero cost – I can do whatever I like. The cost of making a change to a system is roughly proportional to the size of the system. Large, complex systems take longer to make changes to than small, self-contained ones. All code is a liability – the more code you have, the bigger the debt. When we’re paying back technical debt, are we really just reducing inventory? Is what feels like technical debt actually interest payments on all the inventory we hold?

What are the options?

Big bang

One option is to down tools and fix the debt. Not necessarily throw everything out and rewrite, but spend some time cleaning up the mess. This is the big bang approach to dealing with technical debt. It’s pretty unusual for the business to agree to a plan like this – no new features for a year? Really? With no new features for a year, what would all those product managers do all day? From the 80/20 rule, the lifetime cost for any piece of code is four times what it cost to create. If it took three months to make, it will take a year to pay back. So wait, we’re gonna down tools for a year and only pay back three months’ worth of technical debt? Seriously? We’ll be marginally better off – but we’ll still be in a debt-laden hell-hole, and we’ll have lost a year’s worth of features. No way!

Dedicated team

Even if you try to do big bang, it ends up becoming the dedicated team approach. As a compromise, you get a specific team together to fix the debt, while everyone else carries on churning out new features. One team are removing debt while another team are re-adding it. What are the chances that debt is being removed faster than it’s being added? Exactly. Nil. It makes sense – you’d need a debt-removing team four times bigger than the team adding new features just to stand still.

Boy scout

You could adopt a policy of trying to remove technical debt little and often – the boy scout approach. On every single development task, try to remove debt near where you’re working. If there are no tests, add some. If the tests are poor, improve them. If the code’s badly factored, refactor it. The boy scout rule: leave the camp cleaner than you found it. This is generally much easier to sell, as there’s only minimal impact on productivity: it’s much cheaper to make changes to a part of the system you understand and are already working in than to open up whole new areas. Over time you can massively slow down the rate at which debt grows. Inevitably the system will still grow, and inevitably the amount of debt will increase. But if you can minimise the maintenance cost, you’ll keep the code small and nimble for as long as possible.
Professionalism

If we can lessen the maintenance cost of code even just a little, we can save ourselves a fortune over the life of the code. If we can reduce the multiple to just three times the initial cost, so our 1200 hours of work only costs 3600 hours in maintenance, we’ve saved enough development capacity to build another feature of the same size! For free! Hey, product manager: if we do our job better, it’ll take no longer and you’ll get free features. Who doesn’t want free features?

If we can create well-crafted, DRY, SOLID code with good test coverage, we have a good chance of minimising the lifetime maintenance cost. This is the best way we can keep our productivity up, avoid getting mired in technical debt, and keep the code base responsive to changing requirements. It’s the only way we can remain productive and agile. Frankly, anything else is just unprofessional. If you’re deliberately committing your company to spend excessive amounts maintaining your shitty code – what the fuck, exactly, are they paying you for?

Reference: Dealing with technical debt from our JCG partner at the Actively Lazy blog.

Eclipse 3.6 vs IntelliJ IDEA 10.5: Pros and Cons

After having worked with Eclipse for over 5 years, I came to use IntelliJ IDEA intensively for three months on a J2EE project, and took this as an opportunity to compare the two. You can’t really compare 5 years and 3 months, but I still believe it is long enough to get a pretty good overview of what a tool is like.

For the impatient: IntelliJ is a very good tool; its killer feature for me is its excellent support for other languages such as Groovy (e.g. for unit tests) and Clojure. Many details are more worked-out and have higher usability than in Eclipse, for example search & replace with match highlighting and replacement preview. Its support for navigability and refactoring across multiple languages (Java, JSP, JSF, HQL, Spring config in my case) is also an absolutely great feature for productivity. And of course I have to credit it for being a Czech product [1] (interestingly enough, NetBeans also comes from the Czech Republic [2]); it’s a pity Eclipse doesn’t have this link too.

My main issue with IntelliJ is its performance. First, running tests is slow, because IntelliJ only (re)compiles the test/source when you hit the run button, as opposed to Eclipse’s incremental compilation. And that makes TDD very painful. (I tried to use the old Eclipse Mode plugin, but it has problems with IntelliJ 9/10.) Second, sometimes the UI freezes* and you have to wait seconds or tens of seconds for it to respond again (even after disabling most plugins and some analysis). It doesn’t happen too often, but often enough to be noticed, to be annoying, and to interrupt the development flow. (*) Update: UI freezes may be a specific issue of the Mac 64-bit 1.6 JDK.

So I guess I’ll use either Eclipse or IntelliJ with respect to the needs of the project at hand, and hope for IntelliJ to resolve its performance issues (as NetBeans did).

The Software Compared

Eclipse 3.6 – I’ve worked with Eclipse since 3.0 or maybe even before, on many commercial projects.
IntelliJ IDEA Ultimate – the commercial, full-featured edition; the Community edition is good enough unless you need special support for frameworks like Java EE, Spring, and Hibernate (see the editions comparison).

What’s Cool in IntelliJ IDEA

The things I’ve stumbled upon and considered noteworthy (there are certainly more such goodies):

- Great support for Groovy and Clojure (and others). I’ve used Groovy to write unit tests for my Java project and it worked pretty well (only click + Alt+Enter on a non-existing method to create it didn’t work unless the target class was a nested (static) class in the test itself).
- Out-of-the-box support for Spring*: you can click a bean class name in applicationContext.xml to jump to it, deprecated classes are struck out, and bean definitions are validated against available constructors and setters.
- Refactoring: Move can move several members/methods at once. Move Method is aware of the current class’ field of the target type, so it is able to automatically insert fieldOfTargetType.movedMethod() – something I miss a lot in Eclipse. Rename also takes care of JSF EL expressions in JSPs and other non-Java references (I suppose it is more clever than a simple text search & replace).
- Completion proposals are displayed as you type (without pressing a hotkey – I love that) AND they include types that haven’t been imported yet (@BeforeClass in a test…).
- (Auto)completion proposals work across multiple languages: CSS classes in JSPs (and in CSS styles it proposes e.g. color names), click-through in JSF EL expressions (well, at least sometimes). Usage search can also find method usages in JSPs, Spring config etc.
- Debugging: the Variables window automatically shows not only the local variables but also expressions based on them that are used in the code, such as array_variable.length – a good time saver.
- JavaDoc: closing tag completion – I’ve always missed that so much in Eclipse!
- When you generate a foreach loop (“itco” + Tab) and change the source collection, it updates the element type automatically (in “for (Type t : sourceColl)”).
- Really helpful RegExp integration in find & replace in file – as you type, it shows both the first matched expression and what it will be replaced with.
- General: good at guessing reasonable names for variables. The possibility to define a module for a subdirectory of the main module, so you may have a project using Java 1.4 with tests in Java 5+; this works great with Maven multi-module projects too. The Project view displays Java types directly, so you can distinguish a class from an interface at first glance (Eclipse shows a file icon and you need to expand it first). The Java file structure view can show a “property” instead of a getter and a setter, making it shorter and easier to find what’s really important.

(*) The Ultimate edition only (likely).

I’d recommend reading also the responses to the StackOverflow question “Things possible in IntelliJ that aren’t possible in Eclipse?” – among others they mention click-through anything, autocompletion taking more of the context into account (e.g. the name of the variable), and the Rename Method refactoring updating JSPs, Spring config, etc. In general I’d say that IntelliJ has a strong focus on usability and productivity; it tries to understand what developers usually do and need, helps them with that, and is pretty good at it. The authors claim it to be “The Most Intelligent Java IDE” and I think they do not exaggerate (or at least not too much).

Not So Cool

(In no particular order.)

- Eclipse only needs two hotkeys: Completion (^Space) for class/property/templates/surround-with, and Quick Fix (^1 – the most powerful tool in Eclipse) for fixes such as imports, refactorings etc. In IntelliJ you have several hotkeys for completion, one for live templates, one for fixes (intentions)… I’ve never managed to remember them all, nor to decide which one I should use in a particular situation.
- No JavaDoc popup on mouse-over (you need ^J).
- The Live Template editor sucks, at least on Mac (can’t type end-of-line, a curly bracket on a Norwegian keyboard with Alt+Shift+8, backspace, …). Fortunately you can select code in an editor and use Tools | Save as Live Template (though you should likely un-indent it first).
- No favorite static imports – for the first static method of a particular class I have to: 1) write the start of the method name; 2) press Ctrl+Alt+Space (Class name completion – don’t ask me why); 3) select the desired method such as CoreMatchers.allOf and press Alt+Enter as suggested in the pop-up’s “status bar”, then select Import statically. From then on, all the static methods of the class appear in the normal Ctrl+Space completion list (that’s nice, though). In Eclipse I can add my beloved JUnit/Hamcrest/Mockito friends to favorite imports and have them always available.
- Slowness. Slow testing – changed files are compiled just before a test is run, while in Eclipse they have been compiled as soon as they were saved. Sometimes IntelliJ freezes for seconds or tens of seconds :’( (Update: UI freezes may be a specific issue of the Mac 64-bit 1.6 JDK.) Running analysis is SLOW (Checkstyle, …) and can kill your IDE (and you too, if you’re of a weaker heart).
- The UI is a little buggy, at least on Mac – dialogs are not hidden when you click on another menu. Not a big issue, but annoying anyway.
- Running a webapp on Tomcat works great for some colleagues but not all – useless logging without any details, the server doesn’t start, no hints on how to solve it, the Server -> Output window contains a confusing “Disconnected from server”, the Tomcat Log window contains only INFO logs (where are my debug logs?!), the file logs/catalina.out doesn’t exist anymore, a Tomcat failure is visible in the browser yet there’s nothing in the logs…
- JavaDoc: ‘#method’ + ^Space in Eclipse generates {@link #method} automatically; not so in IntelliJ. Even worse, class lookup doesn’t work at all in IntelliJ without typing a @link first. I’ve found a workaround via a live template, but I have to type its name and invoke it manually anyway.
- I miss Eclipse’s auto-disappearing views (just click anywhere in the editor and they’ll disappear – though in IntelliJ you can use Shift+Esc, and if you un-pin a view then clicking in and out of it will hide it) and the ability to maximize any view with a double-click.
- The number of plugins for IntelliJ is smaller than for Eclipse, though all the main projects likely target it too.

I could perhaps live with the small annoyances (or maybe learn the proper way to do what I’m trying to achieve?), but the performance issues are hard to accept.

Useful Resources

IntelliJ docs: Intentions
IntelliJ docs: Code assistance – overview of the different features for code completion etc.
StackOverflow: Things possible in IntelliJ that aren’t possible in Eclipse?
StackOverflow: Hidden Features IntelliJ IDEA
StackOverflow: IntelliJ Static Import Completion

Reference: Comparison of Eclipse 3.6 and IntelliJ IDEA 10.5: Pros and Cons from our JCG partner Jakub Holy at the Holy Java blog.

Best Of The Week – 2011 – W45

Hello guys,

Time for the “Best Of The Week” links for the week that just passed. Here are some links that drew JavaCodeGeeks’ attention:

* Developing a Service Provider using Java API: A tutorial showing how to develop a service provider using only the JDK classes, implementing a “poor man’s approach” to the Service Locator pattern. Also check out Java EE6 CDI, Named Components and Qualifiers.

* I am a programmer: A very mature post on job security, job satisfaction and pay; it describes the most common kinds of programmer workplaces and the advantages/disadvantages of each of them.

* The end of the geek culture: Here the author argues that the “geek culture” has come to an end, because developers now put less stock in knowing one technology deeply and instead broaden their skills. In other words, the times of specialists and experts in one narrow area are gone.

* Opportunistic Refactoring: Martin Fowler encourages refactoring as an opportunistic activity, i.e. it should be done whenever and wherever code needs to be cleaned up – by whoever. The boy-scout rule applies here: always leave the code behind in a better state than you found it. Also check out Services, practices & tools that should exist in any software development house.

* Security Vulnerabilities in Amazon and Eucalyptus: This article presents some security vulnerabilities in both the Amazon and Eucalyptus infrastructures, which could be used to gain complete control of the victim’s account and its associated stored data. The issues have been resolved, but they definitely showcase one of the largest downsides of relying on a private cloud infrastructure.

* Best Practices for Securing Apache Tomcat 7: A list of tips for securing your Tomcat installation, such as disabling the shutdown port, using the Security Lifecycle Listener, specifying the interface for the connectors etc. Also see Multiple Tomcat Instances on Single Machine and Zero-downtime Deployment (and Rollback) in Tomcat.

* Apache Harmony Finale: The Apache Harmony project codebase has been put into the Apache Attic, i.e. further development has been stopped and only a read-only version of the code is provided. The project provided a clean-room, viral-free implementation of the JDK and JVM and also strove to provide modularity. Harmony was also used by Google as the basis of the runtime library for Android.

* Comparing Java 7 Garbage Collectors Under Extreme Load: An interesting comparison of the Java 7 garbage collectors under extreme load, which raises some concerns about the performance of the new G1 collector.

* Coding Guidelines: Finding the Art in the Science: This article examines coding standards and provides some universal guidelines on how to produce more readable and thus more maintainable code. Guidelines include correctly used whitespace and fonts, conventional English usage, moderate use of comments etc.

* Why would a developer invest time in your startup’s platform?: This article provides some tips for startups which wish to offer a platform to developers. Among them: build a killer use case first and then generalise it into a platform, and ruthlessly cut platform features which don’t apply to the current use case.

* IT Projects: 400% Over-Budget and only 25% of Benefits Realized: This article presents the results of a Harvard Business Review study which shows some alarming and disappointing numbers regarding IT project realization. In short, projects run over budget and reap little of the expected benefit. Also check out How many bugs do you have in your code?.

* Startup Lesson: Why a Vacation is not just good for you: A story that shows the importance of fostering a can-do attitude in a team and letting people realize that the company is bigger than a single person and that everyone is replaceable.

That’s all for this week. Stay tuned for more, here at JavaCodeGeeks.

Recycling objects to improve performance

Overview

In a previous article I stated that the reason the deserialization of objects was faster was the use of recycled objects. This is potentially surprising for two reasons: 1) the belief that creating objects is so fast these days that it doesn’t matter, or that it is just as fast as recycling yourself; 2) none of the serialization libraries use recycling by default. This article explores deserialization with and without recycling objects, and shows how creating objects not only is slower, but also slows down the rest of your program by pushing data out of your CPU caches. While this talks about deserialization, the same applies to parsing text or reading binary files, as the actions being performed are the same.

The test

In this test, I deserialize 1000 Price objects, but I also time how long it takes to copy a block of data. The copy represents work which the application might have to perform after deserializing. The test is timed one million times and the results sorted. The X-axis shows the percentile timing, e.g. the 90% value is the worst timing of the best 90% of runs (10% of values are higher). As you can see, the deserialization takes longer if it has to create objects as it goes, and sometimes it takes much, much longer. This is perhaps not so surprising, as creating objects means doing more work and possibly being delayed by a GC. However, it is the increase in the time to copy a block of data which is surprising. This demonstrates that not only is the deserialization slower, but any work which needs the data cache is also slower as a result (which is just about anything you might do in a real application). Performance tests rarely show you the impact on the rest of your application.

In more detail

Examining the higher percentiles (longest times), you can see that the performance is consistently bad if the deserialization has to wait for the GC. And the performance of the copy increases significantly in the worst case.
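As an illustration of the idea only (this is not the article’s linked test code, and the Price fields below are invented for the sketch), recycling means overwriting the fields of pre-allocated instances instead of constructing a new object per record:

public class PricePool {

    static final class Price {
        String symbol;
        double bid, ask;

        // Overwrite fields in place instead of constructing a new Price
        void readFrom(java.nio.ByteBuffer in) {
            byte[] sym = new byte[4];
            in.get(sym);
            symbol = new String(sym, java.nio.charset.Charset.forName("US-ASCII"));
            bid = in.getDouble();
            ask = in.getDouble();
        }
    }

    private final Price[] pool;

    public PricePool(int size) {
        // Allocate every instance once, up front
        pool = new Price[size];
        for (int i = 0; i < size; i++) {
            pool[i] = new Price();
        }
    }

    // Deserialize 'count' prices, reusing the pooled instances
    public Price[] read(java.nio.ByteBuffer in, int count) {
        for (int i = 0; i < count; i++) {
            pool[i].readFrom(in);
        }
        return pool;
    }
}

The trade-off is that callers must consume or copy a Price before the next read overwrites it – which is exactly why no serialization library does this by default.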
The code

Recycling example code

Reference: Recycling objects to improve performance from our JCG partner Peter Lawrey at the Vanilla Java blog.

Essential Stories for any Enterprise Application Product Backlog

Most of the customers I work with are huge companies. When trying to get an application accepted in such an environment, some applications are a real no-brainer – like WebSphere Application Server – while others, like Jira, are really hard to get resources for. I couldn’t help wondering what the reasons for this are. Let’s face it: from the simple examples above, it is obviously not related to anything known as quality. Through careful reengineering I discovered a list of essential features an application must have, and compiled it as backlog items ready to use for your product backlog.

As a procurement manager I want the application to be expensive, in order to live off the bonus I get when I negotiate a 1% reduction in price. Acceptance criteria:
- The price of a minimal installation is at least 5 figures.
- The price for a full installation is at least 500,000 Euro.
- Bonus points when there is a mandatory support option.

As a procurement manager I want the application to scale only by upgrading to an enterprise super deluxe edition, so my daughter can have a horse for Christmas. Acceptance criteria:
- A demo setup needs at least 8GB RAM and 4 cores.
- A system for 100 users needs at least 5 such machines, plus the same number of machines as a hot backup, which you need at least once a month.
- You need 20 machines when 10 of the 100 users want to use it concurrently.

As an administrator I want the application to have only minimal documentation, if any at all, so I can claim to be an expert after reading all of it in an hour and charge a higher salary. Acceptance criteria:
- The documentation is preferably non-existent.
- Lengthy documentation is acceptable as long as it is written so badly that nobody gains any knowledge from reading it.

As an administrator I want the application to be void of any user community, so nobody can provide easy free solutions to problems I claimed to be really hard. Acceptance criteria:
- The appropriate tag at Stack Overflow has a maximum of 200 followers and fewer than 1000 questions.
- If you are looking for a real expert (one that actually understands the product), you have to pay them my monthly salary as an hourly wage, or look in a mental asylum.

As an administrator I want the application to rely heavily on as many other products as possible, so the beneficial effects of the application on my workday are multiplied. Acceptance criteria:
- The application needs a database management system installation from a specific vendor.
- The database itself needs to qualify as an enterprise application according to these criteria.
- The application needs a queuing system, even when it doesn’t have any interface to any system but itself.

As the person responsible for deploying clients I want the application to run only on IE6 or earlier, so people stop asking me for upgrades to Windows 7 or, god forbid, these Apple thingies. Acceptance criteria:
- When the application is run in IE7 or above, a message appears: “You are not running a compatible browser, please upgrade to IE6”.
- When any other browser is used, the application should react with an HTTP 500, or it should crash the browser.

As the manager responsible for deployment of the system I want the system to be still in development, so I never have to install anything. Acceptance criteria:
- The application is labeled early beta, or preferably “latest build from Tom’s machine”.

As a consultant recommending the application I want the application to be really hard to install and equally hard to keep alive, so my contract stays safe for years to come. Acceptance criteria:
- The short installation overview has at least 50 steps.
- A skilled person can’t install the system in less than 3 days.

As a network administrator I want the application to really hog the network, so I can get a larger budget for new shiny hardware. Acceptance criteria:
- The application uses protocols like HTTP in a way that makes caching completely impossible.
- The application downloads itself completely on every restart.

As a user I want the application to be really slow, so I can blame the application for not getting anything done. Acceptance criteria:
- Every interaction with the application (like moving the mouse) causes the application to freeze for at least one second.
- Every use case requires at least 10 such interactions.

As a user I want the application to start up really slowly, so I can get some coffee and drink it too in the meantime. Acceptance criteria:
- Startup takes at least 10 minutes.
- The application needs to restart at least twice a day.

As a CIO I want the application to use single sign-on, so I can claim we are doing it without bothering with what it actually means. Acceptance criteria:
- The application contains its own single sign-on system.
- The application’s SSO solution is completely incompatible with anything else (otherwise we would get asked to integrate them).

As the person responsible for the security of the system I want the application to have cryptography, because it is soooo coool. Acceptance criteria:
- The application contains some cryptography code, preferably with a nice acronym like ROT13.
- The cryptography is really hard to configure, so something easy like SSL isn’t acceptable.
- All private keys involved need to get emailed to the support distribution list, along with the password (that’s what our processes require).

Reference: Essential Stories for any Enterprise Application Product Backlog from our JCG partner Jens Schauder at the Schauderhaft blog.

Mock Static Methods with PowerMock

In a recent blog, I tried to highlight the benefits of using dependency injection, expressing the idea that one of the main benefits of this technique is that it allows you to test your code more easily by providing a high degree of isolation between classes, and coming to the conclusion that lots of good tests equals good code. But what happens when you don’t have dependency injection and you’re using a third-party library that contains classes of a certain vintage with static methods?

One way is to isolate those classes by writing a wrapper or adaptor around them and using this to provide isolation during testing; however, there’s also another way: using PowerMock. PowerMock is a mocking framework that extends other mocking frameworks to provide much-needed additional functionality. To paraphrase an old advert: “it refreshes the parts that other mocking frameworks fail to reach”. This blog takes a look at PowerMock’s ability to mock static methods, providing an example of mocking the JDK’s ResourceBundle class, which, as many of you know, uses ResourceBundle.getBundle(…) to, well… load resource bundles.

I, like many other bloggers and writers, usually present some highly contrived scenario to highlight the problem. Today is different: I’ve simply got a class that uses a ResourceBundle, called UsesResourceBundle:

public class UsesResourceBundle {

    private static Logger logger = LoggerFactory.getLogger(UsesResourceBundle.class);

    private ResourceBundle bundle;

    public String getResourceString(String key) {
        if (isNull(bundle)) {
            // Lazy load of the resource bundle
            Locale locale = getLocale();
            if (isNotNull(locale)) {
                this.bundle = ResourceBundle.getBundle("SomeBundleName", locale);
            } else {
                handleError();
            }
        }
        return bundle.getString(key);
    }

    private boolean isNull(Object obj) {
        return obj == null;
    }

    private Locale getLocale() {
        return Locale.ENGLISH;
    }

    private boolean isNotNull(Object obj) {
        return obj != null;
    }

    private void handleError() {
        String msg = "Failed to retrieve the locale for this page";
        logger.error(msg);
        throw new RuntimeException(msg);
    }
}

You can see that there’s one method, getResourceString(…), which, given a key, will retrieve a resource string from a bundle. To make this work a little more efficiently, I’ve lazily loaded my resource bundle; once loaded, I call bundle.getString(key) to retrieve my resource.
To test this I’ve written a PowerMock JUnit test:

import static org.easymock.EasyMock.expect;
import static org.junit.Assert.assertEquals;
import static org.powermock.api.easymock.PowerMock.mockStatic;
import static org.powermock.api.easymock.PowerMock.replayAll;
import static org.powermock.api.easymock.PowerMock.verifyAll;

import java.util.Locale;
import java.util.MissingResourceException;
import java.util.ResourceBundle;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.api.easymock.annotation.Mock;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

@RunWith(PowerMockRunner.class)
@PrepareForTest(UsesResourceBundle.class)
public class UsesResourceBundleTest {

    @Mock
    private ResourceBundle bundle;

    private UsesResourceBundle instance;

    @Before
    public void setUp() {
        instance = new UsesResourceBundle();
    }

    @Test
    public final void testGetResourceStringAndSucceed() {
        mockStatic(ResourceBundle.class);
        expect(ResourceBundle.getBundle("SomeBundleName", Locale.ENGLISH)).andReturn(bundle);

        final String key = "DUMMY";
        final String message = "This is a Message";
        expect(bundle.getString(key)).andReturn(message);

        replayAll();
        String result = instance.getResourceString(key);
        verifyAll();
        assertEquals(message, result);
    }

    @Test(expected = MissingResourceException.class)
    public final void testGetResourceStringWithStringMissing() {
        mockStatic(ResourceBundle.class);
        expect(ResourceBundle.getBundle("SomeBundleName", Locale.ENGLISH)).andReturn(bundle);

        final String key = "DUMMY";
        Exception e = new MissingResourceException(key, key, key);
        expect(bundle.getString(key)).andThrow(e);

        replayAll();
        instance.getResourceString(key);
    }

    @Test(expected = MissingResourceException.class)
    public final void testGetResourceStringWithBundleMissing() {
        mockStatic(ResourceBundle.class);
        final String key = "DUMMY";
        Exception e = new MissingResourceException(key, key, key);
        expect(ResourceBundle.getBundle("SomeBundleName", Locale.ENGLISH)).andThrow(e);

        replayAll();
        instance.getResourceString(key);
    }
}

In the code above I’ve taken the unusual step of including the import statements. This is to highlight that we’re using PowerMock’s versions of the static imports and not EasyMock’s. If you accidentally import EasyMock’s statics, then the whole thing just won’t work.

There are four easy steps in setting up a test that mocks a static call:

1. Use the PowerMock JUnit runner:

@RunWith(PowerMockRunner.class)

2. Declare the test class that we’re mocking:

@PrepareForTest(UsesResourceBundle.class)

3. Tell PowerMock the name of the class that contains static methods:

mockStatic(ResourceBundle.class);

4. Set up the expectations, telling PowerMock to expect a call to a static method:

expect(ResourceBundle.getBundle("SomeBundleName", Locale.ENGLISH)).andReturn(bundle);

The rest is plain sailing: you set up expectations for other standard method calls and then tell PowerMock/EasyMock to run the test, verifying the results:

final String key = "DUMMY";
final String message = "This is a Message";
expect(bundle.getString(key)).andReturn(message);

replayAll();
String result = instance.getResourceString(key);
verifyAll();

PowerMock can do lots more, such as mocking constructors and private method calls. More on that later perhaps…
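As a quick taste of the constructor mocking mentioned above (this sketch is not from the original post – ReportChecker is a hypothetical class whose checkReport() method internally calls new File("report.txt").exists()), PowerMock’s EasyMock API offers expectNew alongside the same statically imported createMock, replayAll and verifyAll:

import static org.easymock.EasyMock.expect;
import static org.junit.Assert.assertTrue;
import static org.powermock.api.easymock.PowerMock.createMock;
import static org.powermock.api.easymock.PowerMock.expectNew;
import static org.powermock.api.easymock.PowerMock.replayAll;
import static org.powermock.api.easymock.PowerMock.verifyAll;

import java.io.File;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

@RunWith(PowerMockRunner.class)
@PrepareForTest(ReportChecker.class) // prepare the class that calls the constructor
public class ReportCheckerTest {

    @Test
    public void shouldReportThatFileExists() throws Exception {
        File fileMock = createMock(File.class);

        // Intercept 'new File("report.txt")' inside checkReport() and return the mock
        expectNew(File.class, "report.txt").andReturn(fileMock);
        expect(fileMock.exists()).andReturn(true);

        replayAll();
        assertTrue(new ReportChecker().checkReport());
        verifyAll();
    }
}

Note that, as with mockStatic, @PrepareForTest names the class that performs the new call, not the class being constructed.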
Reference: Using PowerMock to Mock Static Methods from our JCG partner Roger at Captain Debug’s Blog.

OSGI and Spring Dynamic Modules – Simple Hello World

In this post, we’ll take the first implementation we made using OSGi and use Spring Dynamic Modules to improve the application. Spring Dynamic Modules (Spring Dm) makes the development of OSGi-based applications a lot easier. The deployment of services in particular becomes much simpler: you can inject services like any other Spring beans.

So let’s start with Spring Dm. First of all, you need to download the Spring Dm distribution. For this article, I used the distribution with dependencies, and I will only use these libraries:

com.springsource.net.sf.cglib-2.1.3.jar
com.springsource.org.aopalliance-1.0.0.jar
log4j.osgi-1.2.15-SNAPSHOT.jar
com.springsource.slf4j.api-1.5.0.jar
com.springsource.slf4j.log4j-1.5.0.jar
com.springsource.slf4j.org.apache.commons.logging-1.5.0.jar
org.springframework.aop-2.5.6.SEC01.jar
org.springframework.beans-2.5.6.SEC01.jar
org.springframework.context-2.5.6.SEC01.jar
org.springframework.core-2.5.6.SEC01.jar
spring-osgi-core-1.2.1.jar
spring-osgi-extender-1.2.1.jar
spring-osgi-io-1.2.1.jar

Of course, you can replace the Spring 2.5.6 libraries with the Spring 3.0 libraries, but for this article Spring 2.5.6 will be enough.

So, start with the service bundle. If we recall, this bundle exported a single service:

package com.bw.osgi.provider.able;

public interface HelloWorldService {
    void hello();
}

package com.bw.osgi.provider.impl;

import com.bw.osgi.provider.able.HelloWorldService;

public class HelloWorldServiceImpl implements HelloWorldService {
    @Override
    public void hello() {
        System.out.println("Hello World !");
    }
}

There are no changes to make here. Now, we can look at the activator:

package com.bw.osgi.provider;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

import com.bw.osgi.provider.able.HelloWorldService;
import com.bw.osgi.provider.impl.HelloWorldServiceImpl;

public class ProviderActivator implements BundleActivator {
    private ServiceRegistration registration;

    @Override
    public void start(BundleContext bundleContext) throws Exception {
        registration = bundleContext.registerService(
                HelloWorldService.class.getName(),
                new HelloWorldServiceImpl(),
                null);
    }

    @Override
    public void stop(BundleContext bundleContext) throws Exception {
        registration.unregister();
    }
}

Here we’ll keep things simple: let’s delete this class, as it is not useful anymore with Spring Dm. We’ll let Spring Dm export the service for us. We’ll create a Spring context for this bundle by creating a file provider-context.xml in the META-INF/spring folder. This is a simple XML context file, but we use a new namespace to register services: “http://www.springframework.org/schema/osgi”. So let’s start:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:osgi="http://www.springframework.org/schema/osgi"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
           http://www.springframework.org/schema/osgi
           http://www.springframework.org/schema/osgi/spring-osgi.xsd">

    <bean id="helloWorldService" class="com.bw.osgi.provider.impl.HelloWorldServiceImpl"/>

    <osgi:service ref="helloWorldService" interface="com.bw.osgi.provider.able.HelloWorldService"/>

</beans>

The only thing specific to OSGi is the osgi:service declaration.
This line indicates that we register the helloWorldService as an OSGi service, using the HelloWorldService interface as the name of the service. If you put the context file in the META-INF/spring folder, it will be automatically detected by the Spring extender and an application context will be created.

We can now go to the consumer bundle. In the first phase, we created this consumer:

package com.bw.osgi.consumer;

import javax.swing.Timer;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

import com.bw.osgi.provider.able.HelloWorldService;

public class HelloWorldConsumer implements ActionListener {
    private final HelloWorldService service;
    private final Timer timer;

    public HelloWorldConsumer(HelloWorldService service) {
        super();
        this.service = service;
        timer = new Timer(1000, this);
    }

    public void startTimer() {
        timer.start();
    }

    public void stopTimer() {
        timer.stop();
    }

    @Override
    public void actionPerformed(ActionEvent e) {
        service.hello();
    }
}

At this point, there are no changes to make here. Instead of constructor injection we could have used an @Resource annotation, but this doesn’t work with Spring 2.5.6 and Spring Dm (it works well with Spring 3.0). And now the activator:

package com.bw.osgi.consumer;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

import com.bw.osgi.provider.able.HelloWorldService;

public class HelloWorldActivator implements BundleActivator {
    private HelloWorldConsumer consumer;

    @Override
    public void start(BundleContext bundleContext) throws Exception {
        ServiceReference reference = bundleContext.getServiceReference(HelloWorldService.class.getName());
        consumer = new HelloWorldConsumer((HelloWorldService) bundleContext.getService(reference));
        consumer.startTimer();
    }

    @Override
    public void stop(BundleContext bundleContext) throws Exception {
        consumer.stopTimer();
    }
}

The injection is not necessary anymore. We could keep the start of the timer here, but once again, we can use the features of the framework to start and stop the timer. So let’s delete the activator and create an application context that creates the consumer and starts it automatically, and put it in the META-INF/spring folder:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:osgi="http://www.springframework.org/schema/osgi"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
           http://www.springframework.org/schema/osgi
           http://www.springframework.org/schema/osgi/spring-osgi.xsd">

    <bean id="consumer" class="com.bw.osgi.consumer.HelloWorldConsumer"
          init-method="startTimer" destroy-method="stopTimer" lazy-init="false">
        <constructor-arg ref="eventService"/>
    </bean>

    <osgi:reference id="eventService" interface="com.bw.osgi.provider.able.HelloWorldService"/>

</beans>

We used the init-method and destroy-method attributes to start and stop the timer with the framework, and we use constructor-arg to inject the reference to the service. The reference to the service is obtained using the osgi:reference element, with the interface as the key to the service. That’s all we have to do with this bundle. A lot simpler than the first version, isn’t it?
And more than the simplification, you can see that the sources no longer depend on either OSGi or the Spring Framework – this is plain Java, and that is a great advantage. The Maven POMs are the same as in the first phase, except that we can drop the dependency on the OSGi core. The provider:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>OSGiDmHelloWorldProvider</groupId>
    <artifactId>OSGiDmHelloWorldProvider</artifactId>
    <version>1.0</version>
    <packaging>bundle</packaging>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.0.2</version>
                <configuration>
                    <source>1.6</source>
                    <target>1.6</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.felix</groupId>
                <artifactId>maven-bundle-plugin</artifactId>
                <extensions>true</extensions>
                <configuration>
                    <instructions>
                        <Bundle-SymbolicName>OSGiDmHelloWorldProvider</Bundle-SymbolicName>
                        <Export-Package>com.bw.osgi.provider.able</Export-Package>
                        <Bundle-Vendor>Baptiste Wicht</Bundle-Vendor>
                    </instructions>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

The consumer:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>OSGiDmHelloWorldConsumer</groupId>
    <artifactId>OSGiDmHelloWorldConsumer</artifactId>
    <version>1.0</version>
    <packaging>bundle</packaging>
    <dependencies>
        <dependency>
            <groupId>OSGiDmHelloWorldProvider</groupId>
            <artifactId>OSGiDmHelloWorldProvider</artifactId>
            <version>1.0</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.0.2</version>
                <configuration>
                    <source>1.6</source>
                    <target>1.6</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.felix</groupId>
                <artifactId>maven-bundle-plugin</artifactId>
                <extensions>true</extensions>
                <configuration>
                    <instructions>
                        <Bundle-SymbolicName>OSGiDmHelloWorldConsumer</Bundle-SymbolicName>
                        <Bundle-Vendor>Baptiste Wicht</Bundle-Vendor>
                    </instructions>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

And we can build the two bundles using mvn install. So let’s test our stuff in Felix:

wichtounet@Linux-Desktop:~/Desktop/osgi/felix$ java -jar bin/felix.jar
_______________
Welcome to Apache Felix Gogo

g! install file:../com.springsource.slf4j.org.apache.commons.logging-1.5.0.jar
Bundle ID: 5
g! install file:../com.springsource.slf4j.log4j-1.5.0.jar
Bundle ID: 6
g! install file:../com.springsource.slf4j.api-1.5.0.jar
Bundle ID: 7
g! install file:../log4j.osgi-1.2.15-SNAPSHOT.jar
Bundle ID: 8
g! install file:../com.springsource.net.sf.cglib-2.1.3.jar
Bundle ID: 9
g! install file:../com.springsource.org.aopalliance-1.0.0.jar
Bundle ID: 10
g! install file:../org.springframework.core-2.5.6.SEC01.jar
Bundle ID: 11
g! install file:../org.springframework.context-2.5.6.SEC01.jar
Bundle ID: 12
g! install file:../org.springframework.beans-2.5.6.SEC01.jar
Bundle ID: 13
g! install file:../org.springframework.aop-2.5.6.SEC01.jar
Bundle ID: 14
g! install file:../spring-osgi-extender-1.2.1.jar
Bundle ID: 15
g! install file:../spring-osgi-core-1.2.1.jar
Bundle ID: 16
g! install file:../spring-osgi-io-1.2.1.jar
Bundle ID: 17
g! start 5 7 8 9 10 11 12 13 14 15 16 17
log4j:WARN No appenders could be found for logger (org.springframework.osgi.extender.internal.activator.ContextLoaderListener).
log4j:WARN Please initialize the log4j system properly.
g! install file:../OSGiDmHelloWorldProvider-1.0.jar
Bundle ID: 18
g! install file:../OSGiDmHelloWorldConsumer-1.0.jar
Bundle ID: 19
g! start 18
g! start 19
g! Hello World !
Hello World !
Hello World !
Hello World !
Hello World !
Hello World !
Hello World !
Hello World !
stop 19
g!

As you can see, it works perfectly! In conclusion, Spring Dm really makes development with OSGi easier. With Spring Dm you can also start bundles, create web bundles, and easily use the services of the OSGi compendium.

Here are the sources of the two projects:

OSGiDmHelloWorldProvider sources
OSGiDmHelloWorldConsumer sources

Here are the two built jars:

OSGiDmHelloWorldProvider-1.0.jar
OSGiDmHelloWorldConsumer-1.0.jar

And here is the complete folder including Felix and Spring Dm: osgi-hello-world.tar.gz

Reference: OSGI and Spring Dynamic Modules – Simple Hello World from our JCG partner Baptiste Wicht at @Blog(“Baptiste Wicht”).

OSGi – Simple Hello World with services

In this post, we’ll develop a simple Hello World application with OSGi. We will use Felix as the OSGi container. In the next post, we’ll continue with this application and use Spring Dynamic Modules to improve it. To make the development interesting, we will create two bundles:

- A bundle providing a HelloWorld service
- A bundle consuming the service to print hello at regular intervals

So let’s start with the first bundle. What we need first is a very simple service that prints to the console:

package com.bw.osgi.provider.able;

public interface HelloWorldService {
    void hello();
}

package com.bw.osgi.provider.impl;

import com.bw.osgi.provider.able.HelloWorldService;

public class HelloWorldServiceImpl implements HelloWorldService {
    @Override
    public void hello() {
        System.out.println("Hello World !");
    }
}

It couldn’t be simpler. Then, we need to export the service using an activator:

package com.bw.osgi.provider;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

import com.bw.osgi.provider.able.HelloWorldService;
import com.bw.osgi.provider.impl.HelloWorldServiceImpl;

public class ProviderActivator implements BundleActivator {
    private ServiceRegistration registration;

    @Override
    public void start(BundleContext bundleContext) throws Exception {
        registration = bundleContext.registerService(
                HelloWorldService.class.getName(),
                new HelloWorldServiceImpl(),
                null);
    }

    @Override
    public void stop(BundleContext bundleContext) throws Exception {
        registration.unregister();
    }
}

A lot more code here. For those who aren’t confident with OSGi, some explanations: the start method will be called when the bundle is started, and the stop method when it’s stopped. In the start method, we register our service in the bundle context, using the name of the interface as the name of the exported service. The third parameter, null, indicates that we don’t give any configuration for this service. In the stop method, we simply unregister the service.

Now our first bundle is ready and we can build it. For that we’ll use Maven and the maven-bundle-plugin to build the OSGi bundle directly. Here is the pom.xml file for the project (note the bundle packaging, which the maven-bundle-plugin needs):

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>OSGiDmHelloWorldProvider</groupId>
    <artifactId>OSGiDmHelloWorldProvider</artifactId>
    <version>1.0</version>
    <packaging>bundle</packaging>
    <dependencies>
        <dependency>
            <groupId>org.apache.felix</groupId>
            <artifactId>org.osgi.core</artifactId>
            <version>1.4.0</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.0.2</version>
                <configuration>
                    <source>1.6</source>
                    <target>1.6</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.felix</groupId>
                <artifactId>maven-bundle-plugin</artifactId>
                <extensions>true</extensions>
                <configuration>
                    <instructions>
                        <Bundle-SymbolicName>OSGiDmHelloWorldProvider</Bundle-SymbolicName>
                        <Export-Package>com.bw.osgi.provider.able</Export-Package>
                        <Bundle-Activator>com.bw.osgi.provider.ProviderActivator</Bundle-Activator>
                        <Bundle-Vendor>Baptiste Wicht</Bundle-Vendor>
                    </instructions>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

And then use mvn install to build it.
We’ll work in a folder called osgi, so we’ll copy this new bundle into the osgi folder. We can already test it in the OSGi container. If you don’t already have Felix, download it here (choose the “Felix Distribution”), then extract it to the osgi folder we created before. You must now have this folder structure:

osgi
- felix
- OSGiDmHelloWorldProvider-1.0.jar

So we can go into the felix folder and launch Felix:

wichtounet@Linux-Desktop:~/Desktop/osgi/felix$ java -jar bin/felix.jar
_______________
Welcome to Apache Felix Gogo

g!

And install our bundle:

g! install file:../OSGiDmHelloWorldProvider-1.0.jar
Bundle ID: 5

The bundle is now installed; we can start it and check its status:

g! start 5
g! bundle 5
Location file:../OSGiDmHelloWorldProvider-1.0.jar
State 32
Version 1.0.0
LastModified 1279124741320
Headers [Tool=Bnd-0.0.357, Bundle-Activator=com.bw.osgi.provider.ProviderActivator, Export-Package=com.bw.osgi.provider.able, Build-Jdk=1.6.0_20, Bundle-Version=1.0.0, Created-By=Apache Maven Bundle Plugin, Bundle-ManifestVersion=2, Manifest-Version=1.0, Bundle-Vendor=Baptiste Wicht, Bnd-LastModified=1279124686551, Bundle-Name=Unnamed - OSGiDmHelloWorldProvider:OSGiDmHelloWorldProvider:bundle:1.0, Built-By=wichtounet, Bundle-SymbolicName=OSGiDmHelloWorldProvider, Import-Package=com.bw.osgi.provider.able,org.osgi.framework;version="1.5"]
BundleContext org.apache.felix.framework.BundleContextImpl@2353f67e
BundleId 5
SymbolicName OSGiDmHelloWorldProvider
RegisteredServices [HelloWorldService]
ServicesInUse null

All is fine: our service is registered. Now we’ll try to consume this service in our second bundle. Our consumer class will be very simple:

package com.bw.osgi.consumer;

import javax.swing.Timer;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

import com.bw.osgi.provider.able.HelloWorldService;

public class HelloWorldConsumer implements ActionListener {
    private final HelloWorldService service;
    private final Timer timer;

    public HelloWorldConsumer(HelloWorldService service) {
        super();
        this.service = service;
        timer = new Timer(1000, this);
    }

    public void startTimer() {
        timer.start();
    }

    public void stopTimer() {
        timer.stop();
    }

    @Override
    public void actionPerformed(ActionEvent e) {
        service.hello();
    }
}

And now we must create the activator to get the service and give it to the consumer. That gives us something like this:

package com.bw.osgi.consumer;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

import com.bw.osgi.provider.able.HelloWorldService;

public class HelloWorldActivator implements BundleActivator {
    private HelloWorldConsumer consumer;

    @Override
    public void start(BundleContext bundleContext) throws Exception {
        ServiceReference reference = bundleContext.getServiceReference(HelloWorldService.class.getName());
        consumer = new HelloWorldConsumer((HelloWorldService) bundleContext.getService(reference));
        consumer.startTimer();
    }

    @Override
    public void stop(BundleContext bundleContext) throws Exception {
        consumer.stopTimer();
    }
}

We get a service reference from the bundle context using the name of the class; after that, we get the service instance from the bundle context.
We also create a little pom.xml file to build the bundle:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>OSGiDmHelloWorldConsumer</groupId>
    <artifactId>OSGiDmHelloWorldConsumer</artifactId>
    <version>1.0</version>
    <packaging>bundle</packaging>
    <dependencies>
        <dependency>
            <groupId>org.apache.felix</groupId>
            <artifactId>org.osgi.core</artifactId>
            <version>1.0.0</version>
        </dependency>
        <dependency>
            <groupId>OSGiDmHelloWorldProvider</groupId>
            <artifactId>OSGiDmHelloWorldProvider</artifactId>
            <version>1.0</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.0.2</version>
                <configuration>
                    <source>1.6</source>
                    <target>1.6</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.felix</groupId>
                <artifactId>maven-bundle-plugin</artifactId>
                <extensions>true</extensions>
                <configuration>
                    <instructions>
                        <Bundle-SymbolicName>OSGiDmHelloWorldConsumer</Bundle-SymbolicName>
                        <Bundle-Activator>com.bw.osgi.consumer.HelloWorldActivator</Bundle-Activator>
                        <Bundle-Vendor>Baptiste Wicht</Bundle-Vendor>
                    </instructions>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

Then we use mvn install to create the bundle, and we install and start it in the container:

g! start 6
g! Hello World !
Hello World !
Hello World !
Hello World !
Hello World !
Hello World !
Hello World !
Hello World !
Hello World !
Hello World !
Hello World !
Hello World !
Hello World !
Hello World !
g! stop 6

And here we are: we’ve created our first application using OSGi. With that technology you can build truly independent modules. In the next post about OSGi, we’ll see how to use Spring to make OSGi development easier and to concentrate our effort on the application, not on OSGi. The sources of the two bundles are available here:

OSGiDmHelloWorldProvider sources
OSGiDmHelloWorldConsumer sources

Reference: OSGi – Simple Hello World with services from our JCG partner Baptiste Wicht at @Blog(“Baptiste Wicht”).

High Performance JPA with GlassFish and Coherence – Part 3

In this third part of my four-part series I'll explain strategy number two for using Coherence with EclipseLink and GlassFish: using Coherence as a second-level (L2) cache with EclipseLink.

General approach

This approach applies the Coherence data grid to JPA applications that rely on database-hosted data that cannot be entirely pre-loaded into a Coherence cache. Some reasons why the data might not be pre-loadable include extremely complex queries that exceed the feature set of Coherence filters, third-party database updates that create stale caches, reliance on native SQL queries, stored procedures or triggers, and so on. This is not only an option for local L2 caches: with additional Coherence instances configured on different nodes, you also get a cluster-wide JPA L2 cache.

Details

As with many caches, this is a read-mostly optimization. Primary-key queries attempt to get entities from Coherence first and, if unsuccessful, query the database, updating Coherence with the results. Non-primary-key queries are executed against the database, and the results are checked against Coherence to avoid object-construction costs for entities that are already cached. Newly queried entities are put into Coherence. Write operations update the database and, if the transaction commits successfully, the updated entities are put into Coherence. This approach is called "Grid Cache" in the Coherence documentation.

Move it into practice

Start with the previous blog post and prepare your environment, if you have not already done so. There is a single thing you need to change: go back to GlassFish 3.0.1 / EclipseLink 2.0.1 for this scenario, because there is a problem with the CacheKey.getKey() method. EclipseLink 2.0.1 returns a Vector; 2.2.0 simply returns an Object. Seeing that the new Oracle GlassFish Server 3.1 has support for ActiveCache, I expect this to be fixed with the 3.7 Coherence release, but until then you have to stick with the older GlassFish or EclipseLink.

Anyway, let's create a new web project with your favorite IDE (e.g. GridCacheExample) and add the required libraries (coherence.jar, toplink-grid.jar and eclipselink.jar). Now let's create our entity class and add the extra @CacheInterceptor annotation to it:

```java
...
import oracle.eclipselink.coherence.integrated.cache.CoherenceInterceptor;
import org.eclipse.persistence.annotations.CacheInterceptor;
...

@Entity
@CacheInterceptor(value = CoherenceInterceptor.class)
public class Employee implements Serializable {
...
}
```

Don't forget to add @GeneratedValue(strategy = GenerationType.SEQUENCE), which is needed here, in contrast to the last example. After this is done, you have to add the Coherence configuration to your WEB-INF/classes folder. You can start from the tutorial (Example 2); be careful, there is a typo in it: a duplicate </backing-map-scheme> tag. Configure your persistence.xml as you would for a normal JPA-based application:

```xml
<persistence-unit name="GridCacheExamplePU" transaction-type="JTA">
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    <jta-data-source>jdbc/coherence</jta-data-source>
    <properties>
        <property name="eclipselink.ddl-generation" value="drop-and-create-tables"/>
        <property name="eclipselink.logging.level" value="FINE"/>
    </properties>
</persistence-unit>
```

That's it, basically. Now you can test your new L2 cache.
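If you first want to confirm that your application can reach the data grid at all, you can talk to the cache directly through the plain Coherence API, independently of JPA. This is a hedged sketch, not part of the original walkthrough; the cache name "Employee" follows the entity-name convention the interceptor uses:

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

// Hypothetical smoke test: look up the entity cache by name and print its size.
public class CacheSmokeTest {
    public static void main(String[] args) {
        NamedCache employees = CacheFactory.getCache("Employee");
        System.out.println("Employee cache reachable, size: " + employees.size());
        CacheFactory.shutdown();
    }
}
```

With that confirmed, on to the JPA-level test.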
A simple servlet should do the trick:

```java
public class InsertServletPart3 extends HttpServlet {

    @PersistenceUnit(unitName = "GridCacheExamplePU")
    EntityManagerFactory emf;

    @Resource
    UserTransaction tx;

    ...

    // inside the request-handling method:
    EntityManager em = emf.createEntityManager();
    tx.begin();
    // some loop magic
    Employee employee = new Employee();
    employee.setFirstName("Markus");
    employee.setLastName("Eisele");
    em.persist(employee);
    // some loop magic end
    tx.commit();
    em.close();
```

If you watch the log, you can see something like this:

```
FEIN: INSERT INTO EMPLOYEE (LASTNAME, FIRSTNAME) VALUES (?, ?)
    bind => [Eisele, Markus]
...
FEIN: Coherence(Employee)::Put: 1 value: net.eisele.coherence.entities.Employee[ id=1 ]
...
```

This basically tells you that the actual database insert is carried out by EclipseLink, as you are used to. After that, you can see that the Employee object is put into the Coherence cache named Employee with the primary key as its key. If you now issue a query against the database:

```java
em.createQuery("select e from Employee e where e.lastName = :lastName")
  .setParameter("lastName", "Eisele")
  .getResultList();
```

you see the following:

```
FEIN: SELECT ID, LASTNAME, FIRSTNAME FROM EMPLOYEE WHERE (LASTNAME = ?)
    bind => [Eisele]
FEIN: Coherence(Employee)::Get: 1 result: net.eisele.coherence.entities.Employee[ id=1 ]
FEIN: Coherence(Employee)::Put: 1 value: net.eisele.coherence.entities.Employee[ id=1 ]
...
```

This tells you that the query itself is issued against the database, but the results are checked against Coherence to avoid object construction for entities that are already cached. Newly queried entities are put into Coherence. If you issue a simple primary-key query:

```java
em.find(Employee.class, 1);
```

the output changes to:

```
FEIN: Coherence(Employee)::Get: 1 result: net.eisele.coherence.entities.Employee[ id=1 ]
```

and you don't see any database query at all. That's it :) Your cache works! Thanks for reading. Stay tuned for the next part!

Further Reading
- OTN How-To: Using Coherence as a Shared L2 Cache
- Integration Guide for Oracle TopLink with Coherence Grid 11g Release 1 (11.1.1)

Reference: High Performance JPA with GlassFish and Coherence – Part 3 from our JCG partner Markus Eisele at the "Enterprise Software Development with Java" blog.

Related Articles:
- High performance JPA with GlassFish and Coherence – Part 1
- High Performance JPA with GlassFish and Coherence – Part 2
- Developing and Testing in the Cloud
- Configuration Management in Java EE
- Leaked: Oracle WebLogic Server 12g
- Java EE6 Decorators: Decorating classes at injection time
- Java Tutorials and Android Tutorials list

Using MongoDB with Morphia

In the past few years, NoSQL databases like CouchDB, Cassandra and MongoDB have gained some popularity for applications that don't require the semantics and overhead of running a traditional RDBMS. I won't get into the design decisions that go into choosing a NoSQL database, as others have done a good enough job already, but I will relate my experience with MongoDB and share some tricks for using it effectively in Java.

I recently had a chance to work with MongoDB (as in humongous), a document-oriented database written in C++. It is ideal for storing documents that may vary in structure, and it uses a format similar to JSON, which means it supports similar data types and structures to JSON. It provides a rich yet simple query language and still allows us to index key fields for fast retrieval. Documents are stored in collections, which effectively limit the scope of a query, but there is really no limitation on the types of heterogeneous data that you can store in a collection. The MongoDB site has decent docs if you need to learn the basics of MongoDB.

MongoDB in Java

The Mongo Java driver basically exposes all documents as key-value pairs through a map interface, plus lists of values. This means that if we have to store or retrieve documents in Java, we have to map our POJOs to and from that interface. Below is an example of the type of code we would normally have to write to save a document to MongoDB from Java:

```java
BasicDBObject doc = new BasicDBObject();
doc.put("user", "carfey");

BasicDBObject post1 = new BasicDBObject();
post1.put("subject", "spam & eggs");
post1.put("message", "first!");

BasicDBObject post2 = new BasicDBObject();
post2.put("subject", "sorry about the spam");

doc.put("posts", Arrays.asList(post1, post2));

coll.insert(doc);
```

This is fine for some use cases, but for others it would be better to have a library do the grunt work for us.

Enter Morphia

Morphia is a Java library which acts sort of like an ORM for MongoDB: it allows us to seamlessly map Java objects to the MongoDB datastore. It uses annotations to indicate which collection a class is stored in, and it even supports polymorphic collections. One of the nicest features is that it can automatically index your collections based on your collection- or property-level annotations. This greatly simplifies deployment and rolling out changes.

I mentioned polymorphic storage of multiple types in the same collection. This can help us map varying document structures, and it acts somewhat like a discriminator in something like Hibernate. Here's an example of how to define entities which will support polymorphic storage and querying. The Return class is a child of Order and references the same collection name; Morphia will automatically handle the polymorphism when querying or storing data. You would do pretty much the same thing to annotate collections that aren't polymorphic, but you wouldn't have multiple classes using the same collection name.

Note: This isn't really an example of the type of data I would recommend storing in MongoDB, since it is more suited to a traditional RDBMS, but it demonstrates the principles nicely.

```java
@Entity("orders") // store in the orders collection
@Indexes({ @Index("-createdDate, cancelled") }) // multi-column index
public class Order {

    @Id
    private ObjectId id; // always required

    @Indexed
    private String orderId;

    @Embedded // lets us embed a complex object
    private Person person;

    @Embedded
    private List<Item> items;

    private Date createdDate;
    private boolean cancelled;

    // getters and setters aren't strictly required
    // for mapping, but they would be here
}

@Entity("orders") // note the same collection name
public class Return extends Order {

    // maintain the legacy name but name it nicely in MongoDB
    @Indexed
    @Property("rmaNumber")
    private String rma;

    private Date approvedDate;
    private Date returnDate;
}
```
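Storing these entities then requires no mapping code at all. The following is a hedged sketch rather than code from the original article; it assumes ds is a configured Morphia Datastore, set up as shown in the registration example further below:

```java
// Hedged sketch: 'ds' is a Morphia Datastore, configured as shown
// in the registration example later in this article.
Order order = new Order();
// ... populate person, items and createdDate ...
ds.save(order);

Return ret = new Return();
// ... populate fields, including the RMA number ...
ds.save(ret); // stored in the same "orders" collection with its className
```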
Now, below I will demonstrate how to query those polymorphic instances. Note that we don't have to do anything special when storing the data: MongoDB stores a className attribute along with the document so it can support polymorphic fetches and queries. Following the example above, I can query for all order types by doing the following:

```java
// ds is a Datastore instance
Query<Order> q = ds.createQuery(Order.class).filter("createdDate >=", date);
List<Order> ordersAndReturns = q.asList();

// and returns only
Query<Return> rq = ds.createQuery(Return.class).filter("createdDate >=", date);
List<Return> returnsOnly = rq.asList();
```

If I only want to query plain orders, I have to use a className filter, as follows. This allows us to effectively disable the polymorphic behaviour and limit results to a single target type.

```java
Query<Order> q = ds.createQuery(Order.class)
    .filter("createdDate >=", cutOffDate)
    .filter("className", Order.class.getName());

List<Order> ordersOnly = q.asList();
```

Morphia currently uses the className attribute to filter results, but at some point in the future it is likely to use a discriminator column, in which case you may have to filter on that value instead.

Note: At some point during startup of your application, you need to register your mapped classes so they can be used by Morphia. See here for full details. A quick example is below.

```java
Morphia m = ...
Datastore ds = ...

m.map(MyEntity.class);
ds.ensureIndexes(); // creates all indexes defined with @Indexed
ds.ensureCaps();    // creates all collections for @Entity(cap=@CappedAt(...))
```

Problems with Varying Structure in Documents

One of the nice features of document-oriented storage in MongoDB is that it allows you to store documents with different structures in the same collection, while still performing structured queries and indexing values for good performance. Morphia unfortunately doesn't really like this, as it is meant to map all stored attributes to known POJO fields. There are currently two ways I've found to deal with this. The first is disabling validation on queries, which means that values which exist in the datastore but can't be mapped to our POJOs are dropped quietly rather than blowing up:

```java
// drop unmapped fields quietly
Query<Order> q = ds.createQuery(Order.class).disableValidation();
```

The other option is to store all unstructured content under a single bucket element using a Map. This can contain any basic types supported by the MongoDB driver, including Lists and Maps, but no complex objects unless you have registered converters with Morphia (e.g. morphia.getMapper().getConverters().addConverter(new MyCustomTypeConverter())).

```java
@Entity("orders")
public class Order {

    // .. our base attributes here

    private Map<String, Object> attributes; // bucket for everything else
}
```

Note that Morphia may complain on startup that it can't validate the field (since the generics declaration is not strict), but as of the current release version (0.99) it works with no problem, storing any attributes normally and retrieving them as maps and lists of values.

Note: When it populates a loosely-typed map from a retrieved document, Morphia uses the basic MongoDB Java driver types BasicDBObject and BasicDBList. These implement Map and List respectively, so they will work pretty much as you expect, except that they will not be equals() to any input maps or lists you may have stored, even if the structure and content appear to be equal. If you want to avoid this, you can use the @PostLoad annotation to annotate a method which performs normalization to JDK maps and lists after the document is loaded. I personally did this to ensure we always see a consistent view of MongoDB documents, whether they are pulled from a collection or not yet persisted.
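A hedged sketch of what such a @PostLoad hook might look like, assuming the pre-1.0 Morphia package layout (com.google.code.morphia.annotations); the method body is illustrative, not from the original article:

```java
import java.util.HashMap;
import java.util.Map;

import org.bson.types.ObjectId;

import com.google.code.morphia.annotations.Entity;
import com.google.code.morphia.annotations.Id;
import com.google.code.morphia.annotations.PostLoad;

@Entity("orders")
public class Order {

    @Id
    private ObjectId id;

    // ... base attributes as shown earlier ...

    private Map<String, Object> attributes; // loosely-typed bucket

    // Invoked by Morphia after a document is loaded: copy the driver's
    // BasicDBObject-backed map into a plain JDK HashMap. A real version
    // would also recurse into nested BasicDBObject/BasicDBList values.
    @PostLoad
    void normalizeAttributes() {
        if (attributes != null) {
            attributes = new HashMap<String, Object>(attributes);
        }
    }
}
```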
Reference: Using MongoDB with Morphia from our JCG partners at the Carfey Software blog.

Related Articles:
- Cassandra vs MongoDB vs CouchDB vs Redis vs Riak vs HBase comparison
- Java Code Geeks Andygene Web Archetype
- Java Tutorials and Android Tutorials list
- The top 9+7 things every programmer or architect should know