Web development frameworks – part 3 : Ruby on Rails

The next runner is Ruby on Rails v3. Unless you have been living under a bucket without an RJ45 port (yes, some buckets have Internet access), you have probably heard of Ruby on Rails (RoR); it's a very popular framework with lots of momentum. RoR started as an extract of the Basecamp product at 37signals and is now used in all sorts of web apps. I used RoR 3.2 on a Fedora 16 based workstation for this review. All the base documentation was taken from the official RoR site. Anyway, let's get on with it.

Install the framework in a development workstation

Two basic steps and you are good to go:
- Install Ruby and RubyGems (using yum, apt-get, or just download them from the RoR site)
- Install Rails using gems (# gem install rails)

On a side note, I had to install a few dependencies I was missing (e.g. sqlite-devel) that were not tagged as dependencies for either ruby or rubygems, but without them the rails command fails.

Setup a development environment using a free IDE, application server and database

You don't really need much more than a good text editor for RoR. I'm using the Sublime Text 2 editor (which is very nice to use and has nice Ruby coloring/autocomplete).

Develop the "Hello world" or similar sample outlined by the framework's tutorial

To create my "Hello world" I followed the official getting started guide. I had a few minor issues getting my first app working, probably because of the noob component. Following the steps outlined on the RoR homepage to get the first project working generated an error (that I needed a JavaScript engine installed). Using the link given by the error, I decided to install the therubyracer engine, which in turn failed to install because I didn't have a C++ compatible compiler; so going back to dear old yum I installed the gcc-c++ package and the JavaScript engine finally accepted my PC.
But this was not the end; the rails server command still failed, complaining that I didn't have a JavaScript engine (wasn't this a developer friendly framework? :P). After a few web searches I found a post about this issue and voila! http://localhost:3000 rendered the start page for my sparkling new RoR app. Going from the "vanilla" project to a "Hello RoR!" status was quite easy: just one command line to generate my first controller and view, edit the view file, and set the new default route in the routes.rb file. Note that there is an error in the tutorial if you use the latest Rails version: when you add the tags support to the Post model you need to add the :tags_attributes variable to the attr_accessible line or you will get a security error (apparently this was optional before, but now it's mandatory).

Modify the sample app to perform a specific database query over a custom structure and display the results

Active Record, the default ORM library provided by Rails, is quite powerful and easy to use. You can perform all the stuff you would expect using the provided Ruby API, and you can resort to your own custom SQL commands if you want/need to. Adding a custom SQL query to the tutorial sample was really easy, even more so if the result maps to an already existing model entity.

Add a dependency to a third party library and perform a computation using it in our app

Using RubyGems as the backbone of the dependency management makes using third party libraries in Rails quite simple. The community is huge and very active; just out of curiosity I ran $ gem query --remote | wc -l and got almost 39,000 items listed.

Develop a "Hello world" REST service

The REST concept is core to Rails, so you don't really need to do anything special to support the HTTP verbs. Formatting your output as JSON is also a built-in feature of the framework. Anyway, developing a new REST based web service is just a matter of minutes.
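The custom-query exercise described above is really framework-agnostic: run raw SQL and map the rows onto model objects, which is what Active Record does for you. As a minimal sketch of that idea, here is an illustration using Python's built-in sqlite3 module (the table, data, and query are invented for the example, not from the Rails tutorial):

```python
import sqlite3

# In-memory database standing in for the app's development database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT, views INTEGER)")
conn.executemany("INSERT INTO posts (title, views) VALUES (?, ?)",
                 [("Hello RoR!", 42), ("Routing basics", 7), ("Gems galore", 19)])

# A custom SQL query; each row maps straight onto a "model" dict,
# much like an ORM maps raw query results onto model objects.
rows = conn.execute(
    "SELECT title, views FROM posts WHERE views > ? ORDER BY views DESC", (10,))
posts = [{"title": t, "views": v} for t, v in rows]
print(posts)  # the two posts with more than 10 views, most viewed first
```

The pleasant part, as noted above, is that when the result set maps onto an existing entity you get usable objects back with no extra glue code.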
Consume our own service from our app

You can use Active Resource to map RESTful resources to Rails models and then use them as any other model; it's really cool :) There are also other options, like straight HTTP fetching using the facilities Ruby provides, or some specific gems like HTTParty.

Evaluation

Learning curve: GOOD
Learning RoR falls definitely on the easy side; if you already know Ruby and have a working knowledge of web development it's a piece of cake. In my case, I had no clue about Ruby but found it similar enough to other languages I have worked with, so I felt right at home in a few hours.

Development performance for simple tasks: GOOD
If you consider all the work the generators do for you, add the thousands of gems already available, and finish it off with the really heavy use of convention over configuration that the framework promotes, you will agree that developing normal stuff is fast, and I mean Speedy Gonzales fast.

Development performance for complex/singular tasks: GOOD
Rails helps by not getting too much in the way when you want to develop something the framework doesn't really foresee. So it basically falls to Ruby itself, and I found it complete enough to say you will probably manage to solve your business/domain specific problems with the same level of complexity and frustration you would find in most other broadly used languages/frameworks.

Dependency management: MEDIUM
The gems and plugins support is good enough; it provides the things you need to keep your project under control and use external libraries. It does not provide all the features and customization tools that Maven does, but it probably fits most projects' needs.
Code performance/security tuning capabilities: MEDIUM
I probably lack enough experience to pass judgment on this point; from my initial review and the apps I coded, it seems RoR does a lot for you, but it's not that easy to get into its guts to fine tune it for security and performance, especially for mission critical and/or enterprise class applications. But I will probably revise this statement after I deploy a few RoR apps myself in the real world with a considerable load.

Platform scaling/redundancy capabilities: MEDIUM
I will get a few insults for this, but being a scripted language that doesn't run inside a controlled, memory managed VM makes Ruby, and therefore Rails, an inferior contender for really big deployments compared to Java or .NET. I'm not saying you can't deploy big RoR apps that serve thousands of users (I know there are real world examples of this); I just think that for the average IT crew it's easier to control the scaling of a Java based app than a Rails one.

Acceptance in corporate markets: BAD
This is pure perception, but at least amongst my company's customers Ruby isn't even on the map. You tell them the next software you provide will run under Ruby and the best thing you can expect to hear is "Whaaaat?".

Complexity of developing and consuming SOAP and REST services: GOOD
Developing and consuming REST services under RoR is a piece of cake. Using SOAP requires a few more steps, but it's easy enough too.

TL;DR
Ruby on Rails v3 is easy to learn and use, provides amazing development speed for web based applications, and has a large and active community behind it that will help anyone taking their first steps solve most problems.
From our evaluation perspective, the main drawback of RoR is the lack of acceptance in the corporate sector. Even though Ruby and RoR are now known by most people in the web development world, they are not yet accepted as serious platforms for mission critical solutions in sectors like banking, telecommunications, and the like (at least on the south half of the world :)). So if your boss is OK with it, or if you are developing an independent solution, I think RoR is an amazing option. And as time passes and more people use it, I'm sure it will become a serious contender against Java and .NET. Reference: Web development frameworks – part 3 : Ruby on Rails from our JCG partner Ricardo Zuasti at the Ricardo Zuasti's blog.

Web development frameworks – part 4 : Django

This is a part of my web frameworks review series. Check it out if you haven't already. Moving on to Django, the Python based all star. Django was created by the folks at the Lawrence Journal-World and released to the public in 2005. It's very active and has a strong group of followers; the framework is currently in its 1.4 incarnation, and the last release was done in March 2012. For the purpose of this review I used Django 1.4 and Python 2.7 on a Fedora 17 based workstation. It's worth noting I'm almost a complete newbie in both Python and Django; to learn my way around I followed the official Django tutorial.

Install the framework in a development workstation

Both Python and Django were provided by my Linux distribution in the built-in repositories, so all I had to do was pull them using yum and I was good to go. Easiest installation so far!

Setup a development environment using a free IDE, application server and database

Any text editor will do. I use Sublime Text 2, but your choice won't really affect your project's structure or how you develop it.

Develop the "Hello world" or similar sample outlined by the framework's tutorial

The tutorial walks you through the creation of a web polls app, with user and admin front-ends. It actually starts with the admin section (since Django auto-generates most of it), so it's not your typical "1 min to get a Hello World! app" tutorial, but it's easy to follow and probably better aligned to a real world app than a page that just says something :P Anyway, the first steps into the framework are simple enough to follow, even by someone who has never developed in Python before. There is some stuff I find a little more complicated than necessary. Take the URL configuration for example.
The tutorial suggests using the following snippet as a URL configuration basis:

urlpatterns = patterns('',
    url(r'^polls/$', 'polls.views.index'),
    url(r'^polls/(?P<poll_id>\d+)/$', 'polls.views.detail'),
    url(r'^polls/(?P<poll_id>\d+)/results/$', 'polls.views.results'),
    url(r'^polls/(?P<poll_id>\d+)/vote/$', 'polls.views.vote'),
    url(r'^admin/', include(admin.site.urls)),
)

Compare this to a feature equivalent routes configuration in Play Framework:

GET   /forms/:id        controllers.Forms.index(id: String, page: java.lang.Integer = 1)
GET   /forms/:id/:page  controllers.Forms.index(id: String, page: java.lang.Integer)
POST  /forms            controllers.Forms.save()

I get that a regular expression is more powerful, but the toll on readability and ease of use is not worth it, IMHO. The Django version just looks messy compared to Play or Ruby on Rails.

Modify the sample app to perform a specific database query over a custom structure and display the results

Writing raw SQL in Django is quite simple, extremely simple actually, if your query returns something that you can map to one of your model entities. If not, you still have the option to execute a custom SQL statement and iterate over its results in a cursor based fashion.

Add a dependency to a third party library and perform a computation using it in our app

Pluggable components in Django are called applications; you can find a repository at Django Packages. Including an app in your site is not complicated, though the "feeling" I got is that they tend to get a little more coupled with the site than Java libraries (JARs) or Ruby gems.

Develop a "Hello world" REST service

Even though Django is not as REST oriented as Rails or Play, developing a REST service is just as simple as a "human readable" one.
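As an aside on the routing comparison above: the exact regular expressions from the urlpatterns snippet can be exercised with Python's re module, which is essentially what Django's URL resolver does with them. The resolve helper below is a simplified stand-in for illustration, not Django's actual API:

```python
import re

# The same patterns as in the tutorial's URL configuration.
urlpatterns = [
    (re.compile(r'^polls/$'), 'polls.views.index'),
    (re.compile(r'^polls/(?P<poll_id>\d+)/$'), 'polls.views.detail'),
    (re.compile(r'^polls/(?P<poll_id>\d+)/results/$'), 'polls.views.results'),
    (re.compile(r'^polls/(?P<poll_id>\d+)/vote/$'), 'polls.views.vote'),
]

def resolve(path):
    # Return (view name, captured kwargs) for the first matching pattern.
    for pattern, view in urlpatterns:
        m = pattern.match(path)
        if m:
            return view, m.groupdict()
    return None, {}

print(resolve('polls/5/results/'))  # ('polls.views.results', {'poll_id': '5'})
print(resolve('polls/abc/'))        # (None, {}) because \d+ rejects non-digits
```

The named group (?P<poll_id>...) is how the matched id ends up as a keyword argument to the view function.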
I took the polls list view from the tutorial sample and converted it to a service returning JSON formatted output in a few seconds, basically changing:

return render_to_response('polls/index.html', {'latest_poll_list': latest_poll_list})

to

return HttpResponse(serializers.serialize('json', latest_poll_list))

Consume our own service from our app

Python itself provides libraries that allow you to consume a REST service and decode its output; to try it out I used urllib2 and json. The API is what you would expect of it: get your data from an HTTP resource and then feed it to the JSON parser to obtain a key/value matrix.

Evaluation

Learning curve: MEDIUM
Getting started with Django is not hard at all, and I bet that if you are familiar with Python it's even easier. But in my opinion the learning curve of Django is steeper than RoR's and Play's, while it provides similar framework capabilities and goals.

Development performance for simple tasks: MEDIUM
Same argument as before: using Django is easy enough, but it's not as easy as other frameworks. The code you write is not as simple as you would want, and the development mechanics don't feel quite as fluid.

Development performance for complex/singular tasks: GOOD
Doing custom stuff feels natural in Django. Python is a very powerful language with tons of libraries, and using them from your app is completely friction-less.

Dependency management: MEDIUM
The Django project names its reusable components "apps"; you can find a bunch at Django Packages. Using an app within your project is not hard, but it feels a little more coupled than it should. Additionally, the app handling and versioning system lacks (or at least I couldn't find) some features you grow accustomed to if you come from RoR gems or Maven in the Java world, like automatic version handling, deployment management, profiles, etc.
Code performance/security tuning capabilities: TBD
Coming from a Java background, I always feel reluctant at the sight of non-VM backed applications, because in my experience they usually tend to be harder to tune in terms of performance and scalability. Nonetheless, I would prefer to have some real life experience deploying Django and Python based apps into production before passing judgment on this point.

Platform scaling/redundancy capabilities: TBD
Check out the previous item; exactly the same applies here.

Acceptance in corporate markets: BAD
Python usually rings more bells in corporate IT environments than Ruby, but it still has a lot of road ahead to be accepted as a viable platform to develop and deploy mission critical web applications. Right now, in the corporate mindset, Python is a nice little scripting language, good for writing backup scripts and even some harmless internal web app, but not the corporate e-commerce or home banking solution.

Complexity of developing and consuming SOAP and REST services: GOOD
Not much to add here to what I already said before: it's simple to both provide and consume web services from Django. Python provides the basic tools and they are not hard to use at all.

TL;DR
Django is a good and solid web framework, but honestly it doesn't bring anything to the table to make it a better choice than Ruby on Rails or Play. Having said that, it works, it's fast and it's easy, so if you are seasoned in Python or want to be, then by all means don't hold back and use Django. In terms of our internal evaluation, Django is neither a better choice than the other frameworks, nor is it accepted enough in corporate markets to be an added marketing value. Reference: Web development frameworks – part 4 : Django from our JCG partner Ricardo Zuasti at the Ricardo Zuasti's blog.

Common sense and Code Quality

If you are involved in a software project (as an individual coder, technical team lead, architect or project manager), chances are that code quality might not be the first thing on your mind. However, the truth is, it needs to be on everyone's radar. It is one of those things that needs a well thought out strategy and continued focus throughout the project's lifecycle. Otherwise it simply spirals out of control and comes back to bite when the project can ill afford a quality issue. This article takes a very simplistic and common sense approach to code quality. The intent is to demystify code quality and help project teams pick the processes and tools that make sense to them. Just to contain the scope of the article, I have restricted the rest of the discussion to a Java / J2EE based technology project in an enterprise scenario. The basic definition of quality, and the ways to ensure it, should be similar in technology projects using other technology stacks and operating outside the corporate world, e.g. in the open source arena.

Who should care about code quality?

Let's start with a quick questionnaire:
- Do you deliver and / or review code written in Java?
- Do you manage / update / configure any 3rd party product written in Java?
- Do you contribute code to any Java project which has legacy code?
- Do you contribute code to any Java project which has a sizeable number of classes (say more than 100) and you want to have a grasp on the interdependence of those classes?
- Are you interested in assessing if there are structural issues in a given Java project?

If the answer is yes to any / many of these questions, you should care about code quality. The truth of the matter is that you might not have realized it yet, and code quality (measuring, ensuring, delivering) might not show up as a distinct item in your role and responsibilities. But it is only a matter of time before it catches up and causes grief if left unaddressed. It is a much better approach to handle this monster proactively.
What is high quality code anyway?

If you google it or discuss this, you generally get two types of answers. The first type is the generic *ity stuff (Flexibility, Reusability, Portability, Maintainability, Reliability, Testability etc.). While they are important, it is not always clear how exactly to measure them and how exactly to improve them. The second type is highly specific technical parameters, e.g. cyclomatic complexity, afferent coupling, efferent coupling etc. There are well documented mathematical formulae to calculate these parameters, software that will calculate them for you, and it is relatively easy to get to a concrete actionable that will improve these numbers. However, converting the improvement in numbers to improvement in code quality remains a specialized skill. So, net net, there is no easy answer. Let's try to change that. Let's put together a series of questions that, from common sense, anyone in a team that writes / maintains a high quality code base should be able to answer in the affirmative.

Question 1: Are you confident that as you add new code, none of the existing, working functionality will break?

Do you / your team check in code? I think it is safe to assume yes. Does an average developer on your team check in code more than once a day? Let's assume yes. Is it possible for an average developer on your team, on an average day, to know at the top of his head what all the other developers have checked in and how those code snippets are supposed to work? No. Even if you have all Newtons and Einsteins on your team, it is an emphatic no. So, how do you ensure that as the coders are frantically churning out code, they are not actually breaking more than they are creating? The answer should be unit testing. Cover as much code as you can by unit tests. (If your answer is something else, could you comment about it on the article, please? I would love to hear about your suggestion.)
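The unit-testing safety net is language-independent, so here is a minimal sketch of the idea using Python's unittest as a stand-in for JUnit/TestNG. The function under test is invented for the example:

```python
import unittest

def apply_discount(price, percent):
    """Existing, working functionality that must not break."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTest(unittest.TestCase):
    # Runs on every check-in: a careless change to apply_discount
    # fails the build before it reaches anyone else.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

The value is not in any single test but in the whole suite running automatically, which is exactly what the CI setup described below provides.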
Have an automated way of reporting to everyone in the team on the success of all unit tests every morning. If unit tests are broken, fixing them gets the highest priority for the day. Also have an automated report to everyone in the team every morning on the code coverage percentage. Ideally the code coverage percentage should increase in every report. At the very least it should remain the same. If it goes down in any report, halt everything and investigate. My common sense says that this has to be the most important code quality measure and process. (Again, if you have a different opinion, please leave a comment.) Fortunately, sorting out this bit is comparatively easy. Just use this toolset:
- Unit testing framework: JUnit, TestNG
- Unit test coverage tool: EclEmma, Cobertura
- A build tool: Maven, Ant
- A continuous integration tool: Jenkins, TeamCity
- A web dashboard for the report: Sonar

I am not saying this is the single / best answer. All I am saying is, if you don't have a better answer, this answer is easy, free and it works. One note of caution: many times, when teams start with this, someone googles around and finds out that good quality products are supposed to have 80% unit test coverage. In comparison, the product turns out to be in a much worse state. This has many implications, including morale and political issues. It is important to emphasize here that 80% code coverage in isolation does not guarantee anything. What is really important is to get a working process in place and continuously improve the test coverage.

Question 2: As you add new code, are you sure you are not committing the same silly mistakes that coders generally make? E.g. did you free up all resources in the finally block?

Anyone who codes commits mistakes. You are lucky if the compiler catches them for you and spits out a stack trace. But what about those that the compiler does not catch, but that the coding community knows from experience to be bad code?
If you worked on banking software a decade ago, the only way to catch the silly mistakes was to have someone senior from the team review your code. Things have not changed much. You should still have an extra pair of eyes look at your code and design. But luckily there is some help as well. You could use this toolset:
- A source code analyzer: PMD, Checkstyle, FindBugs, Crap4j
- A build tool: Maven, Ant
- A continuous integration tool: Jenkins, TeamCity
- A web dashboard for the report: Sonar

Again, I am not saying this is the single / best answer. All I am saying is, if you don't have a better answer, this answer is easy, free and it works. One note of caution: most of the projects which start with these are inundated with hundreds (if not thousands) of items flagged by these source code analyzers. It is very important to spend some time upfront with these tools and throttle the reporting. Fortunately it is very easy to add / delete rules in these source code analyzers, effectively configuring them to report only what you / your team think is worthy of flagging. The trick is to ensure that the rules are relevant to your team and that the reports are treated with utmost respect. It is no good if the tools keep reporting a bunch of issues and nobody in the team is either convinced that they are relevant or sure who is expected to fix them. I will draw part 1 of this article to a close here. The first couple of questions that we have discussed in this article are, I believe, the most important. They should be taken up first by any technology project which sees value in having a handle on the quality of its code. The next part will touch on advanced topics like structural analysis, mutation testing etc.

Structural Analysis

In this article about 'code quality' I am going to talk about the 'quality of the software structure' in particular. The theme of this sequence of articles is a very simplistic and common sense approach to code quality.
The intent is to demystify code quality and help project teams pick the processes and tools that make sense to them. I am going to try to keep the article as simple as I can. However, be aware that this topic, i.e. 'structural analysis of software code', has been and continues to be a fairly involved subject. Mathematicians and computer scientists have published seminal work on this subject as early as the 1970s. Fortunately, some excellent material is available on this subject in the public domain. I would particularly like to call out the following works, on which I have relied heavily for the data used in this article.
1. 'A Complexity Measure', published in IEEE Transactions on Software Engineering, Vol. SE-2, No. 4, December 1976, by Thomas J. McCabe.
2. 'A Metrics Suite for Object Oriented Design', published in IEEE Transactions on Software Engineering, Vol. 20, No. 6, June 1994, by Shyam R. Chidamber and Chris F. Kemerer.
3. 'OO Design Quality Metrics, An Analysis of Dependencies', 1994, by Robert Martin.
4. 'Design Principles and Design Patterns', published by Robert Martin.

Let me try to present the gist of my interpretation of these works in the following sections.

Patterns

Let's start by listing the basic fundamental patterns of bad code structure. These are intuitive in nature and do not have a mathematical or scientific definition.
- Rigidity: The software is difficult to change.
- Fragility: Making changes in one part of the software causes breakage in a conceptually unrelated part of the software.
- Immobility: It is difficult to move around components of the code, as the code is not sufficiently modular.
- Viscosity: Wrong practices are so deep rooted in the software that it is easier to keep continuing with the wrong practices rather than introducing the right ones.
- Opacity: The system is difficult to understand.

Metrics

The metrics are concrete, measurable items, with scientific and mathematical definitions.
Standard tools are available that will measure them for your code base. Of course, the list of metrics and tools supplied here is not exhaustive.
- Number of Classes: If a comparison is made between projects with identical functionality, those projects with more classes are better abstracted. You would want to keep this number down; following OOP concepts efficiently should help. Tools: Sonar (free, open source).
- Lines of Code (LOC): If a comparison is made between projects with identical functionality, those projects with fewer lines of code have superior design and require less maintenance. Tools: Sonar (free, open source).
- Number of Children (NOC): The number of immediate sub-classes of a class. Try to keep it down, else classes become too complex. Tools: Stan4J.
- Response for Class (RFC): The count of all methods implemented within the class plus the number of methods accessible to an object of this class due to implementation. Try to keep it low; the higher the RFC, the higher the effort to make changes. Tools: Sonar (free, open source), Stan4J.
- Depth of Inheritance Tree (DIT): The maximum inheritance path from the class to the root class. Try to keep it under 5. Tools: Stan4J.
- Weighted Methods per Class (WMC): The average number of methods defined per class. Try to keep it under 14. Tools: Stan4J.
- Coupling Between Object Classes (CBO): The number of classes to which a class is coupled. Try to keep it under 14. Tools: Stan4J.
- Lack of Cohesion of Methods (LCOM4): Measures the number of 'connected components' in a class. A low value suggests that the code is simpler and reusable; a high value suggests that the class should be broken up into smaller classes. Try to break down the class if this metric goes above 2. Tools: Sonar (free, open source), Stan4J.
- Cyclomatic Complexity (CC): A measure of the different executable paths through a module. A higher number of executable paths through the code means more effort to test completely, which in turn makes the code more difficult to understand and change.
Try to keep this value under 10. Tools: JDepend (free, open source).
- Distance (D): The distance from the idealized line A + I = 1. The smaller the distance of your software from the idealized line, the better off you are. Abstractness (A) = Na / Nc, where Na = number of abstract classes and Nc = number of concrete classes. Instability (I) = Ce / (Ce + Ca), where Afferent Couplings (Ca) = the number of other packages that depend upon classes within the package, and Efferent Couplings (Ce) = the number of other packages that the classes in the package depend upon. Tools: JDepend (free, open source), Stan4J.

So, the metrics are there, and so are the thresholds and the tools to report on them. If you read the material I quoted at the beginning of the article, you will find many more metrics. I can safely recommend the use of at least Sonar and the basic metrics that Sonar reports on. That is the very least that any enterprise grade software should have. As I mentioned in the first article of this sequence, start by measuring. Compare against the figures of the same project build on build and make small incremental changes. Small baby steps in the right direction, taken diligently build on build, will do wonders. Just don't go for the big kill, measuring against the so called 'industry standard', and everything should be all right.

Beyond Metrics

With all due respect to the metrics, their utility is limited to doing a health check on the existing code. Given the number of metrics and the plethora of tools to measure them (add conflicting views among technocrats about the efficacy of the tools and metrics), it soon gets confusing. It is like looking at an admin panel with all the dials and bulbs going berserk while you frantically try to figure out how to appease them all. What you also need is a tool to analyse all these metrics and point out straightaway the complicated and vulnerable parts of your code. Of course, it helps if it does so with an intuitive visual interface.
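As a quick aside, the Abstractness, Instability, and Distance formulas above are compact enough to restate in code; the package counts in the worked example are invented:

```python
def abstractness(num_abstract, num_concrete):
    # A = Na / Nc, as defined above (Na abstract classes, Nc concrete classes).
    return num_abstract / num_concrete

def instability(ce, ca):
    # I = Ce / (Ce + Ca): efferent couplings over total couplings.
    return ce / (ce + ca)

def distance(a, i):
    # Distance from the idealized line A + I = 1.
    return abs(a + i - 1)

# A hypothetical package: 2 abstract classes, 4 concrete classes,
# depended on by 3 other packages (Ca), depending on 1 other package (Ce).
a = abstractness(num_abstract=2, num_concrete=4)  # 0.5
i = instability(ce=1, ca=3)                       # 0.25
print(distance(a, i))                             # 0.25
```

A package that is both concrete and heavily depended upon (low A, low I) sits far from the line, which is exactly the kind of vulnerable spot the tools flag.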
There are a few tools that analyse these metrics and highlight the vulnerable parts visually (unfortunately none of them is free). I have used and quite liked Structure 101. It reports on 'fat' packages, classes, designs etc., which is its way of saying that it thinks the 'fat' artifacts are excessively complex and hence tentative candidates to refactor / restructure. These artifacts generally have tangles (cyclic dependencies), and the tool does an excellent job of showing those. That brings me to the end of this article. In conclusion, I just want to say: creating simple code is a complex business. It is not (only) labor. It is skill. And like all skills, mastering the tools of the trade is important. Knowing which tools to pick from the free open source basket and which ones to pay for (because they are worth it) is crucial. In the next article we will talk about creating a future state architecture for the project (assuming it is a long running support and upgrade project) and how to measure the increasing conformance of the code base to the future state architecture, build by build. Until then, happy coding. Reference: Common sense and Code Quality – Part 1, Common sense and Code Quality – Part 2 from our JCG partner Partho at the Tech for Enterprise blog.

Spring Integration – Session 1 – Hello World

The "Hello World" of Spring Integration: consider a simple program to print "Hello World" to the console using Spring Integration, and in the process visit a few Enterprise Integration Patterns.

Concepts

Before jumping into the program itself, a quick review of messaging concepts will be helpful. Messaging is an integration style where two independent applications communicate with each other through an intermediary; the intermediary is referred to as the "Messaging System". Enterprise Integration Patterns describes the common integration related issues with messaging based application integration and their recommended solutions. For example, consider one of the Enterprise Integration Patterns, the Messaging Channel. To quote from the Enterprise Integration Patterns book, the problem "Messaging Channel" is trying to solve is: an enterprise has two separate applications that need to communicate, preferably by using messaging. How does one application communicate with another using messaging? The solution is: connect the applications using a Message Channel, where one application writes information to the channel and the other one reads that information from the channel. All the other Enterprise Integration Patterns are described along the same lines. The reason to quickly visit Enterprise Integration Patterns is to set the context: Spring Integration aligns very closely with the Enterprise Integration Patterns and is the "Messaging System" that was mentioned earlier.
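The Message Channel pattern itself is framework-neutral, so before looking at the Spring version it may help to see the moving parts in miniature. This is a toy sketch in Python (all names invented, not the Spring Integration API): one side sends a message to a channel, and an outbound adapter drains the channel to an output endpoint:

```python
from collections import deque

class MessageChannel:
    """A point-to-point channel: producers send, consumers receive."""
    def __init__(self):
        self._queue = deque()

    def send(self, message):
        self._queue.append(message)

    def receive(self):
        # Return the oldest message, or None when the channel is empty.
        return self._queue.popleft() if self._queue else None

class OutboundChannelAdapter:
    """Connects the messaging system to an application endpoint,
    here an arbitrary writer callable standing in for stdout."""
    def __init__(self, channel, writer):
        self.channel = channel
        self.writer = writer

    def drain(self):
        message = self.channel.receive()
        while message is not None:
            self.writer(message)
            message = self.channel.receive()

out = []
channel = MessageChannel()
adapter = OutboundChannelAdapter(channel, out.append)
channel.send("Hello World")  # the application writes to the channel
adapter.drain()              # the adapter reads the channel and delivers
print(out)                   # ['Hello World']
```

Note that the sender never touches the adapter: both sides only know the channel, which is the decoupling the pattern is after.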
So now to see the Hello World using Spring Integration. First a small JUnit test:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("helloworld.xml")
public class HelloWorldTest {

    @Autowired
    @Qualifier("messageChannel")
    MessageChannel messageChannel;

    @Test
    public void testHelloWorld() {
        Message<String> helloWorld = new GenericMessage<String>("Hello World");
        messageChannel.send(helloWorld);
    }
}

Here a MessageChannel is being wired into the test. The first application (here the JUnit test) sends a Message (in this case the string "Hello World") to the Message Channel; something reads the message from the Message Channel and writes it to the system out. Now, let us see the remaining part: how that "something" picks up a message from the Message Channel and writes it to the system out:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:int="http://www.springframework.org/schema/integration"
    xmlns:int-stream="http://www.springframework.org/schema/integration/stream"
    xsi:schemaLocation="http://www.springframework.org/schema/integration
        http://www.springframework.org/schema/integration/spring-integration-2.1.xsd
        http://www.springframework.org/schema/integration/stream
        http://www.springframework.org/schema/integration/stream/spring-integration-stream-2.1.xsd
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd">

    <int:channel id="messageChannel"></int:channel>

    <int-stream:stdout-channel-adapter channel="messageChannel" append-newline="true"/>

</beans>

The above is a Spring Integration flow described using Spring custom namespaces, here the Integration namespace.
A “Message Channel”, imaginatively called “messageChannel”, is created; a “Hello World” “Message” is placed into the “Message Channel”, from which a “Channel Adapter” gets the message and prints it to the standard out. This is a small program, but it uses three Enterprise Integration Patterns: the Message (“Hello World”, the packet of information being sent to the messaging system), the “Message Channel” which was introduced earlier, and a new one, the Channel Adapter – here an Outbound Channel Adapter, which connects the messaging system to the application (in this case the system out). It further shows how Spring Integration aligns very closely with Enterprise Integration Patterns terminology through its Spring custom namespace. This simple program introduces Spring Integration; I will be covering Spring Integration in more detail using a few more samples in the next few sessions. References: 1. Spring Integration Reference: http://static.springsource.org/spring-integration/reference/htmlsingle/ 2. Enterprise Integration Patterns: http://www.eaipatterns.com/index.html 3. Visio templates for EIP: http://www.eaipatterns.com/downloads.html Reference: Spring, Spring Integration, Enterprise Development from our JCG partner Biju Kunjummen at the all and sundry blog....

JavaFX 2.0 Layout Panes – FlowPane and TilePane

FlowPanes and TilePanes are nice layout panes if you want to lay out your children consecutively one after another, either horizontally or vertically. They are quite similar to each other, as both will lay out their children either in columns (in the case of a horizontal Flow/TilePane) and wrap at their width, or in rows (in the case of a vertical Flow/TilePane) and wrap at their height. The only major difference is that the TilePane places all children in tiles of the same size: the size of the largest child determines the size of each individual tile in the TilePane. A TilePane is therefore also a nice way to size and align buttons and other controls equally. (See my previous post Sizing Buttons equally inside a VBox or HBox.)

FlowPane and TilePane – Example 1

import java.util.Random;
import javafx.application.Application;
import javafx.geometry.Insets;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.effect.DropShadow;
import javafx.scene.image.Image;
import javafx.scene.image.ImageView;
import javafx.scene.layout.*;
import javafx.scene.paint.Color;
import javafx.scene.text.Font;
import javafx.scene.text.FontWeight;
import javafx.scene.text.Text;
import javafx.stage.Stage;

/**
 * Created on: 24.03.2012
 * @author Sebastian Damm
 */
public class FlowPaneAndTilePaneExample extends Application {

    private Random random;
    private VBox root;
    private FlowPane flowPane;
    private TilePane tilePane;

    @Override
    public void start(Stage primaryStage) throws Exception {
        random = new Random();
        root = new VBox(30);

        VBox upperVBox = createUpperVBox();
        VBox lowerVBox = createLowerVBox();

        fillPanesWithImages();

        root.getChildren().addAll(upperVBox, lowerVBox);

        Scene scene = new Scene(root, 800, 600, Color.ANTIQUEWHITE);
        primaryStage.setTitle("FlowPane and TilePane Example");
        primaryStage.setScene(scene);
        primaryStage.show();
    }

    private VBox createUpperVBox() {
        VBox vbox = new VBox(20);

        Text textFlowPane = new Text("I am a FlowPane");
        textFlowPane.setFont(Font.font("Calibri", FontWeight.BOLD, 30));
        textFlowPane.setUnderline(true);
        textFlowPane.setEffect(new DropShadow());
        VBox.setMargin(textFlowPane, new Insets(10, 0, 0, 10));

        flowPane = new FlowPane();
        flowPane.setHgap(5);
        flowPane.setVgap(5);

        vbox.getChildren().addAll(textFlowPane, flowPane);
        VBox.setMargin(vbox, new Insets(10));
        return vbox;
    }

    private VBox createLowerVBox() {
        VBox vbox = new VBox(20);

        Text textTilePane = new Text("I am a TilePane");
        textTilePane.setFont(Font.font("Calibri", FontWeight.BOLD, 30));
        textTilePane.setUnderline(true);
        textTilePane.setEffect(new DropShadow());
        VBox.setMargin(textTilePane, new Insets(10, 0, 0, 10));

        tilePane = new TilePane();
        tilePane.setHgap(5);
        tilePane.setVgap(5);

        vbox.getChildren().addAll(textTilePane, tilePane);
        VBox.setMargin(vbox, new Insets(10));
        return vbox;
    }

    private void fillPanesWithImages() {
        for (int i = 1; i <= 6; i++) {
            int imgSize = random.nextInt(128) + 1;

            Button bt = new Button();
            Image img = new Image(FlowPaneAndTilePaneExample.class
                .getResourceAsStream("images/person" + i + ".png"),
                imgSize > 50 ? imgSize : 50, 0, true, false);
            ImageView view = new ImageView(img);
            bt.setGraphic(view);
            flowPane.getChildren().add(bt);

            Button bt2 = new Button();
            Image img2 = new Image(FlowPaneAndTilePaneExample.class
                .getResourceAsStream("images/person" + i + ".png"),
                imgSize > 50 ? imgSize : 50, 0, true, false);
            ImageView view2 = new ImageView(img2);
            bt2.setGraphic(view2);
            tilePane.getChildren().add(bt2);
        }
    }

    public static void main(String[] args) {
        Application.launch(args);
    }
}

This little application shows the major difference between a FlowPane and a TilePane by putting the same content in both panes. Both panes are put into another VBox, each with an additional Text on top. I am assuming that only the parts of the code with the FlowPane, the TilePane and the image loading are new to you by now.
If you have problems understanding this JavaFX code, please see my previous examples, where I started with the basics of JavaFX 2.0. Both panes provide, amongst others, a setHgap and a setVgap method to declare a spacing between each column and each row. To fill the buttons I chose to load some images. In JavaFX 2.0 images can be shown with an ImageView, which expects an Image object. (Note: this is a javafx.scene.image.Image, not a java.awt.Image!) Such an ImageView can then be applied to any Labeled object. Labeled is a subclass of Control and, amongst others, the abstract parent class of Label and ButtonBase (which is the base class for every kind of button), which allows you to set an image on every kind of label and button. My six images are all 128×128 pixels. To show you the difference between a FlowPane and a TilePane I chose to resize these images. At the moment this is only possible directly in the constructor of the Image class, as there are no methods to change the size of an Image object later on. One constructor takes an InputStream, two double values for the width and the height, and two boolean values for preserving the aspect ratio of the image and for the ‘smooth’ property. If you want to resize your image and keep the aspect ratio, you can specify either the width or the height and keep the ratio by passing ‘true’ as the first boolean value. With the ‘smooth’ property you can choose between a clearer or a faster rendering of the image. Depending on the random values generated for the sizes, your application should look something like this: You can see that the images are basically the same. The difference is that the FlowPane lays out all images directly after one another, separated only by the gap specified with the setHgap method, whereas the TilePane puts all images in tiles of the same size.
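The preserved-aspect-ratio resizing that the Image constructor performs can be sketched as plain arithmetic (scaledSize is a hypothetical helper, not a JavaFX method): when one requested dimension is 0 and preserveRatio is true, the missing dimension is derived from the source image's ratio.

```java
// Hypothetical helper illustrating how a preserved-aspect-ratio resize
// (as done by the Image constructor) can be computed: if one requested
// dimension is 0, it is derived from the other using the source ratio.
public class AspectRatioDemo {

    static int[] scaledSize(int srcW, int srcH, int reqW, int reqH) {
        if (reqW > 0 && reqH == 0) {
            // height follows from the requested width and the source ratio
            return new int[] { reqW, Math.round(reqW * (float) srcH / srcW) };
        }
        if (reqH > 0 && reqW == 0) {
            // width follows from the requested height and the source ratio
            return new int[] { Math.round(reqH * (float) srcW / srcH), reqH };
        }
        return new int[] { reqW, reqH }; // both given: use as-is
    }

    public static void main(String[] args) {
        // like new Image(stream, 90, 0, true, false) on a 128x128 source
        int[] size = scaledSize(128, 128, 90, 0);
        System.out.println(size[0] + "x" + size[1]); // 90x90 for a square source
    }
}
```

This is only the geometry; the actual pixel resampling (and the 'smooth' trade-off) is done inside the Image implementation.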
FlowPane and TilePane – Example 2

Here is another small example. As stated in the introduction of this post, a TilePane is also a very nice way of sizing and aligning buttons equally. To show the main difference between a FlowPane and a TilePane another time, the same elements will be put in both panes again. Here is the code:

import javafx.application.Application;
import javafx.geometry.Insets;
import javafx.geometry.Orientation;
import javafx.geometry.Pos;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.control.Separator;
import javafx.scene.layout.*;
import javafx.scene.paint.Color;
import javafx.scene.paint.CycleMethod;
import javafx.scene.paint.RadialGradient;
import javafx.scene.paint.RadialGradientBuilder;
import javafx.scene.paint.Stop;
import javafx.scene.text.Font;
import javafx.stage.Stage;

/**
 * Created on: 24.03.2012
 * @author Sebastian Damm
 */
public class FlowPaneAndTilePaneExample2 extends Application {

    private VBox root;
    private FlowPane flowPane;
    private TilePane tilePane;

    @Override
    public void start(Stage primaryStage) throws Exception {
        root = new VBox();
        root.setAlignment(Pos.CENTER);

        initFlowPane();
        initTilePane();
        createButtons();

        root.getChildren().addAll(flowPane, new Separator(), tilePane);

        Scene scene = new Scene(root, 400, 300);
        RadialGradient background = RadialGradientBuilder.create()
            .stops(new Stop(0d, Color.web("#fff")),
                   new Stop(0.47, Color.web("#cbebff")),
                   new Stop(1d, Color.web("#a1dbff")))
            .cycleMethod(CycleMethod.NO_CYCLE)
            .build();
        scene.setFill(background);

        primaryStage.setTitle("FlowPane and TilePane Example 2");
        primaryStage.setScene(scene);
        primaryStage.show();
    }

    private void initFlowPane() {
        flowPane = new FlowPane(Orientation.VERTICAL);
        flowPane.setHgap(5);
        flowPane.setVgap(5);
        flowPane.setPrefHeight(200);
        flowPane.setAlignment(Pos.CENTER);
        VBox.setMargin(flowPane, new Insets(10));
    }

    private void initTilePane() {
        tilePane = new TilePane(Orientation.VERTICAL);
        tilePane.setHgap(5);
        tilePane.setVgap(5);
        tilePane.setPrefHeight(200);
        tilePane.setAlignment(Pos.CENTER);
        VBox.setMargin(tilePane, new Insets(10));
    }

    private void createButtons() {
        Button bt = new Button("1");
        bt.setMaxWidth(Double.MAX_VALUE);
        bt.setMaxHeight(Double.MAX_VALUE);
        Button bt2 = new Button("Button 1");
        bt2.setMaxWidth(Double.MAX_VALUE);
        bt2.setMaxHeight(Double.MAX_VALUE);
        Button bt3 = new Button("Button");
        bt3.setMaxWidth(Double.MAX_VALUE);
        bt3.setMaxHeight(Double.MAX_VALUE);
        bt3.setFont(Font.font("Cambria", 22));

        Button bt4 = new Button("1");
        bt4.setMaxWidth(Double.MAX_VALUE);
        bt4.setMaxHeight(Double.MAX_VALUE);
        Button bt5 = new Button("Button 1");
        bt5.setMaxWidth(Double.MAX_VALUE);
        bt5.setMaxHeight(Double.MAX_VALUE);
        Button bt6 = new Button("Button");
        bt6.setMaxWidth(Double.MAX_VALUE);
        bt6.setMaxHeight(Double.MAX_VALUE);
        bt6.setFont(Font.font("Helvetica", 22));

        flowPane.getChildren().addAll(bt, bt2, bt3);
        tilePane.getChildren().addAll(bt4, bt5, bt6);
    }

    public static void main(String[] args) {
        Application.launch(args);
    }
}

Again the root node is a VBox, with a FlowPane in the upper region and a TilePane in the lower region. There are some parts of the code that may be new to you. First of all, take a look at the creation of the radial gradient for the background of the scene, built with the help of one of the numerous builder classes in JavaFX 2.0. I will cover gradients and also the builder pattern in separate posts later on, so I won't explain much here. For now you just need to know that these lines create a radial background, which is then applied to the scene via the scene's setFill method. (As in previous examples, we could have specified the background fill directly in the constructor of the scene, because it expects a Paint object, which covers not only plain colors but also every kind of gradient.) In contrast to the first example, this time we use vertical panes, which are populated with buttons.
Because I want to allow the buttons to grow to whatever space is provided by their parent, I set the max height as well as the max width of every button to the constant Double.MAX_VALUE. (Take a look at my previous example Sizing Buttons equally inside a VBox or HBox if you haven't already.) Your application should look like this: As you can see, in both panes the buttons grow to the width of their parent, but only in the TilePane do the buttons also grow vertically, because each tile in a TilePane is equally sized. This example may not seem very important, but in the applications I have developed in JavaFX 2.0 so far, I have always wanted to size and align buttons equally, because it is a subtle aspect that makes your application look more clean and polished. If you resize your window, it should look like this: Note that once the buttons are no longer laid out vertically in the FlowPane, they only occupy the space they need (based on their content), whereas in the TilePane all buttons are still of equal size. Reference: JavaFX 2.0 Layout Panes – FlowPane and TilePane from our JCG partner Sebastian Damm at the Just my 2 cents about Java blog....

OSGi case study: a modular vert.x

OSGi enables Java code to be divided cleanly into modules known as bundles with access to code and resources controlled by a class loader for each bundle. OSGi services provide an additional separation mechanism: the users of an interface need have no dependency on implementation classes, factories, and so forth. The following case study aims to make the above advantages of OSGi bundles and services concrete. It takes an interesting Java project, vert.x, and shows how it can be embedded in OSGi and take advantage of OSGi’s facilities. Disclaimer: I am not proposing to replace the vert.x container or its module system. This is primarily a case study in the use of OSGi although some of the findings should motivate improvements to vert.x, especially when it is embedded in applications with custom class loaders. vert.x The vert.x open source project provides a JVM alternative to node.js: an asynchronous, event-driven programming model for writing web applications in a number of languages including Java, Groovy, JavaScript, and Ruby. vert.x supports HTTP as well as modern protocols such as WebSockets and sockjs (which works in more browsers than WebSockets and can traverse firewalls more easily). vert.x has a distributed event bus which allows JSON messages to be propagated between vert.x applications known as verticles and shared code libraries known as busmods. A busmod is a special kind of verticle which handles events from the event bus. vert.x ships some busmods, such as a MongoDB ‘persistor’, and users can write their own.vert.x’s threading model is interesting as each verticle (or busmod) is bound to a particular thread for its lifetime and so the code of a verticle needn’t be concerned about thread safety. A pool of threads is used for dispatching work on verticles and each verticle must avoid blocking or long-running operations so as not to impact server throughput (vert.x provides separate mechanisms for implementing long-running operations efficiently). 
This is similar to the quasi-reentrant threading model in the CICS transaction processor. [1] Of particular interest here is the vert.x module system, which has a class loader per verticle; code libraries, known as modules, are loaded into the class loader of each verticle which uses them. So there is no way to share code between verticles except via the event bus. vert.x has excellent documentation, including a main manual, a Java manual (as well as manuals for other languages), tutorials, and runnable code examples.

OSGi

If you're not already familiar with OSGi, read my OSGi introduction post, but don't bother following the links in that post right now – you can always go back and do that later.

Embedding vert.x in OSGi

I did this in several small steps, which are presented in turn below: converting vert.x JARs to OSGi bundles and then modularising verticles, busmods, and event bus clients.

Converting vert.x JARs to OSGi Bundles

The vert.x manual encourages users to embed vert.x in their own applications by using the vert.x core JAR, so the first step in embedding vert.x in OSGi was to convert the vert.x core JAR into an OSGi bundle so it could be loaded into an OSGi runtime. I used the bundlor tool, although other tools such as bnd would work equally well. Bundlor takes a template and then analyses the bytecode of the JAR to produce a new JAR with appropriate OSGi manifest headers. Please refer to the SpringSource Bundlor documentation for further information about bundlor for now, as the Eclipse Virgo Bundlor documentation is not published at the time of writing even though the bundlor project has transferred to Eclipse.org.
The template for the vert.x core JAR is as follows:

Bundle-ManifestVersion: 2
Bundle-SymbolicName: org.vertx.core
Bundle-Version: 1.0.0.final
Bundle-Name: vert.x Core
Import-Template:
 org.jboss.netty.*;version="[3.4.2.Final,4.0)",
 org.codehaus.jackson.*;version="[1.9.4,2.0)",
 com.hazelcast.*;version="[2.0.2,3.0)";resolution:=optional,
 groovy.*;resolution:=optional;version=0,
 org.codehaus.groovy.*;resolution:=optional;version=0,
 javax.net.ssl;resolution:=optional;version=0,
 org.apache.log4j;resolution:=optional;version=0,
 org.slf4j;resolution:=optional;version=0
Export-Template: *;version="1.0.0.final"

(The template and all the other parts of this case study are available on github.) What this does is define the valid range of versions for the packages that the JAR depends on (the range "0" represents the version range of 0 or greater), whether those packages are optional or mandatory, and the version at which the JAR's own packages should be exported. It also gives the bundle a symbolic name (used to identify the bundle), a version, and a (descriptive) name. Armed with this information, OSGi then wires together the dependencies of bundles by delegating class loads and resource lookups between bundle class loaders. Thankfully the netty networking JAR and the jackson JSON JARs, which the vert.x core JAR depends on, ship with valid OSGi manifests. As a sniff test that the manifest was valid, I tried deploying the vert.x core bundle in the Virgo kernel. This was simply a matter of placing the vert.x core bundle in the pickup directory and its dependencies in the repository/usr directory and then starting the kernel. The following console messages showed that the vert.x core bundle was installed and resolved successfully:

<hd0001i> Hot deployer processing 'INITIAL' event for file 'vert.x-core-1.0.0.final.jar'.
<de0000i> Installing bundle 'org.vertx.core' version '1.0.0.final'.
<de0001i> Installed bundle 'org.vertx.core' version '1.0.0.final'.
<de0004i> Starting bundle 'org.vertx.core' version '1.0.0.final'.
<de0005i> Started bundle 'org.vertx.core' version '1.0.0.final'.

Using the Virgo shell, I then checked the wiring of the bundles:

osgi> ss
"Framework is launched."

id State Bundle
0 ACTIVE org.eclipse.osgi_3.7.1.R37x_v20110808-1106
...
89 ACTIVE org.vertx.core_1.0.0.final
90 ACTIVE jackson-core-asl_1.9.4
91 ACTIVE jackson-mapper-asl_1.9.4
92 ACTIVE org.jboss.netty_3.4.2.Final

osgi> bundle 89
org.vertx.core_1.0.0.final [89]
...
Exported packages
...
org.vertx.java.core; version="1.0.0.final"[exported]
org.vertx.java.core.buffer; version="1.0.0.final"[exported]
...
Imported packages
org.jboss.netty.util; version="3.4.2.Final"<org.jboss.netty_3.4.2.final [92]>
...
org.codehaus.jackson.map; version="1.9.4"<jackson-mapper-asl_1.9.4 [91]>
...

I also converted the vert.x platform JAR to an OSGi bundle in similar fashion, as it was needed later.

Modularising Verticles

A typical verticle looks like this:

public class ServerExample extends Verticle {

    public void start() {
        vertx.createHttpServer().requestHandler(new Handler<HttpServerRequest>() {
            public void handle(HttpServerRequest req) {
                ...
            }
        }).listen(8080);
    }
}

When the start method is called it creates an HTTP server, registers a handler with the server, and sets the server listening on a port. Apart from the body of the handler, the remainder of this code is boilerplate. So I decided to factor the boilerplate out into a common OSGi bundle (org.vertx.osgi) and replace the verticle with a modular verticle bundle containing the handler and some declarative metadata equivalent to the boilerplate. The common OSGi bundle uses the whiteboard pattern to listen for specific kinds of services in the OSGi service registry, create the boilerplate based on the metadata, and register the handler with the resultant HTTP server. Let's look at the modular verticle bundle.
Its code consists of a single HttpServerRequestHandler class: [2]

public final class HttpServerRequestHandler implements Handler<HttpServerRequest> {

    public void handle(HttpServerRequest req) {
        ...
    }
}

It also has declarative metadata in the form of service properties, which are registered along with the handler in the OSGi service registry. I used the OSGi Blueprint service to do this, although I could have used OSGi Declarative Services or even registered the service programmatically using the OSGi API. The blueprint metadata is a file blueprint.xml in the bundle that looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <service interface="org.vertx.java.core.Handler" ref="handler">
        <service-properties>
            <entry key="type" value="HttpServerRequestHandler"/>
            <entry key="port" value="8090"/>
        </service-properties>
    </service>
    <bean class="org.vertx.osgi.sample.basic.HttpServerRequestHandler" id="handler"/>
</blueprint>

This metadata declares that an HTTP server should be created (via the type service property), the handler registered with it, and the server set listening on port 8090 (via the port service property). This all happens courtesy of the whiteboard pattern when the org.vertx.osgi bundle is running, as we'll see below. Notice that the modular verticle depends only on the Handler and HttpServerRequest classes, whereas the original verticle also depends on the Vertx, HttpServer, and Verticle classes. This also makes things quite a bit simpler for those of us who like unit testing (in addition to in-container testing), as fewer mocks or stubs are required. So what do we now have? Two bundles to add to the bundles we installed earlier: an org.vertx.osgi bundle which encapsulates the boilerplate code and an application bundle representing a modular verticle. We also need a Blueprint service implementation — as of Virgo 3.5, a Blueprint implementation is built into the Virgo kernel.
The following interaction diagram shows one possible sequence of events: In OSGi, each bundle has its own lifecycle, and in general bundles are designed so that they function correctly regardless of the order in which they are started relative to other bundles. In the above example the assumed start order is: blueprint service, org.vertx.osgi bundle, modular verticle bundle. However, the org.vertx.osgi bundle could start after the modular verticle bundle and the end result would be the same: a server will be created, the modular verticle bundle's handler registered with the server, and the server set listening. If the blueprint service is started after the org.vertx.osgi and modular verticle bundles, then the org.vertx.osgi bundle won't detect the modular verticle bundle's handler service appearing in the service registry until the blueprint service has started, but then the end result will again be the same. The github project contains the source for some sample modular verticles: a basic HTTP verticle (which runs on port 8090) and a sockjs verticle (which runs on port 8091). The org.vertx.osgi bundle needed more code to support sockjs, and the modular sockjs verticle needed to provide a sockjs handler in addition to a HTTP handler.
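The order-independence of the whiteboard pattern can be sketched with a plain in-memory registry (illustrative only, not the OSGi service registry API): whichever side appears first, the tracker ends up seeing every published handler.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustrative sketch of the whiteboard pattern (not the OSGi API):
// handlers are published to a registry, and an interested tracker is
// notified of every handler, whether the handler was registered before
// or after the tracker. Start order therefore does not matter.
public class WhiteboardSketch {

    static class Registry<T> {
        private final List<T> services = new ArrayList<>();
        private final List<Consumer<T>> trackers = new ArrayList<>();

        void registerService(T service) {
            services.add(service);
            for (Consumer<T> t : trackers) t.accept(service); // notify existing trackers
        }

        void addTracker(Consumer<T> tracker) {
            trackers.add(tracker);
            for (T s : services) tracker.accept(s); // replay already-registered services
        }
    }

    public static void main(String[] args) {
        Registry<String> registry = new Registry<>();
        List<String> wired = new ArrayList<>();

        registry.registerService("handlerA");            // published before the tracker
        registry.addTracker(h -> wired.add("wired " + h));
        registry.registerService("handlerB");            // published after the tracker

        System.out.println(wired); // [wired handlerA, wired handlerB]
    }
}
```

The org.vertx.osgi bundle plays the tracker role here, and each modular verticle's handler service plays the published-service role.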
Modularising BusMods

The MongoDB persistor is a typical example of a busmod which processes messages from the event bus:

public class MongoPersistor extends BusModBase implements Handler<Message<JsonObject>> {

    private String address;
    private String host;
    private int port;
    private String dbName;

    private Mongo mongo;
    private DB db;

    public void start() {
        super.start();

        address = getOptionalStringConfig("address", "vertx.mongopersistor");
        host = getOptionalStringConfig("host", "localhost");
        port = getOptionalIntConfig("port", 27017);
        dbName = getOptionalStringConfig("db_name", "default_db");

        try {
            mongo = new Mongo(host, port);
            db = mongo.getDB(dbName);
            eb.registerHandler(address, this);
        } catch (UnknownHostException e) {
            logger.error("Failed to connect to mongo server", e);
        }
    }

    public void stop() {
        mongo.close();
    }

    public void handle(Message<JsonObject> message) {
        ...
    }
}

Again there is a mixture of boilerplate code (to register the event bus handler), start/stop logic, configuration handling, and the event bus handler itself. I applied a similar approach to the other verticles and separated the boilerplate code out into the org.vertx.osgi bundle, leaving the handler and metadata (including configuration) in a modular busmod. The persistor's dependency on the MongoDB client JAR (mongo.jar) is convenient because this JAR ships with a valid OSGi manifest.
Here's the blueprint.xml:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <service ref="handler" interface="org.vertx.java.core.Handler">
        <service-properties>
            <entry key="type" value="EventBusHandler"/>
            <entry key="address" value="vertx.mongopersistor"/>
        </service-properties>
    </service>
    <bean id="handler" class="org.vertx.osgi.mod.mongo.MongoPersistor" destroy-method="stop">
        <argument type="java.lang.String"><value>localhost</value></argument>
        <argument type="int"><value>27017</value></argument>
        <argument type="java.lang.String"><value>default_db</value></argument>
    </bean>
</blueprint>

Notice that the boilerplate configuration consists of the handler type and event bus address. The other configuration (host, port, and database name) is specific to the MongoDB persistor. Here's the modular MongoDB busmod code:

public class MongoPersistor extends BusModBase implements Handler<Message<JsonObject>> {

    private final String host;
    private final int port;
    private final String dbName;

    private final Mongo mongo;
    private final DB db;

    public MongoPersistor(String host, int port, String dbName) throws UnknownHostException, MongoException {
        this.host = host;
        this.port = port;
        this.dbName = dbName;

        this.mongo = new Mongo(host, port);
        this.db = this.mongo.getDB(dbName);
    }

    public void stop() {
        mongo.close();
    }

    public void handle(Message<JsonObject> message) {
        ...
    }
}

The code still extends BusModBase simply because BusModBase provides several convenient helper methods. Again the resultant code is simpler and easier to unit test than the non-modular equivalent.

Modularising Event Bus Clients

Finally, I needed a modular verticle to test the modular MongoDB persistor. All this verticle needs to do is post an appropriate message to the event bus.
Normal vert.x verticles obtain the event bus using the Vertx class, but I used the Blueprint service again, this time to look up the event bus service in the service registry and inject it into the modular verticle. I also extended the org.vertx.osgi bundle to publish the event bus service in the service registry. The blueprint.xml for the modular event bus client is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <reference id="eventBus" interface="org.vertx.java.core.eventbus.EventBus"/>

    <bean class="org.vertx.osgi.sample.mongo.MongoClient">
        <argument ref="eventBus"/>
        <argument type="java.lang.String">
            <value>vertx.mongopersistor</value>
        </argument>
    </bean>
</blueprint>

Then the modular event bus client code is straightforward:

public final class MongoClient {

    public MongoClient(EventBus eventBus, String address) {
        JsonObject msg = ...
        eventBus.send(address, msg, new Handler<Message<JsonObject>>(){...});
    }
}

Taking it for a Spin

1. I've made all the necessary OSGi bundles available in the bundles directory in git. You can grab them either by cloning the git repository: git clone git://github.com/glyn/vert.x.osgi.git or by downloading a zip of the git repo.

2. vert.x requires Java 7, so set up a terminal shell to use Java 7. Ensure the JAVA_HOME environment variable is set correctly. (If you can't get Java 7 right now, you'll see some errors when the bundles are deployed to OSGi and you won't be able to run the samples in steps 8 and 9.)

3. If you are an OSGi user, simply install and start the bundles in your favourite OSGi framework or container and skip to step 8. If not, use the copy of the Virgo kernel in the git repository as follows.

4. Change directory to the virgo-kernel-… directory in your local copy of the git repo.

5. On UNIX, issue: bin/startup.sh -clean or on Windows, issue: bin\startup.bat -clean

6. The Virgo kernel should start and deploy the various bundles in its pickup directory:

org.vertx.osgi bundle (org.vertx.osgi-0.0.1.jar)
HTTP sample modular verticle (org.vertx.osgi.sample.basic-1.0.0.jar)
SockJS sample modular verticle (org.vertx.osgi.sample.sockjs-1.0.0.jar)
MongoDB persistor sample modular busmod (org.vertx.osgi.mods.mongo-1.0.0.jar)

7. If you want to see which bundles are now running, start the Virgo shell from another terminal: telnet localhost 2501 and use the ss or lb commands to summarise the installed bundles. The help command will list the other commands available and disconnect will get you out of the Virgo shell. Here's typical output of the ss command:

...
89 ACTIVE org.vertx.osgi_0.0.1
90 ACTIVE jackson-core-asl_1.9.4
91 ACTIVE jackson-mapper-asl_1.9.4
92 ACTIVE org.jboss.netty_3.4.2.Final
93 ACTIVE org.vertx.core_1.0.0.final
94 ACTIVE org.vertx.osgi.mods.mongo_1.0.0
95 ACTIVE com.mongodb_2.7.2
96 ACTIVE org.vertx.platform_1.0.0.final
97 ACTIVE org.vertx.osgi.sample.basic_1.0.0
98 ACTIVE org.vertx.osgi.sample.sockjs_1.0.0

and of the lb command (which includes the more descriptive Bundle-Name headers):

...
89|Active | 4|vert.x OSGi Integration (0.0.1)
90|Active | 4|Jackson JSON processor (1.9.4)
91|Active | 4|Data mapper for Jackson JSON processor (1.9.4)
92|Active | 4|The Netty Project (3.4.2.Final)
93|Active | 4|vert.x Core (1.0.0.final)
94|Active | 4|MongoDB BusMod (1.0.0)
95|Active | 4|MongoDB (2.7.2)
96|Active | 4|vert.x Platform (1.0.0.final)
97|Active | 4|Sample Basic HTTP Verticle (1.0.0)
98|Active | 4|Sample SockJS Verticle (1.0.0)

8. You can now use a web browser to try out the basic HTTP sample at localhost:8090, which should respond "hello", or the SockJS sample at http://localhost:8091, which should display a box into which you can type some text and a button which, when clicked, produces a pop-up.

9.
If you want to try the (headless) MongoDB event bus client, download MongoDB and start it locally on its default port, then copy org.vertx.osgi.sample.mongo-1.0.0.jar from the bundles directory to Virgo's pickup directory. As soon as this bundle starts, it will send a message to the event bus and drive the MongoDB persistor to update the database. If you don't want to use MongoDB to check that an update was made, take a look in Virgo's logs (in serviceability/logs/log.log) for some System.out lines like the following that confirm something happened:

System.out Sending message: {action=save, document={x=y}, collection=vertx.osgi}
...
System.out Message sent
...
System.out Message response {_id=95..., status=ok}

OSGi and vert.x Modularity

In this case study the various sample OSGi bundles all depend on, and share, the vert.x core bundle. Each bundle is loaded in its own class loader, and OSGi controls the delegation of class loading and resource lookups according to how the OSGi bundles are wired together. In the same way, verticles written as OSGi bundles are free to depend on, and share, other OSGi bundles. This is quite different from the vert.x module system, in which any module (other than a busmod) which a verticle depends on is loaded into the same class loader as the verticle. The advantages of the OSGi module system are that a single copy of each module is installed in the system and is visible to, and may be managed by, tools such as the Virgo shell. It also minimises footprint. The advantages of the vert.x module system are that there is no sharing of modules between verticles, so a badly-written module could not inadvertently or deliberately leak information between independent verticles. Also, there is a separate copy of each (non-busmod) module for each verticle that uses it, so the module can be written without worrying about thread safety, as each copy will only ever be executed on its verticle's thread.
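That per-verticle thread confinement can be sketched with plain java.util.concurrent types (names here are illustrative, not vert.x API): if every event for a verticle is dispatched on one single-threaded executor, the verticle's own state never needs locking.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative sketch (not the vert.x API): binding each "verticle" to one
// thread means all its events run on a single-threaded executor, so the
// verticle's own state needs no synchronization.
public class VerticleThreadingSketch {

    static class Verticle {
        private final ExecutorService context = Executors.newSingleThreadExecutor();
        private int counter; // safe: only ever touched on the context thread

        void dispatch(Runnable event) {
            context.execute(event); // every event for this verticle runs here
        }

        void handleEvent() {
            counter++; // no locking needed
        }

        int counter() { return counter; }

        void stop() throws InterruptedException {
            context.shutdown();
            context.awaitTermination(5, TimeUnit.SECONDS);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Verticle v = new Verticle();
        for (int i = 0; i < 1000; i++) {
            v.dispatch(v::handleEvent); // even if dispatched from many threads
        }
        v.stop();
        System.out.println(v.counter()); // 1000: single-threaded dispatch, no lost updates
    }
}
```

The cost, as the text notes, is that handlers must never block the context thread; vert.x provides separate mechanisms for long-running work.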
OSGi users may, however, be happy to require reusable modules to be thread-safe and to manage any mutable static data carefully to avoid leakage between threads.

Replacing the Container?

When I raised the topic of embedding vert.x in OSGi, the leader of vert.x, Tim Fox, asked me whether I was writing a replacement for the current container, to which I replied “not really”. I said this because I liked vert.x's event driven programming model and its threading model, which seem to be part of “the container”. But I was trying to replace a couple of aspects of the vert.x container: the module system and the way verticles register handlers. Later it struck me that perhaps the notion of “the container” as a monolithic entity is a little odd in a modular system and it might be better to think of multiple, separate notions of containment which could then be combined in different ways to suit different users. However, the subtle interaction between the class loading and threading models seen above shows that the different notions of containment can depend on each other. I wonder what others think about the notion of “the container”?

Conclusions

vert.x's claim that it can be embedded in other applications is essentially validated, since the OSGi framework is a fairly exacting application. The vert.x module system, although not providing isolation between modules, does neatly provide isolation between applications (comprising verticles and their modules), and it enables modules to be written without paying attention to thread safety. One vert.x issue was raised [2] which should make vert.x easier to embed in other environments with custom class loaders. vert.x could follow the example of the netty, jackson, and MongoDB JARs and include OSGi manifests in its core and platform JARs to avoid OSGi users having to convert these JARs to OSGi bundles. I will leave this to someone else to propose, as I cannot gauge the demand for using vert.x inside OSGi.
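For reference, the kind of manifest headers such a shipped bundle would carry looks roughly like the following (a sketch: the package lists and version ranges here are illustrative, not taken from an actual vert.x manifest):

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: org.vertx.core
Bundle-Version: 1.0.0.final
Bundle-Name: vert.x Core
Export-Package: org.vertx.java.core;version="1.0.0.final"
Import-Package: org.jboss.netty.util;version="[3.4.2,4.0)",
 org.codehaus.jackson.map;version="[1.9.4,2.0)"
```

With headers like these already present in the shipped JAR, no bundlor or bnd conversion step would be needed by OSGi users.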
Running vert.x in OSGi addresses some outstanding vert.x requirements, such as how to automate in-container tests (OSGi has a number of solutions including Pax Exam, while Virgo has an integration test framework) and how to develop verticles and deploy them to vert.x under control of the IDE (see the Virgo IDE tooling guide). Virgo also provides numerous ancillary benefits including the admin shell for inspecting and managing bundles and verticles, sophisticated diagnostics, and much more (see the Virgo white paper for details). The exercise also had some nice spin-offs for Virgo. Bug 370253, the only known issue in running Virgo under Java 7, was fixed. Virgo 3.5 depends on Gemini Blueprint, which broke in this environment, and so bug 379384 was raised and fixed. I used the new Eclipse-based Virgo tooling to develop the various bundles and run them in Virgo. As a consequence, I found a few small issues in the tooling which will be addressed in due course. Finally, running vert.x on the Virgo kernel is a further validation that the kernel is suitable for building custom server runtimes, since now we have vert.x in addition to Tomcat, Jetty, and one or two custom servers running on the kernel.

Footnotes:
[1] I worked in the CICS development team in my IBM days. A colleague at SpringSource gave me a “CICS Does That!” T-shirt soon after we’d started working together. Old habits die hard.
[2] The modular verticle currently needs to intercept vert.x’s resource lookup logic so that files in the bundle can easily be served. It would be much better for this common code to move to the org.vertx.osgi bundle, but this requires vert.x issue 161 to be implemented first.

Reference: OSGi case study: a modular vert.x from our JCG partner Glyn Normington at the Mind the Gap blog....

Categorize tests to reduce build time

Before we progress with the main content of the article, let’s get a few definitions out of the way.

Unit tests

Unit tests are small (each tests one use case or unit), run in memory (they do not interact with databases, message queues etc.), repeatable and fast. For our conversation, let us restrict these to JUnit-based test cases that developers write to check their individual pieces of code.

Integration tests

Integration tests are larger (each tests one flow or the integration of several components), do not necessarily run in memory only (they interact with databases, file systems, message queues etc.), are definitely slower, and are not necessarily repeatable (the result might change if, for example, the database contents have changed).

Why is this differentiation important? One of the basic concepts in Agile programming is to run unit tests frequently (multiple times a day on developer boxes) and to run the integration tests once a day (on the continuous integration server rather than on developer boxes). Please note that the developer should still be able to run integration tests whenever he wants; they are merely separated from the unit tests so that the developer now has the choice not to run the integration tests every time he wants to run tests. How exactly does that flexibility help?
- Developers build more frequently. In the Agile world that means developers run unit tests more frequently (often a few times per day). Developers get to know of a bug sooner and waste less time coding against a broken codebase. That means saving time and money.
- Fixing bugs is easier and faster. Given the frequency of builds, less “offending code” can have been committed, so it is easier to zero in on the bug and fix it.
- Last but not least, anyone who has done any professional coding will testify that while it helps to be able to take a 10 minute break once in a while, nothing kills the creativity of a coder more efficiently than having to wait for a 1 hour build. 
The impact to morale is intangible, but immense.

How exactly do I bring down the build time? There is no one-size-fits-all answer (there never is). The exact steps to bring down build and release time are a factor of many variables, including the technology stack of the product (Java, DotNet, PHP), the build and release technologies (batch files, Ant, Maven) and many others. For the Java, Maven, and JUnit combination, let us start by using Maven to create a simple Java application to demonstrate the case.

\MavenCommands.bat

ECHO OFF

REM =============================
REM Set the env. variables.
REM =============================
SET PATH=%PATH%;C:\ProgramFiles\apache-maven-3.0.3\bin;
SET JAVA_HOME=C:\ProgramFiles\Java\jdk1.7.0

REM =============================
REM Create a simple java application.
REM =============================
call mvn archetype:create ^
 -DarchetypeGroupId=org.apache.maven.archetypes ^
 -DgroupId=org.academy ^
 -DartifactId=app001

pause

If you run this batch file you will start with a standard Java application ready-made for you. The default application does not come with the latest JUnit, so you might want to change the Maven configuration to add it.

\pom.xml

[...]
<properties>
  <junit.version>4.10</junit.version>
</properties>
[...]
<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <version>${junit.version}</version>
  <scope>test</scope>
</dependency>

Now, go ahead and add a JUnit test class.

/app001/src/test/java/org/academy/AppTest.java

import static org.junit.Assert.assertTrue;

import org.academy.annotation.type.IntegrationTest;
import org.junit.Test;
import org.junit.experimental.categories.Category;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class AppTest {

    private final static Logger logger = LoggerFactory.getLogger(AppTest.class);

    @Test
    public void smallAndFastUnitTest() {
        logger.debug("Quick unit test. It is not expected to interact with DB etc.");
        assertTrue(true);
    }

    @Test
    @Category(IntegrationTest.class)
    public void longAndSlowIntegrationTest() {
        logger.debug("Time consuming integration test. It is expected to interact with DB etc.");
        assertTrue(true);
    }
}

As you might notice, there is an IntegrationTest.class marker. You will have to create this class as well. 
/app001/src/test/java/org/academy/annotation/type/IntegrationTest.java

public interface IntegrationTest {
    // Just a marker interface.
}

Creating the marker interface and annotating your test methods (or classes, if you choose to) is all that you have to do in your code. Now all that remains is to tell Maven to run the integration tests only in the integration-test phase. That means a developer can choose to run only the unit tests (the fast ones that are insulated from databases, queues etc.) most of the time, while the continuous integration server, e.g. Hudson (or the likes), will run both the unit tests and the integration tests (which will be slower, since they are expected to interact with databases etc.), and that can happen overnight. So, here is how you do it.

/pom.xml

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.12</version>
  <dependencies>
    <dependency>
      <groupId>org.apache.maven.surefire</groupId>
      <artifactId>surefire-junit47</artifactId>
      <version>2.12</version>
    </dependency>
  </dependencies>
  <configuration>
    <argLine>-XX:-UseSplitVerifier</argLine>
    <excludedGroups>org.academy.annotation.type.IntegrationTest</excludedGroups>
  </configuration>
</plugin>

This means that a developer can run all the unit tests with a one-liner:

mvn clean test

This will not run any test that is annotated as an integration test. For the integration tests, add the following.

/pom.xml

<plugin>
  <artifactId>maven-failsafe-plugin</artifactId>
  <version>2.12</version>
  <dependencies>
    <dependency>
      <groupId>org.apache.maven.surefire</groupId>
      <artifactId>surefire-junit47</artifactId>
      <version>2.12</version>
    </dependency>
  </dependencies>
  <configuration>
    <groups>org.academy.annotation.type.IntegrationTest</groups>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
      </goals>
      <configuration>
        <includes>
          <include>**/*.class</include>
        </includes>
      </configuration>
    </execution>
  </executions>
</plugin>

This means that Hudson, or the developer if he chooses to, can run all the tests, unit and integration, with a single command:

mvn clean verify

Of course, if you choose to go all the way, i.e. compile, run unit tests, package, run integration tests and deploy, you can do that with a single command as well:

mvn clean deploy

That’s it. You have taken one step towards faster builds and a more agile way of working. Happy coding.

Further reading
A version of this article, slightly edited, is also available at this link at Javalobby. Here is another article which covers a similar topic using the same technique.

Reference: Categorize tests to reduce build time. 
from our JCG partner Partho at the Tech for Enterprise blog....
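The @Category-based filtering configured above ultimately boils down to reflection over each test method's annotations. The following self-contained sketch illustrates the idea using hypothetical stand-ins for JUnit's Category annotation and the marker interface (JUnit's real runner is more involved, but the filtering decision is essentially this):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

// Stand-in for JUnit's @Category annotation (illustration only).
@Retention(RetentionPolicy.RUNTIME)
@interface Category {
    Class<?> value();
}

// Stand-in for the marker interface created above.
interface IntegrationTest {
}

// A class with one "unit" and one "integration" test method.
class SampleTests {
    public void smallAndFastUnitTest() {
    }

    @Category(IntegrationTest.class)
    public void longAndSlowIntegrationTest() {
    }
}

public class CategoryFilterDemo {

    // A method belongs to the integration group if its @Category value
    // is the marker interface (or a subtype of it).
    static boolean isIntegrationTest(Method method) {
        Category category = method.getAnnotation(Category.class);
        return category != null
                && IntegrationTest.class.isAssignableFrom(category.value());
    }

    public static void main(String[] args) throws Exception {
        for (Method m : SampleTests.class.getDeclaredMethods()) {
            System.out.println(m.getName() + " -> integration=" + isIntegrationTest(m));
        }
    }
}
```

A runner that excludes a group simply skips the methods for which this predicate is true; one that runs only a group does the opposite.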

JMX and Spring – Part 1

This is the first of three articles which will show how to empower your Spring applications with JMX support.

Maven Configuration

This is the Maven pom.xml to set up the code for this example:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>uk.co.jemos.experiments.jmx</groupId>
  <artifactId>jemos-jmx-experiments</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <name>jemos-jmx-experiments</name>
  <description>Jemos JMX Experiments</description>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>2.3.2</version>
        <configuration>
          <source>1.6</source>
          <target>1.6</target>
        </configuration>
      </plugin>
    </plugins>
  </build>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.8.2</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>1.2.16</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-context</artifactId>
      <version>3.0.5.RELEASE</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-core</artifactId>
      <version>3.0.5.RELEASE</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-jmx</artifactId>
      <version>2.0.8</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-test</artifactId>
      <version>3.0.5.RELEASE</version>
      <type>jar</type>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>

Spring configuration

The Spring configuration is pretty straightforward:

<?xml version="1.0" encoding="UTF-8"?>
<beans 
xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:context="http://www.springframework.org/schema/context"
  xmlns:util="http://www.springframework.org/schema/util"
  xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
    http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
    http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-3.0.xsd">

  <context:property-placeholder location="classpath:jemos-jmx.properties" />

  <bean id="rmiRegistry" class="org.springframework.remoting.rmi.RmiRegistryFactoryBean">
    <property name="port" value="${jemos.jmx.rmi.port}" />
  </bean>

  <bean id="jemosJmxServer" class="org.springframework.jmx.support.ConnectorServerFactoryBean"
    depends-on="rmiRegistry">
    <property name="objectName" value="connector:name=rmi" />
    <property name="serviceUrl"
      value="service:jmx:rmi://localhost/jndi/rmi://localhost:${jemos.jmx.rmi.port}/jemosJmxConnector" />
    <property name="environment">
      <!-- the following is only valid when the sun jmx implementation is used -->
      <map>
        <entry key="jmx.remote.x.password.file" value="${user.home}/.secure/jmxremote.password" />
        <entry key="jmx.remote.x.access.file" value="${user.home}/.secure/jmxremote.access" />
      </map>
    </property>
  </bean>

</beans>

This configuration, although simple, covers all that’s required for the following:
- Start up a JMX server from your Spring application context
- Expose access to the JMX server through a remote RMI URL
- Protect access to the JMX server through authentication and authorisation

A few things to note about the above configuration:
- You want to externalise some configuration information, such as the RMI registry port and the host where the application is running. Although I externalised the RMI registry port to a property file in the classpath, I left “localhost” as the host name. 
In a real production environment, especially when you want to scale your application horizontally, e.g. deploy it on different servers, the server part of the remote URL should also be externalised. Because we are exposing a remote RMI URL, we need to start an RMI registry if one is not already started; this happens by declaring the RmiRegistryFactoryBean. The port on which the registry is started must be the same as the one in the exposed URL. The above configuration does not enable annotation-based MBean support; such configuration will be the subject of my next article, in which I’ll show how to code a simple MBean to change the logging level of your Log4j-based application.

Protecting access to the JMX server through authentication and authorisation

In the above configuration you might have noticed the following part:

<property name="environment">
  <!-- the following is only valid when the sun jmx implementation is used -->
  <map>
    <entry key="jmx.remote.x.password.file" value="${user.home}/.secure/jmxremote.password" />
    <entry key="jmx.remote.x.access.file" value="${user.home}/.secure/jmxremote.access" />
  </map>
</property>

What the above snippet declares is the location of two files, one used for authorisation and one for authentication. I decided to put these files under ~/.secure, but the location is ultimately up to you. The content of these files is simple:

JMX access file

jemosAdmin readwrite

The above file contains the name of a user (jemosAdmin) and its role (readwrite). 
JMX password file

In the JMX password file you declare the user and its password:

jemosAdmin secure

Once this information is in place, you can start up the JMX server and then access it through either jconsole or jvisualvm (if you are using JDK6 or later). Once authenticated, the RMI connector is actually available as a bean, as are all Oracle’s native MBeans. In my next article I will show how to code a simple Logging MBean service which can be used at runtime to change the logging level of a package (and all its subpackages). This service brings the advantage of increased uptime and makes applications easier to troubleshoot. Continue to Part 2. Reference: JMX and Spring – Part 1 from our JCG partner Marco Tedone at Marco Tedone’s blog....
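With the server running, the same authenticated access that jconsole uses can also be exercised programmatically with the JDK's javax.management.remote API. Below is a minimal smoke-test sketch; it assumes the RMI port property resolves to 8888 and uses the jemosAdmin/secure credentials shown above, and the class name is made up for illustration:

```java
import java.util.HashMap;
import java.util.Map;

import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxSmokeTest {

    public static void main(String[] args) throws Exception {
        // Same service URL the ConnectorServerFactoryBean exposes,
        // with ${jemos.jmx.rmi.port} assumed to resolve to 8888.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi://localhost/jndi/rmi://localhost:8888/jemosJmxConnector");

        // Credentials as declared in jmxremote.password / jmxremote.access.
        Map<String, Object> env = new HashMap<String, Object>();
        env.put(JMXConnector.CREDENTIALS, new String[] { "jemosAdmin", "secure" });

        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            System.out.println("Connected; MBean count: " + connection.getMBeanCount());
        } finally {
            connector.close();
        }
    }
}
```

Run it while the Spring context is up; without the server (or with wrong credentials) the connect call will fail with a SecurityException or IOException.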

JMX and Spring – Part 2

This post continues from Part 1 of the tutorial. Hi, in my previous article I explained how to setup a JMX server through Spring and how to protect access to it through authentication and authorisation. In this article I will show how to implement a simple MBean which allows users to change the level of a Log4j logger at runtime without the need to restart the application. The Spring configuration has changed only slightly from my previous article to facilitate testing; the substance remains the same though. The Spring configuration <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context" xmlns:util="http://www.springframework.org/schema/util" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-3.0.xsd"><bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"> <property name="locations"> <list> <value>classpath:jemos-jmx.properties</value> <value>file:///${user.home}/.secure/jmxconnector-credentials.properties</value> </list> </property> </bean><!-- In order to automatically detect MBeans we need to recognise Spring beans --> <context:component-scan base-package="uk.co.jemos.experiments.jmx.mbeans" /><!-- This causes MBeans annotations to be recognised and MBeans to be registered with the JMX server --> <context:mbean-export default-domain="jemos.mbeans"/><bean id="jemosJmxServer" class="org.springframework.jmx.support.ConnectorServerFactoryBean" depends-on="rmiRegistry"> <property name="objectName" value="connector:name=rmi" /> <property name="serviceUrl" 
value="service:jmx:rmi://localhost/jndi/rmi://localhost:${jemos.jmx.rmi.port}/jemosJmxConnector" />
    <property name="environment">
      <!-- the following is only valid when the sun jmx implementation is used -->
      <map>
        <entry key="jmx.remote.x.password.file" value="${user.home}/.secure/jmxremote.password" />
        <entry key="jmx.remote.x.access.file" value="${user.home}/.secure/jmxremote.access" />
      </map>
    </property>
  </bean>

  <bean id="rmiRegistry" class="org.springframework.remoting.rmi.RmiRegistryFactoryBean">
    <property name="port" value="${jemos.jmx.rmi.port}" />
  </bean>

  <!-- Used for testing -->
  <bean id="clientConnector" class="org.springframework.jmx.support.MBeanServerConnectionFactoryBean"
    depends-on="jemosJmxServer">
    <property name="serviceUrl"
      value="service:jmx:rmi://localhost/jndi/rmi://localhost:${jemos.jmx.rmi.port}/jemosJmxConnector" />
    <property name="environment">
      <map>
        <entry key="jmx.remote.credentials">
          <bean class="org.springframework.util.StringUtils" factory-method="commaDelimitedListToStringArray">
            <constructor-arg value="${jmx.username},${jmx.password}" />
          </bean>
        </entry>
      </map>
    </property>
  </bean>
</beans>

The only part of the configuration which is of interest to us is the scanning of Spring components and the declaration of the MBean exporter (which also causes MBean annotations to be recognised and Spring beans to be registered with the JMX server as MBeans).

The LoggerConfigurator MBean

package uk.co.jemos.experiments.jmx.mbeans;

import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.springframework.jmx.export.annotation.ManagedOperation;
import org.springframework.jmx.export.annotation.ManagedOperationParameter;
import org.springframework.jmx.export.annotation.ManagedOperationParameters;
import org.springframework.jmx.export.annotation.ManagedResource;
import org.springframework.stereotype.Component;

/**
 * MBean which allows clients to change or retrieve the logging level for a
 * Log4j Logger at runtime. 
 *
 * @author mtedone
 *
 */
@Component
@ManagedResource(objectName = LoggerConfigurator.MBEAN_NAME, //
description = "Allows clients to set the Log4j Logger level at runtime")
public class LoggerConfigurator {

    public static final String MBEAN_NAME = "jemos.mbeans:type=config,name=LoggingConfiguration";

    @ManagedOperation(description = "Returns the Logger LEVEL for the given logger name")
    @ManagedOperationParameters({ @ManagedOperationParameter(description = "The Logger Name", name = "loggerName") })
    public String getLoggerLevel(String loggerName) {

        Logger logger = Logger.getLogger(loggerName);
        Level loggerLevel = logger.getLevel();

        return loggerLevel == null ? "The logger " + loggerName
                + " has no level" : loggerLevel.toString();

    }

    @ManagedOperation(description = "Set Logger Level")
    @ManagedOperationParameters({
            @ManagedOperationParameter(description = "The Logger Name", name = "loggerName"),
            @ManagedOperationParameter(description = "The Level to which the Logger must be set", name = "loggerLevel") })
    public void setLoggerLevel(String loggerName, String loggerLevel) {

        Logger thisLogger = Logger.getLogger(this.getClass());
        thisLogger.setLevel(Level.INFO);

        Logger logger = Logger.getLogger(loggerName);

        logger.setLevel(Level.toLevel(loggerLevel, Level.INFO));

        thisLogger.info("Set logger " + loggerName + " to level "
                + logger.getLevel());

    }
}

Apart from the Spring JMX annotations, this is a normal Spring bean. With those annotations, however, we have made an MBean of it and the bean will be registered with the JMX server at startup. The @ManagedOperation and @ManagedOperationParameters annotations determine what gets displayed in jconsole. One could omit these annotations, but the parameter names would then appear as something like p1 and p2, giving no information about the meaning of each parameter. 
Invoking setLoggerLevel with, say, the values foo.bar.baz and INFO would result in the following output:

...snip

2011-08-11 21:33:36 LoggerConfigurator [INFO] Set logger foo.bar.baz to level INFO

In my next and last article for this series, I will show how to set up an MBean which alerts a listener when the HEAP memory threshold has been reached, as explained in one of my previous articles. Continue to Part 3. Reference: JMX and Spring – Part 2 from our JCG partner Marco Tedone at Marco Tedone’s blog....
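Given an MBeanServerConnection (for instance one obtained as in Part 1, or from the clientConnector bean above), the same operation can also be invoked without jconsole. The following is a minimal sketch using only JDK classes; the helper class and method names are made up for illustration:

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class LoggerLevelClient {

    // Invokes the setLoggerLevel operation exposed by LoggerConfigurator
    // over an already-established JMX connection.
    static void setLevel(MBeanServerConnection connection,
            String loggerName, String level) throws Exception {
        // Must match LoggerConfigurator.MBEAN_NAME exactly.
        ObjectName name = new ObjectName(
                "jemos.mbeans:type=config,name=LoggingConfiguration");
        connection.invoke(name, "setLoggerLevel",
                new Object[] { loggerName, level },
                new String[] { "java.lang.String", "java.lang.String" });
    }
}
```

For example, setLevel(connection, "foo.bar.baz", "INFO") would trigger the same log line shown above.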

JMX and Spring – Part 3

This article is the last one of this series. Take a look at Part 1 and Part 2. In this last article I’ll show how to use the native JMX support within the JDK to implement a notification mechanism which alerts a listener when the HEAP memory is above a certain threshold. As discussed in my previous article, this approach is ideal because it is push rather than pull, it is not intrusive, and it places minimal computing demand on your application. These are the key components of the solution illustrated in this article:
- MemoryWarningService: This component acts as a listener and registers itself with the Memory MBean to receive notifications. It is configurable with a threshold in the form of a percentage between 0 and 1 (where 1 is 100%)
- MemoryThreadDumper: This component is invoked when the MemoryWarningService is notified that the HEAP usage is above the threshold, and its responsibility is to write a thread dump to a file
- MemoryWarningServiceConfigurator: This component is an MBean and exposes a method to change the threshold of the MemoryWarningService.

The solution also provides a MemoryHeapFiller class used to fill up the HEAP while testing the application and a MemTest class to bootstrap the Spring environment. While the application is running (play with the MemoryHeapFiller settings), you can fire up JConsole at the URL service:jmx:rmi://localhost/jndi/rmi://localhost:8888/jemosJmxConnector, connect as jemosAdmin / secure and change the threshold to various values. The code is not meant for production: it is not robust, numerous comments are missing, and the file name for the thread dump is hard-coded; it represents, however, a good starting point. The code is attached below. You will need Maven to build it. Download Jemos-jmx-experiments-0.0.1-SNAPSHOT-project I tried a scenario with an initial threshold of 0.5, then changed it to 0.3 and then to 0.8. 
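The real MemoryHeapFiller ships in the downloadable project; conceptually it just retains memory in a loop, producing the “Adding data...” lines visible in the output below. A hypothetical minimal stand-in might look like this (class and method names are my own, not the project's):

```java
import java.util.ArrayList;
import java.util.List;

public class HeapFillerSketch {

    // Keeps every allocated chunk reachable so HEAP usage only grows,
    // eventually crossing whatever threshold the MemoryWarningService set.
    static List<byte[]> fill(int chunks, int chunkBytes) throws InterruptedException {
        List<byte[]> hoard = new ArrayList<byte[]>();
        for (int i = 0; i < chunks; i++) {
            hoard.add(new byte[chunkBytes]);
            System.out.println("Adding data...");
            // Slow down so thresholds can be changed from JConsole meanwhile.
            Thread.sleep(100);
        }
        return hoard;
    }

    public static void main(String[] args) throws InterruptedException {
        fill(200, 1024 * 1024); // retain ~200 MB, 1 MB at a time
    }
}
```

Run with a small heap (e.g. -Xmx128m) to cross the threshold quickly and eventually provoke the OutOfMemoryError seen at the end of the log.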
The results are shown below: 2011-08-15 21:53:21 ClassPathXmlApplicationContext [INFO] Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@a4a63d8: startup date [Mon Aug 15 21:53:21 BST 2011]; root of context hierarchy 2011-08-15 21:53:21 XmlBeanDefinitionReader [INFO] Loading XML bean definitions from class path resource [jemos-jmx-appCtx.xml] 2011-08-15 21:53:21 PropertyPlaceholderConfigurer [INFO] Loading properties file from class path resource [jemos-jmx.properties] 2011-08-15 21:53:21 PropertyPlaceholderConfigurer [INFO] Loading properties file from URL [file:/C:/Users/mtedone/.secure/jmxconnector-credentials.properties] 2011-08-15 21:53:21 ThreadPoolTaskScheduler [INFO] Initializing ExecutorService 'myScheduler' 2011-08-15 21:53:21 ClassPathXmlApplicationContext [INFO] Bean 'myScheduler' of type [class org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2011-08-15 21:53:21 DefaultListableBeanFactory [INFO] Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@30296f76: defining beans [propertyConfigurer,loggerConfigurator,memoryWarningServiceConfigurator,memoryHeapFiller,memoryThreadDumper,org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,mbeanExporter,jemosJmxServer,rmiRegistry,clientConnector,memoryMxBean,memoryWarningService,org.springframework.scheduling.annotation.internalAsyncAnnotationProcessor,org.springframework.scheduling.annotation.internalScheduledAnnotationProcessor,myScheduler]; root of factory hierarchy 2011-08-15 21:53:21 AnnotationMBeanExporter [INFO] Registering beans for JMX exposure on startup 
2011-08-15 21:53:21 RmiRegistryFactoryBean [INFO] Looking for RMI registry at port '8888' 2011-08-15 21:53:23 RmiRegistryFactoryBean [INFO] Could not detect RMI registry - creating new one 2011-08-15 21:53:23 ConnectorServerFactoryBean [INFO] JMX connector server started: javax.management.remote.rmi.RMIConnectorServer@4355d3a3 2011-08-15 21:53:23 AnnotationMBeanExporter [INFO] Bean with name 'jemosJmxServer' has been autodetected for JMX exposure 2011-08-15 21:53:23 AnnotationMBeanExporter [INFO] Bean with name 'loggerConfigurator' has been autodetected for JMX exposure 2011-08-15 21:53:23 AnnotationMBeanExporter [INFO] Bean with name 'memoryWarningServiceConfigurator' has been autodetected for JMX exposure 2011-08-15 21:53:23 AnnotationMBeanExporter [INFO] Located managed bean 'loggerConfigurator': registering with JMX server as MBean [jemos.mbeans:type=config,name=LoggingConfiguration] 2011-08-15 21:53:23 AnnotationMBeanExporter [INFO] Located MBean 'jemosJmxServer': registering with JMX server as MBean [jemos.mbeans:name=jemosJmxServer,type=RMIConnectorServer] 2011-08-15 21:53:23 AnnotationMBeanExporter [INFO] Located managed bean 'memoryWarningServiceConfigurator': registering with JMX server as MBean [jemos.mbeans:type=config,name=MemoryWarningServiceConfiguration] 2011-08-15 21:53:23 MemoryWarningService [INFO] Percentage is: 0.5 2011-08-15 21:53:23 MemoryWarningService [INFO] Listener added to JMX bean Adding data... Adding data... Adding data... Adding data... Adding data... Adding data... Adding data... 2011-08-15 21:53:37 MemoryWarningService [INFO] Percentage is: 0.3 2011-08-15 21:53:37 MemoryWarningServiceConfigurator [INFO] Memory threshold set to 0.3 Adding data... 2011-08-15 21:53:38 MemoryWarningService [WARN] Memory usage low!!! 2011-08-15 21:53:38 MemoryWarningService [WARN] percentageUsed = 0.3815679398794023 2011-08-15 21:53:38 MemoryThreadDumper [WARN] Stacks dumped to: C:/tmp/stacks.dump Adding data... Adding data... Adding data... 
2011-08-15 21:53:45 MemoryWarningService [INFO] Percentage is: 0.8 2011-08-15 21:53:45 MemoryWarningServiceConfigurator [INFO] Memory threshold set to 0.8 Adding data... Adding data... Adding data... Adding data... Adding data... Adding data... Adding data... 2011-08-15 21:54:01 MemoryWarningService [WARN] Memory usage low!!! 2011-08-15 21:54:01 MemoryWarningService [WARN] percentageUsed = 0.8383333266727508 2011-08-15 21:54:02 MemoryThreadDumper [WARN] Stacks dumped to: C:/tmp/stacks.dump Adding data... Adding data... Adding data... Exception in thread "JMX server connection timeout 24" java.lang.OutOfMemoryError: Java heap spaceThe Memory Warning Service package uk.co.jemos.experiments.jmx;import java.lang.management.ManagementFactory; import java.lang.management.MemoryNotificationInfo; import java.lang.management.MemoryPoolMXBean; import java.lang.management.MemoryType;import javax.annotation.PostConstruct; import javax.management.Notification; import javax.management.NotificationEmitter; import javax.management.NotificationListener;import org.springframework.beans.factory.annotation.Autowired;/** * A component which sends notifications when the HEAP memory is above a certain * threshold. 
* * @author mtedone * */ public class MemoryWarningService implements NotificationListener {/** This bean's name */ public static final String MBEAN_NAME = "jemos.mbeans:type=monitoring,name=MemoryWarningService";/** The application logger */ private static final org.apache.log4j.Logger LOG = org.apache.log4j.Logger .getLogger(MemoryWarningService.class);@Autowired private NotificationEmitter memoryMxBean;@Autowired private MemoryThreadDumper threadDumper;/** A pool of Memory MX Beans specialised in HEAP management */ private static final MemoryPoolMXBean tenuredGenPool = findTenuredGenPool();/** * {@inheritDoc} */ @Override public void handleNotification(Notification notification, Object handback) {if (notification.getType().equals( MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED)) { long maxMemory = tenuredGenPool.getUsage().getMax(); long usedMemory = tenuredGenPool.getUsage().getUsed(); LOG.warn("Memory usage low!!!"); double percentageUsed = (double) usedMemory / maxMemory; LOG.warn("percentageUsed = " + percentageUsed); threadDumper.dumpStacks();} else { LOG.info("Other notification received..." + notification.getMessage()); }}/** * It sets the threshold percentage. * * @param percentage */ public void setPercentageUsageThreshold(double percentage) { if (percentage <= 0.0 || percentage > 1.0) { throw new IllegalArgumentException("Percentage not in range"); } else { LOG.info("Percentage is: " + percentage); } long maxMemory = tenuredGenPool.getUsage().getMax(); long warningThreshold = (long) (maxMemory * percentage); tenuredGenPool.setUsageThreshold(warningThreshold); }@PostConstruct public void completeSetup() { memoryMxBean.addNotificationListener(this, null, null); LOG.info("Listener added to JMX bean"); }/** * Tenured Space Pool can be determined by it being of type HEAP and by it * being possible to set the usage threshold. 
 */
    private static MemoryPoolMXBean findTenuredGenPool() {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // I don't know whether this approach is better, or whether
            // we should rather check for the pool name "Tenured Gen"?
            if (pool.getType() == MemoryType.HEAP
                    && pool.isUsageThresholdSupported()) {
                return pool;
            }
        }
        throw new AssertionError("Could not find tenured space");
    }
}

The Memory Thread Dumper

package uk.co.jemos.experiments.jmx;

import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.Writer;
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;

import org.apache.commons.io.IOUtils;
import org.springframework.stereotype.Component;

/**
 * This component dumps the thread stacks to the file system.
 *
 * @author mtedone
 */
@Component
public class MemoryThreadDumper {

    /** The application logger */
    private static final org.apache.log4j.Logger LOG = org.apache.log4j.Logger
            .getLogger(MemoryThreadDumper.class);

    /** Dumps the thread stacks to a file. */
    public void dumpStacks() {

        // Hard-coded: this needs to be changed to a configurable property
        String stackFileName = "C:/tmp/stacks.dump";

        ThreadMXBean mxBean = ManagementFactory.getThreadMXBean();
        ThreadInfo[] threadInfos = mxBean.getThreadInfo(
                mxBean.getAllThreadIds(), 0);
        Map<Long, ThreadInfo> threadInfoMap = new HashMap<Long, ThreadInfo>();
        for (ThreadInfo threadInfo : threadInfos) {
            threadInfoMap.put(threadInfo.getThreadId(), threadInfo);
        }

        File dumpFile = new File(stackFileName);
        BufferedWriter writer = null;
        try {
            writer = new BufferedWriter(new FileWriter(dumpFile));
            this.dumpTraces(mxBean, threadInfoMap, writer);
            LOG.warn("Stacks dumped to: " + stackFileName);
        } catch (IOException e) {
            throw new IllegalStateException(
                    "An exception occurred while writing the thread dump", e);
        } finally {
            IOUtils.closeQuietly(writer);
        }
    }

    private void dumpTraces(ThreadMXBean mxBean,
            Map<Long, ThreadInfo> threadInfoMap, Writer writer)
            throws IOException {
        Map<Thread, StackTraceElement[]> stacks = Thread.getAllStackTraces();
        writer.write("Dump of " + stacks.size() + " threads at "
                + new SimpleDateFormat("yyyy/MM/dd HH:mm:ss z")
                        .format(new Date(System.currentTimeMillis())) + "\n\n");
        for (Map.Entry<Thread, StackTraceElement[]> entry : stacks.entrySet()) {
            Thread thread = entry.getKey();
            writer.write("\"" + thread.getName() + "\" prio="
                    + thread.getPriority() + " tid=" + thread.getId() + " "
                    + thread.getState() + " "
                    + (thread.isDaemon() ? "daemon" : "worker") + "\n");
            ThreadInfo threadInfo = threadInfoMap.get(thread.getId());
            if (threadInfo != null) {
                writer.write("    native=" + threadInfo.isInNative()
                        + ", suspended=" + threadInfo.isSuspended()
                        + ", block=" + threadInfo.getBlockedCount()
                        + ", wait=" + threadInfo.getWaitedCount() + "\n");
                writer.write("    lock=" + threadInfo.getLockName()
                        + " owned by " + threadInfo.getLockOwnerName() + " ("
                        + threadInfo.getLockOwnerId() + "), cpu="
                        + mxBean.getThreadCpuTime(threadInfo.getThreadId())
                        / 1000000L
                        + ", user="
                        + mxBean.getThreadUserTime(threadInfo.getThreadId())
                        / 1000000L + "\n");
            }
            for (StackTraceElement element : entry.getValue()) {
                writer.write("    ");
                writer.write(element.toString());
                writer.write("\n");
            }
            writer.write("\n");
        }
    }
}

The Memory Service Configuration MBean

package uk.co.jemos.experiments.jmx.mbeans;

import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.jmx.export.annotation.ManagedOperation;
import org.springframework.jmx.export.annotation.ManagedOperationParameter;
import org.springframework.jmx.export.annotation.ManagedOperationParameters;
import org.springframework.jmx.export.annotation.ManagedResource;
import org.springframework.stereotype.Component;

import uk.co.jemos.experiments.jmx.MemoryWarningService;

@Component
@ManagedResource(objectName = MemoryWarningServiceConfigurator.MBEAN_NAME, //
        description = "Allows clients to set the memory threshold")
public class MemoryWarningServiceConfigurator implements
        ApplicationContextAware {

    /** The application logger */
    private static final org.apache.log4j.Logger LOG = org.apache.log4j.Logger
            .getLogger(MemoryWarningServiceConfigurator.class);

    public static final String MBEAN_NAME = "jemos.mbeans:type=config,name=MemoryWarningServiceConfiguration";

    private ApplicationContext ctx;

    @ManagedOperation(description = "Sets the memory threshold for the memory warning system")
    @ManagedOperationParameters({ @ManagedOperationParameter(description = "The memory threshold", name = "memoryThreshold") })
    public void setMemoryThreshold(double memoryThreshold) {
        MemoryWarningService memoryWarningService = (MemoryWarningService) ctx
                .getBean("memoryWarningService");
        memoryWarningService.setPercentageUsageThreshold(memoryThreshold);
        LOG.info("Memory threshold set to " + memoryThreshold);
    }

    @Override
    public void setApplicationContext(ApplicationContext applicationContext)
            throws BeansException {
        ctx = applicationContext;
    }
}

The Spring configuration

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:util="http://www.springframework.org/schema/util"
    xmlns:task="http://www.springframework.org/schema/task"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
        http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task-3.0.xsd
        http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-3.0.xsd">

    <bean id="propertyConfigurer"
        class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="locations">
            <list>
                <value>classpath:jemos-jmx.properties</value>
                <value>file:///${user.home}/.secure/jmxconnector-credentials.properties</value>
            </list>
        </property>
    </bean>

    <context:component-scan base-package="uk.co.jemos.experiments.jmx" />

    <context:mbean-export default-domain="jemos.mbeans" />

    <bean id="jemosJmxServer"
        class="org.springframework.jmx.support.ConnectorServerFactoryBean"
        depends-on="rmiRegistry">
        <property name="objectName" value="connector:name=rmi" />
        <property name="serviceUrl"
            value="service:jmx:rmi://localhost/jndi/rmi://localhost:${jemos.jmx.rmi.port}/jemosJmxConnector" />
        <property name="environment">
            <!-- the following is only valid when the sun jmx implementation is used -->
            <map>
                <entry key="jmx.remote.x.password.file"
                    value="${user.home}/.secure/jmxremote.password" />
                <entry key="jmx.remote.x.access.file"
                    value="${user.home}/.secure/jmxremote.access" />
            </map>
        </property>
    </bean>

    <bean id="rmiRegistry"
        class="org.springframework.remoting.rmi.RmiRegistryFactoryBean">
        <property name="port" value="${jemos.jmx.rmi.port}" />
    </bean>

    <bean id="clientConnector"
        class="org.springframework.jmx.support.MBeanServerConnectionFactoryBean"
        depends-on="jemosJmxServer">
        <property name="serviceUrl"
            value="service:jmx:rmi://localhost/jndi/rmi://localhost:${jemos.jmx.rmi.port}/jemosJmxConnector" />
        <property name="environment">
            <map>
                <entry key="jmx.remote.credentials">
                    <bean class="org.springframework.util.StringUtils"
                        factory-method="commaDelimitedListToStringArray">
                        <constructor-arg value="${jmx.username},${jmx.password}" />
                    </bean>
                </entry>
            </map>
        </property>
    </bean>

    <bean id="memoryMxBean" class="java.lang.management.ManagementFactory"
        factory-method="getMemoryMXBean" />

    <bean id="memoryWarningService"
        class="uk.co.jemos.experiments.jmx.MemoryWarningService">
        <property name="percentageUsageThreshold" value="0.5" />
    </bean>

    <task:annotation-driven scheduler="myScheduler" />

    <task:scheduler id="myScheduler" pool-size="10" />

</beans>

Reference: JMX and Spring – Part 3 from our JCG partner Marco Tedone at Marco Tedone's blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd