


JSF Component Libraries – Quality is more than zero bugs

It has been a while since I last looked at the quality of the three major JSF component libraries. In December 2009 I started a comparison of the overall software quality of RichFaces, PrimeFaces and ICEfaces. Things have changed since then, and I have wanted to re-evaluate and update this for some time now. The tools I used back in 2009 are still valid, but the tool suite was a bit tricky to set up and I was simply missing the time to do it. Thanks to the recent need for a FAMIX 2.1 exporter I was looking again at inFusion. It did the trick for the GlassFish City posts (First, Second). But beyond this it is far more: it is a tool to help with the quality assessment of your systems. It focuses on architecture and design quality and allows for quality assurance of multi-million-LOC systems. Before I give you an idea of what inFusion can do for you (implicitly, by analyzing the candidates; I don't do advertising :-D) I have to thank Dr. Radu Marinescu and Dr. Adrian Trifu for providing a fully functional test and evaluation license of their product to me. Without this I would not be able to show you the great software cities or blog about the quality of open source projects in general like today! Please look at the resources underneath this post for further links about inFusion and the principles behind it. If you would like me to do a complete product post, let me know in the comments!

Focus of this Article

PrimeFaces, RichFaces and ICEfaces are the three most widely used JSF component libraries. Looking at the communities using them, I always get the feeling that there is a kind of competition for the one and only. This is absolutely driven by the PrimeFaces lead. You can think about what he is doing and like it or not. With this post I am not trying to blame anybody for politically correct behavior, but trying to bring this back to a more objective view of the different projects by looking at the delivered quality.
Introduction

Before we get to the results I need to introduce you to some basics. If you feel you have seen enough of this before and everything down below is simple, feel free to proceed to the individual results. inFusion assesses software quality in a way that is built around, but not centered on, metrics. It introduces a special kind of quality model (QM) which expresses the quality of a software system in terms of some of its measurable characteristics. Quality itself can mean a couple of different things (external, process, internal quality). inFusion defines the notion of quality as "internal quality", in other words the quality of the system's architecture and design. The inFusion QM defines two decompositional layers: a layer of "quality attributes" and a layer of "design properties". The higher-level overview contains a set of five "design properties" which is built upon a couple of well-known "design principles" (e.g. the DRY Principle and the Law of Demeter). With these principles in mind, inFusion measures deviations from most of these principles and design rules. By also taking the "bad smells" into account, these deviations are quantified. All this, together with the right mapping (which can be looked up in inFusion itself or in the publications mentioned below), yields a "Quality Deficit Index" (QDI). The QDI is a positive, upwards unbounded value, which is a measure of "badness" of the analyzed system's design quality relative to the overall size of the system. Besides those high-level measures, inFusion also presents visualizations like coupling, encapsulation and design flaws on different levels (package, inheritance, class and modules). I also like the metrics pyramid.
It somehow answers the question "How does my project compare to others?". It generates a pyramid showing key metrics for your project along with comparisons to industry-standard ranges for those numbers. It is separated into three different categories (inheritance, size and communication).

Overview Pyramid

The numbers indicate the ratios; the colors indicate where the ratios fit into the industry-standard ranges (derived from numerous open source projects). Each ratio is either green (close to the average range), blue (close to the low range), or red (close to the high range). The generated numbers serve a couple of purposes. First, they allow you to compare your code base to others along several dimensions. Second, these numbers indicate places where you might want to expend effort to improve code hygiene and design. However, you must understand these numbers in context.

PrimeFaces (QDI: 30.8)

Design Flaws on PrimeFaces

Founded in 2009 and having a growing user base; head of development is Çağatay Çivici. The following analysis was run on the latest development trunk. The total number of lines of code in the system is 44,123 (including comments and whitespace). The Quality Deficit Index for PrimeFaces is 30.8. inFusion detected 12 different design flaws. The most impactful ones are the 24 Data Classes and the 23 Refused Parent Bequest classes, followed by three God Classes. There are quite a few duplication flaws, but no cyclic dependencies. Class hierarchies tend to be tall and wide (i.e. inheritance trees tend to have many depth levels and base classes with many directly derived subclasses). Classes tend to contain an average number of methods and be organized in rather fine-grained packages (i.e. few classes per package). Methods tend to be rather long, have an average logical complexity, and call many methods (high coupling intensity) from few other classes (low coupling dispersion).

Metrics Pyramid for PrimeFaces

Given the fact that this is a component library, the NDD (number of direct descendants) and HIT (height of inheritance tree) might be acceptable. Complex inheritance is something that makes understanding and predicting behavior more difficult. Deeper trees constitute greater design complexity, since more methods and classes are involved, but enhance the potential reuse of inherited methods. NOM refers to the number of methods. This is a simple metric showing the complexity of a class in terms of responsibilities, but not in terms of the size of the methods.

RichFaces (QDI: 9.1)

Design Flaws on RichFaces

RichFaces originated from Ajax4jsf in late 2005. It is the widely used component library on JBoss. The analysis used the latest development trunk and only includes the core and components parts. The total number of lines of code in the system is 134,037 (including comments and whitespace). The Quality Deficit Index for RichFaces is 9.1. Class hierarchies tend to be tall and of average width (i.e. inheritance trees tend to have many depth levels and base classes with several directly derived subclasses). Classes tend to contain an average number of methods and are organized in rather fine-grained packages (i.e. few classes per package). Methods tend to be average in length, have an average logical complexity, and call many methods (high coupling intensity) from few other classes (low coupling dispersion).

Metrics Pyramid for RichFaces

RichFaces is doing a better job with hierarchies in general. Only the inheritance tree height is close to the high range. The NOM for communication classes is also close to high. The rest is within the defined ranges, which actually leads to this good QDI.
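For readers unfamiliar with the flaw names used in these reports: a "Data Class" is a class that exposes state but carries no behavior, so the logic that belongs to it ends up duplicated across its callers. A minimal, hypothetical Java illustration (not taken from any of the three libraries; the names are made up):

```java
// A "Data Class": all state, no behavior. Callers end up implementing
// the logic that belongs here, scattering it across the code base.
class Money {
    private long cents;
    private String currency;

    public long getCents() { return cents; }
    public void setCents(long cents) { this.cents = cents; }
    public String getCurrency() { return currency; }
    public void setCurrency(String currency) { this.currency = currency; }
}

// A behavior-rich alternative: the logic lives with the data.
final class BetterMoney {
    private final long cents;
    private final String currency;

    BetterMoney(long cents, String currency) {
        this.cents = cents;
        this.currency = currency;
    }

    // Addition is defined once, here, instead of in every caller.
    BetterMoney add(BetterMoney other) {
        if (!currency.equals(other.currency)) {
            throw new IllegalArgumentException("currency mismatch");
        }
        return new BetterMoney(cents + other.cents, currency);
    }

    long cents() { return cents; }
}
```

Tools like inFusion flag the first shape because experience shows the missing behavior reappears as duplicated logic around the class.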
ICEfaces (QDI: 16.6)

Design Flaws on ICEfaces

ICEfaces has been around since … The analysis was done against the 3.1.0 tag and includes the core, push and the components. The total number of lines of code in the system is 153,843 (including comments and whitespace). The Quality Deficit Index for ICEfaces is 16.6. inFusion detected 16 different design flaws, with 35 Data Classes, 13 God Classes and 20 SAP Breakers, followed by 21 Refused Parent Bequest classes and 35 Cyclic Dependencies. We have a fair amount of duplication in there, too. Class hierarchies tend to be tall and of average width (i.e. inheritance trees tend to have many depth levels and base classes with several directly derived subclasses). Classes tend to contain an average number of methods and be organized in rather fine-grained packages (i.e. few classes per package). Methods tend to be rather long, have an average logical complexity, and call many methods (high coupling intensity) from few other classes (low coupling dispersion).

Metrics Pyramid for ICEfaces

As expected, we also find a close-to-high inheritance tree height. Besides that, only the number of methods is something to worry about.

Interpretation

This analysis is different from the one I did a few years back. I skipped looking at all the obvious stuff (e.g. Checkstyle, FindBugs) because everybody runs a different approach here, and to me this simply isn't a comparable base for system quality in general. Before we draw the conclusion, let me first stress that the results are no indication of whether you should use any of the candidates or not. The system's design quality doesn't influence the quality of the code you produce using them. Nor should it be an indicator of whether the candidates are stable or bug-free. It simply focuses on the issues the developers building the products might face. In the long run this might also have an impact on you as a user, because design problems are expensive, frequent and unavoidable.
Having a lot of quality defects in a code base can reduce the number of new features a team is able to deliver over time, or make the time needed for fixing bugs rise significantly. Combined with a small team, this might ultimately lead to the end of a product. All three candidates share the same problems in terms of inheritance. The reason is that they are all frameworks which provide a large set of features to their clients. Taking the size of the candidates into account, PrimeFaces seems to have the biggest design flaws at the moment of the analysis. RichFaces is the leader in terms of quality, far ahead of the other two. This is what I expected to see from a Red Hat community-driven project; another indicator that working software communities are vital, skilled and kicking! ICEfaces is the only project with cyclic dependencies and an unusual amount of duplicated code, so they might end up having to fix the same error a couple of times. I don't have a prize to give away here, but I would like to send my congratulations to the RichFaces team for a high-quality product! Keep up the good work! Here is your RichFaces-City (core & components). The green area is the old org.ajax4jsf.* bungalow :)

RichFaces-City

Resources:
inFusion Product Page
Object-Oriented Metrics in Practice (Springer, 2006)
iPlasma: An Integrated Platform for Quality Assessment of Object-Oriented Design (PDF)
Pragmatic Design Quality Assessment (Slideshare presentation)

Reference: JSF Component Libraries – Quality is more than zero bugs from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog.

Keep The Code Clean: WatchDog & SpotTheBug Approach

Before discussing the 'WatchDog & SpotTheBug approach', let me give a brief context on why it is needed. Three months back I was asked to write the core infrastructure code for our new application, which uses all the latest and greatest technologies. I wrote the infrastructure code and implemented two use cases to demonstrate which logic should go into which layer, and the code looked good (at least to me :-)). Then I moved on to my main project, and I heard that the project I designed (from now on I will refer to it as ProjectA) was going well. Three months later, last week, one of the developers of ProjectA came to me for help resolving a JAXB marshalling issue. I imported the latest code into Eclipse, started looking into the issue, and was literally shocked by the messy code. First I resolved that issue, then I started looking at the whole code base, and I was speechless. How did the code become such a mess in this short span of time? It is just three months.

- There are date-formatting methods in almost every service class (copy & paste with different names).
- There are domain classes with 58 String properties and setters/getters. The Customer class contains homeAddressLine1, homeAddressLine2, homeCity.., officeAddrLine1, officeAddrLine2, officeCity… There is no Address class.
- In some classes XML-to-Java marshalling is done using JAXB, in some other classes using XStream, and in some other places by constructing the XML string manually, even though there is a core utilities module with lots of XML marshalling utility methods.
- In some classes the SLF4J Logger is used and in some places the Log4J Logger is being used.

…and the list goes on. So what just happened? Where is the problem? We started this project by pledging to keep the code clean and highly maintainable/enhanceable. But now it is in the worst possible state.
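The Customer example above is the classic "missing abstraction" smell: the repeated homeAddress*/officeAddress* fields are crying out for an Address value object. A sketch of what the refactoring might look like (the field names follow the post; everything else here is hypothetical):

```java
// Extracted value object: one definition of what an address is.
class Address {
    private final String line1;
    private final String line2;
    private final String city;

    Address(String line1, String line2, String city) {
        this.line1 = line1;
        this.line2 = line2;
        this.city = city;
    }

    String getLine1() { return line1; }
    String getLine2() { return line2; }
    String getCity()  { return city; }
}

// Customer shrinks from dozens of String properties to two fields,
// and adding a third kind of address later is a one-line change.
class Customer {
    private Address homeAddress;
    private Address officeAddress;

    void setHomeAddress(Address a)   { this.homeAddress = a; }
    void setOfficeAddress(Address a) { this.officeAddress = a; }
    Address getHomeAddress()   { return homeAddress; }
    Address getOfficeAddress() { return officeAddress; }
}
```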
Somehow it is understandable if the code is legacy code and messy, because today's latest way of doing things becomes tomorrow's legacy and bad approach: externalizing the application configuration into XML was the way to go some time back, and now it has become XML hell next to the shiny new annotations. I am pretty sure that in a couple of years we will see 'Get Rid of Annotation Hell by Using SomeNew Gr8 Way'. But in my case it is a project that is just three months old. When I think about the causes of why that code became such a mess, I end up with a never-ending list of reasons:

- Tight deadlines
- Incompetent developers
- Not using code quality checking tools
- No code reviews
- No time to clean up the messy code
- etc., etc.

So whatever the reason, your code will become messy after some time, especially when more people are working on the project. The worst part is that you can't blame anyone. The developer will say he has no time to clean up the code because he has been assigned high-priority tasks. The tech lead is busy analysing and assigning new tasks to developers. The manager is busy aggregating the team's task status reports to satisfy his boss. The architect is busy designing the new modules for new third-party integration services. The QA people are busy preparing/executing their test cases for upcoming releases. So whose responsibility is it to clean the code? Or, put another way: how can we keep the code clean even under all the above-said busy circumstances? Before explaining how the 'WatchDog & SpotTheBug approach' works, let me tell you another story. Three years back I worked on a banking project with the best-designed, best-organised and best-written code I have ever seen. That project started almost 10 years back, but the code quality is still very good. How is it possible?
The only reason is this: if any developer checks in some bad code, like adding a duplicate utility method, then within 4 hours that developer will receive an email from a GUY asking for an explanation of why that method needed to be added when the utility method is already available in the core-utilities module. If there is no valid reason, that developer has to open a new defect with 'Cleaning Bad Code' in the defect title, assign the defect to himself, change the code and check in the files ASAP. With this process, every member of our team used to triple-check the code before checking it into the repository. I think this is the best possible way to keep the code clean. By now you may have a clue about what I mean by 'WatchDog'. Yes, I call the GUY a WatchDog. First of all, sorry for calling such an important role a dog, but it better describes what that guy will do: bark as soon as he sees some bad code.

Need for a WatchDog: As I mentioned above, everyone in the team might be busy with their high-priority tasks. They might not be able to spend time on cleaning the code. Also, from the business perspective, adding new customer-requested features might be a higher priority than cleaning the code. Sometimes, even though the business knows that in the long run the entire application may become unmaintainable if they don't clean up the mess, they will have to satisfy their customer first with some quick new features and will opt for short-term benefits. We have plenty of quality-checking tools like PMD, FindBugs and Sonar. But do these tools suggest creating an Address class instead of repeating all the address properties for the different types of addresses, as I mentioned above? Do these tools suggest using the same XML marshalling library across the project? As far as I know, they won't. So if you really want your software/product to sustain over time, I would suggest hiring a dedicated WatchDog (a human being).
The WatchDog's primary responsibilities would be:

- Continuously checking for code smells, duplicate methods and coding-standards violations, and sending the report to the entire team. If possible, pointing out the existing utility to use instead of creating duplicate methods.
- Checking for design violations, like establishing a database connection or putting transaction management code in the wrong places (the web layer, for example).
- Checking for cyclic dependencies between modules.
- Exploring and suggesting well-established, tested generic libraries like the Apache commons-*.jars or Google Guava instead of writing home-grown solutions (I feel that instead of writing a home-grown cache manager it is better to use the Guava Cache, but YMMV).

So far so good, if the WatchDog does its job well. But what if the WatchDog itself is inefficient? What if the WatchDog is not skilled enough to perform its job? Who is going to check whether the WatchDog is doing well or not? This is where the 'SpotTheBug' program comes into the picture.

'SpotTheBug': I strongly believe in having a friendly culture that encourages the developers to come up with thoughts on how to better the software. Every week each team member should come up with 3 points to better/clean the code. They can be: bad-code identification, better design, new features, etc. Instead of just saying that some code is bad, the developer has to specify why he feels that code is bad, how to rewrite it in a better way and what the impact would be. Based on the effectiveness of the points, value points should be given to the developer, and those points should definitely be considered in the performance review (there should be some motivation, right? :-)). With the WatchDog and SpotTheBug programs in place, if the team can identify the bad code before the WatchDog catches it, then it is a negative point for the WatchDog. If the WatchDog continuously gets negative points, then it is time to evaluate the effectiveness of the WatchDog itself.
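On the home-grown cache point above: part of the reason to prefer an established library like Guava is that even a "simple" cache hides subtleties (eviction, concurrency, loading). For comparison, roughly the smallest correct LRU cache you can build on the JDK alone looks like this (a sketch; not thread-safe, no expiry):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: LinkedHashMap in access-order mode evicts the
// least-recently-used entry once the capacity is exceeded.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // true = iterate in access order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}
```

Everything this leaves out — thread safety, time-based expiry, weighing, load-through population — is exactly what Guava's cache provides out of the box, which is the point of the suggestion.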
By using this WatchDog & SpotTheBug approach, combined with proper usage of code quality checking tools (FindBugs, PMD, Sonar), we can keep the code clean to the maximum extent. Reference: Keep The Code Clean: WatchDog & SpotTheBug Approach from our JCG partner Siva Reddy at the My Experiments on Technology blog.

How to manage Quartz remotely

Option 1: JMX

Many people have asked whether they can manage Quartz via JMX, and I am not sure why the Quartz doc doesn't even mention it. Yes, you can enable JMX in Quartz with the following in quartz.properties:

org.quartz.scheduler.jmx.export = true

After this, you use a standard JMX client such as $JAVA_HOME/bin/jconsole to connect and manage remotely.

Option 2: RMI

Another way to manage Quartz remotely is to enable RMI in Quartz. If you use this, you basically run one instance of Quartz as an RMI server, and then you can create a second Quartz instance as an RMI client. These two can talk remotely via a TCP port. For the server scheduler instance, you want to add these to quartz.properties:

org.quartz.scheduler.rmi.export = true
org.quartz.scheduler.rmi.createRegistry = true
org.quartz.scheduler.rmi.registryHost = localhost
org.quartz.scheduler.rmi.registryPort = 1099
org.quartz.scheduler.rmi.serverPort = 1100

And for the client scheduler instance, you want to add these to quartz.properties:

org.quartz.scheduler.rmi.proxy = true
org.quartz.scheduler.rmi.registryHost = localhost
org.quartz.scheduler.rmi.registryPort = 1099

The RMI feature is mentioned in the Quartz doc here. Quartz doesn't have a client API but uses the same org.quartz.Scheduler for both server and client; it's just that the configurations are different. With different configurations you get very different behavior: for the server, your scheduler is running all the jobs, while for the client, it's simply a proxy. Your client scheduler instance will not run any jobs! You must be really careful when shutting down the client, because it does allow you to bring down the server! These configurations have been highlighted in the MySchedule project. If you run the webapp, you should see a screen like this demo, and you will see that it provides many samples of Quartz configurations with these remote-management config properties. If configured with the RMI option, you can actually still use the MySchedule web UI to manage Quartz as a proxy.
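One more note on Option 1 (JMX) before moving on: org.quartz.scheduler.jmx.export only registers the scheduler MBeans in the local MBean server; to reach them from a remote machine the JVM must also be started with the standard JMX remote flags (e.g. -Dcom.sun.management.jmxremote.port=1616, plus authenticate/ssl settings to taste). And jconsole is not the only client — the JDK's own javax.management API can attach programmatically, which is handy for scripted health checks. A hypothetical sketch (host, port and the "quartz" MBean domain are assumptions; verify the domain against your Quartz version):

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

class QuartzJmxClient {

    // Standard JMX-over-RMI service URL for a JVM started with
    // -Dcom.sun.management.jmxremote.port=<port>.
    static JMXServiceURL schedulerUrl(String host, int port) {
        try {
            return new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi");
        } catch (java.net.MalformedURLException e) {
            throw new IllegalArgumentException(e);
        }
    }

    // Connects and lists the Quartz MBeans (domain name is an assumption).
    static void printSchedulerMBeans(String host, int port) throws Exception {
        try (JMXConnector jmxc = JMXConnectorFactory.connect(schedulerUrl(host, port))) {
            MBeanServerConnection conn = jmxc.getMBeanServerConnection();
            for (ObjectName name : conn.queryNames(new ObjectName("quartz:*"), null)) {
                System.out.println(name);
            }
        }
    }
}
```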
With the MySchedule web UI you can view and drill down into jobs, and you can even stop or shut down the remote server! Based on my experience, there is a downside to using the Quartz RMI feature though: it creates a single point of failure. There is no failover if your RMI server is down! Reference: How to manage Quartz remotely from our JCG partner Zemian Deng at the A Programmer's Journal blog.

Spring Profiles and Java Configuration

My last blog introduced Spring 3.1's profiles, explained the business case for using them, and demonstrated their use with Spring XML configuration files. It seems, however, that a good number of developers prefer using Spring's Java-based application configuration, and so the Spring team has designed a way of using profiles with the existing @Configuration annotation. I'm going to demonstrate profiles and the @Configuration annotation using the Person class from my previous blog. This is a simple bean class whose properties vary depending upon which profile is active.

public class Person {

  private final String firstName;
  private final String lastName;
  private final int age;

  public Person(String firstName, String lastName, int age) {
    this.firstName = firstName;
    this.lastName = lastName;
    this.age = age;
  }

  public String getFirstName() { return firstName; }

  public String getLastName() { return lastName; }

  public int getAge() { return age; }
}

Remember that the Guys at Spring recommend that Spring profiles should only be used when you need to load different types or sets of classes, and that for setting properties you should continue using the PropertyPlaceholderConfigurer. The reason I'm breaking the rules is that I want to write the simplest possible code to demonstrate profiles and Java configuration. At the heart of using Spring profiles with Java configuration is Spring's new @Profile annotation. The @Profile annotation is used to attach a profile name to an @Configuration annotation. It takes a single parameter that can be used in two ways. Firstly, to attach a single profile to an @Configuration annotation:

@Profile("test1")

and secondly, to attach multiple profiles:

@Profile({ "test1", "test2" })

Again, I'm going to define two profiles, "test1" and "test2", and associate each with a configuration class.
Firstly "test1":

@Configuration
@Profile("test1")
public class Test1ProfileConfig {

  @Bean
  public Person employee() {
    return new Person("John", "Smith", 55);
  }
}

…and then "test2":

@Configuration
@Profile("test2")
public class Test2ProfileConfig {

  @Bean
  public Person employee() {
    return new Person("Fred", "Williams", 22);
  }
}

In the code above, you can see that I'm creating a Person bean with an effective id of employee (this comes from the method name) that returns differing property values in each profile. Also note that @Profile is marked as:

@Target(value=TYPE)

…which means that it can only be placed at type level, next to the @Configuration annotation. Having attached an @Profile to an @Configuration, the next thing to do is to activate your selected profile. This uses exactly the same principles and techniques that I described in my last blog, and again, to my mind, the most useful activation technique is the "spring.profiles.active" system property.

@Test
public void testProfileActiveUsingSystemProperties() {
  System.setProperty("spring.profiles.active", "test1");
  ApplicationContext ctx = new ClassPathXmlApplicationContext("profiles-config.xml");
  Person person = ctx.getBean("employee", Person.class);
  String firstName = person.getFirstName();
  assertEquals("John", firstName);
}

Obviously, you wouldn't want to hard-code things as I've done above, and best practice usually means keeping the system properties configuration separate from your application. This gives you the option of using either a simple command line argument such as:

-Dspring.profiles.active="test1"

…or adding

# Setting a property value
spring.profiles.active=test1

to Tomcat's catalina.properties. So, that's all there is to it: you create your Spring profiles by annotating an @Configuration class with an @Profile annotation and then switch on the profile you want to use by setting the spring.profiles.active system property to your profile's name.
As usual, the Guys at Spring don't confine you to using system properties to activate profiles; you can also do things programmatically. For example, the following code creates an AnnotationConfigApplicationContext and then uses an Environment object to activate the "test1" profile, before registering our @Configuration classes.

@Test
public void testAnnotationConfigApplicationContextThatWorks() {
  // Can register a list of config classes
  AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext();
  ctx.getEnvironment().setActiveProfiles("test1");
  ctx.register(Test1ProfileConfig.class, Test2ProfileConfig.class);
  ctx.refresh();
  Person person = ctx.getBean("employee", Person.class);
  String firstName = person.getFirstName();
  assertEquals("John", firstName);
}

This is all fine and good, but beware: you need to call AnnotationConfigApplicationContext's methods in the right order. For example, if you register your @Configuration classes before you specify your profile, then you'll get an IllegalStateException.

@Test(expected = IllegalStateException.class)
public void testAnnotationConfigApplicationContextThatFails() {
  // Can register a list of config classes
  AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext(
      Test1ProfileConfig.class, Test2ProfileConfig.class);
  ctx.getEnvironment().setActiveProfiles("test1");
  ctx.refresh();
  Person person = ctx.getBean("employee", Person.class);
  String firstName = person.getFirstName();
  assertEquals("John", firstName);
}

Before closing today's blog, the code below demonstrates the ability to attach multiple profiles to an @Configuration annotation.
@Configuration
@Profile({ "test1", "test2" })
public class MultipleProfileConfig {

  @Bean
  public Person tourDeFranceWinner() {
    return new Person("Bradley", "Wiggins", 32);
  }
}

@Test
public void testMultipleAssignedProfilesUsingSystemProperties() {
  System.setProperty("spring.profiles.active", "test1");
  ApplicationContext ctx = new ClassPathXmlApplicationContext("profiles-config.xml");
  Person person = ctx.getBean("tourDeFranceWinner", Person.class);
  String firstName = person.getFirstName();
  assertEquals("Bradley", firstName);

  System.setProperty("spring.profiles.active", "test2");
  ctx = new ClassPathXmlApplicationContext("profiles-config.xml");
  person = ctx.getBean("tourDeFranceWinner", Person.class);
  firstName = person.getFirstName();
  assertEquals("Bradley", firstName);
}

In the code above, 2012 Tour de France winner Bradley Wiggins appears in both the "test1" and "test2" profiles.

Reference: Spring Profiles and Java Configuration from our JCG partner Roger Hughes at the Captain Debug's Blog blog.

Guaranteed messaging for topics, the JMS spec, and ActiveMQ

Recently a customer asked me to look closer at ActiveMQ’s implementation of “persistent” messages, how it applies to topics, and what happens in failover scenarios when there are non-durable subscribers. I had understood that the JMS semantics specify that only durable subscribers to a topic are guaranteed message delivery for a persistent delivery mode, even in the face of message-broker provider failure. But what does it have to say about non-durable subscribers for persistent messages? What’s the point of sending a message “persistent” when there are no durable subscribers? Upon looking at the exact wording of the spec, I became a little unsure. So I consulted the Java Message Service book (Richards, Monson-Haefel, and Chappell) for some more discussion around guaranteed messaging, checked into the ActiveMQ source code, and consulted with some of my co-workers. First, let’s look at what the spec says: From section 4.10 of the JMS spec: Most clients should use producers that produce PERSISTENT messages. This insures once-and-only-once message delivery for messages delivered from a queue or a durable subscription. Pretty clear, right? Using persistent message delivery ensures message delivery for a queue or durable subscription. From section 6.12: Unacknowledged messages of a nondurable subscriber should be able to be recovered for the lifetime of that nondurable subscriber. So now unacked messages of a non-durable subscriber should be able to be recovered? “for the lifetime of that non-durable subscriber” I guess… But later as part of 6.12: Only durable subscriptions are reliably able to recover unacknowledged messages. and… To ensure delivery, a TopicSubscriber should establish a durable subscription. 
Although the spec says very clearly [to the effect] that only queues and durable subscribers can take advantage of store-and-forward guaranteed delivery, I guess I became confused at "messages of a non-durable subscriber should be able to be recovered for the lifetime of the non-durable subscriber".

- Does the persistent protocol change for a topic depending on its consumers (where the message is considered the responsibility of the broker only after the broker has persisted the message and sent the producer an ack)?
- Does that mean even in the event of broker failure? Or is broker failure considered an end of the life of the subscription for non-durable subs?
- What happens with ActiveMQ when there is a network of brokers for persistent, non-durable topics? Can messages be missed if a broker in the network fails?
- What are the exact differences between sending a message "persistent" vs "non-persistent" to a topic with non-durable subscribers?

There are two parts to consider in this discussion of guaranteed delivery: the part where the publisher sends a message to the broker, and the part where the consumer receives a message from the broker. For persistent messages, the protocol is for the sender to send a message and for the broker to ack the message only after it has been persisted to a store. On the other hand, the consumer must ack the message after the broker has delivered it, to say "hey, I'll take responsibility for the message now". Only then will the broker relinquish responsibility and remove it from its store.

Does the protocol change for a topic depending on its consumers?

So, for persistent messages sent to a topic (not taking any consumers into account for the moment), does the spec say anything about whether the message is supposed to be stored before the broker sends back its ack? No, it doesn't. It's left up to the implementors of the JMS broker in question.
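To see why the lifetime of the subscription is the crux here, consider a toy in-memory simulation of the two subscription types (this is deliberately not ActiveMQ or JMS API code, just a sketch of the semantics): a durable subscription keeps accumulating messages while its consumer is away, whereas a non-durable subscription simply ceases to exist, and a reconnect creates a brand-new, empty one.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Toy topic: durable subscriptions buffer messages across downtime;
// non-durable subscriptions only see messages while they are alive.
class ToyTopic {
    private final Map<String, Queue<String>> durableSubs = new HashMap<>();
    private final Map<String, Queue<String>> liveSubs = new HashMap<>();

    void createDurable(String name)     { durableSubs.putIfAbsent(name, new ArrayDeque<>()); }
    void connectNonDurable(String name) { liveSubs.put(name, new ArrayDeque<>()); }

    // A disconnect or broker failure ends a non-durable subscription's life.
    void dropNonDurable(String name)    { liveSubs.remove(name); }

    void publish(String msg) {
        for (Queue<String> q : durableSubs.values()) q.add(msg); // survives downtime
        for (Queue<String> q : liveSubs.values())    q.add(msg); // only if currently alive
    }

    String receiveDurable(String name)    { return durableSubs.get(name).poll(); }
    String receiveNonDurable(String name) {
        Queue<String> q = liveSubs.get(name);
        return q == null ? null : q.poll();
    }
}
```

Publishing "persistent" changes nothing in this picture for the non-durable consumer: once its subscription's lifetime ends, there is simply no place for the broker to hold the message on its behalf.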
In the case of ActiveMQ, if there are ONLY non-durable subscriptions on a topic, it will NOT persist the message. The synchronous nature of the protocol does not change, i.e., if the message is sent persistent, the session will consider the exchange with the broker to be synchronous and will wait for a response from the broker before proceeding, but the broker will not actually persist the message. In ActiveMQ, this changes if there is AT LEAST one durable subscriber. Then the broker will persist the message (per the JMS spec).

Does that mean even in the event of a broker failure?

The lifetime of a non-durable subscription is indeed broken if a broker fails. So a message will not be delivered, even if it's sent persistent, to a non-durable subscriber in the event of broker failure (or any other termination of the life of that non-durable sub). Additionally, a message will not be redelivered in the face of broker failure for non-durable subscriptions.

What happens in a network of brokers?

Messages can indeed be lost. Consider a network of brokers where A -> B -> C and subscription is demand-forwarded from C -> B -> A. If we have a producer at A producing to a topic "topic.foo" and a non-durable consumer on broker C consuming from "topic.foo", and broker B goes down, messages thereafter sent to A will be dropped. The lifetime of the subscription, as far as A knows, has been terminated.

Lastly, what are the exact differences between sending a message "persistent" vs "non-persistent" to a topic with non-durable subscribers? According to the JMS spec:

How Published  | Nondurable Subscriber                   | Durable Subscriber
NON_PERSISTENT | at-most-once (missed if inactive)       | at-most-once
PERSISTENT     | once-and-only-once (missed if inactive) | once-and-only-once

So for a non-durable subscriber, a non-persistent message will be delivered "at most once" but will be missed if inactive (or on broker failure).
For a non-durable subscriber, a persistent message will be delivered “once and only once” but missed if inactive. The “inactive” part of the spec effectively means that if there are no durable subscribers to a topic, a message could be lost and there is no guarantee of delivery, regardless of whether the message is sent persistent or non-persistent. Reference: Guaranteed messaging for topics, the JMS spec, and ActiveMQ from our JCG partner Christian Posta at the Christian Posta Software blog....
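The delivery-guarantee matrix discussed above can also be expressed in code. Here is a tiny, purely illustrative Java sketch; the enum and method names are my own invention and are not part of the JMS or ActiveMQ API:

```java
// Illustrative encoding of the JMS delivery-guarantee matrix for topics.
// These types are invented for illustration; they are not JMS API.
public class TopicDeliveryGuarantee {

    enum Guarantee { AT_MOST_ONCE, ONCE_AND_ONLY_ONCE }

    /** Delivery guarantee while the subscriber is active, per how the message was published. */
    static Guarantee guarantee(boolean persistentMessage) {
        return persistentMessage ? Guarantee.ONCE_AND_ONLY_ONCE
                                 : Guarantee.AT_MOST_ONCE;
    }

    /** Only non-durable subscribers miss messages while inactive (or on broker failure). */
    static boolean missedIfInactive(boolean durableSubscriber) {
        return !durableSubscriber;
    }

    public static void main(String[] args) {
        // PERSISTENT message, non-durable subscriber: once-and-only-once while active,
        // but still missed if the subscriber is inactive.
        System.out.println(guarantee(true) + ", missedIfInactive=" + missedIfInactive(false));
    }
}
```

Note how the guarantee column depends only on the delivery mode, while the “missed if inactive” caveat depends only on the durability of the subscription.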

Programming Language Job Trends – 2012-08

It is a little late, but it is time for the summer edition of the job trends for traditional programming languages. The languages in this update have not changed for a while, as we are only looking at Java, C++, C#, Objective-C, Perl and Visual Basic. Over the next few months, I will be looking at various languages to determine how this list and other job trends posts should change. Also, please review some of the other job trends posts to see if your favorite language is already in one of these posts.

First, we look at the job trends from Indeed.com: Most of the job trends have declined in the past few months. Objective-C continues to show solid growth. C# had a significant drop but still leads its C++ cousin. Over the long term, Java and C# have very positive growth, while the other languages are tending to stagnate. There is huge growth in mobile development, especially with Objective-C leading the way in iOS development. C++ and Perl show slight declines, but still nothing too significant. Visual Basic continues its stable trend, showing an increase over the past 2 years but still a decline from 2005.

Now, let’s look at SimplyHired’s short term trends: SimplyHired’s trends show much more decline in recent months than Indeed’s. Interestingly, Objective-C is not showing much of a positive trend, but it is still a much better trend than the other languages in the list. Java is showing a surprising decline over the past few months, but still retains a large lead over the other languages. C++ and C# show almost identical trends over the past year, with a decline in recent months. Visual Basic and Perl show similar declines to the other languages.

Finally, here is a review of the relative scaling from Indeed. This provides an interesting trend graph based on job growth: Unsurprisingly, Objective-C has the most growth, but the growth has slowed since our last update. C# growth is solid, hovering around 100% for the past 3 years. Visual Basic and C++ continue to decline.
Perl and Java are still showing signs of life, but the growth is not very significant. What does all this mean? First, it is clear that iOS development is hot, as is all mobile development. However, either mobile development does not seem to be affecting Java, or the growth of mobile is offsetting the decline of Java in the enterprise space. Why does Java (and some of the others) show relative growth, but not strong growth in the trend graphs? Basically, we are seeing that while some of the languages are still showing increasing job postings (the relative growth chart), the percentage of postings is less than before. So, other languages not in this list may be increasing in demand more quickly than these traditional languages. Reference: Traditional Programming Language Job Trends – August 2012 from our JCG partner Rob Diana at the Regular Geek blog....

Product Manager – Strategic or Not?

Are product managers really involved in strategic discussions, or are we just order takers? Adrienne Tan has poked the beehive and started a great discussion with this article. Joining in from here, hopefully adding folks to the conversation. Check it out, and chime in here or on the brainmates blog.

Product Managers Taking Orders
Adrienne kicks off the discussion with a great post, including the following question: “why does a whole professional group continue to defend its right to be strategic? No one else seems to think that Product Management is the rightful owner of Product Strategy except Product Management.” As I write this, there are half a dozen great comments on her post, including some powerful ideas:

People don’t want to relinquish power – and “owning strategy” is powerful. Of course other people want to say that they “own” it.
Product management is the business – we run into problems when the role is “tacked on” organizationally and not deeply integrated.
There are two distinct roles – strategic planning and tactical support – and both have the same title (product manager), but if they are different people, you have problems.
There’s too much for one person to do, but the responsibilities of people sharing the work must overlap, or they will become disconnected.

As Nick points out in the comments – Adrienne’s post is a productive one, not just a rant – since getting a “seat at the table” for strategic decision making is so hard, is it worth doing?

Product Strategy
Product strategy happens. It may be implicit, but it is probably explicit and intentional. Product strategy, however, is just a business tactic. Your company has a strategy, and someone makes the decision that “product” will play a role in that strategy. The definition of that role constrains what someone else should do with the product, in order to realize the product’s portion of the business strategy. Most of the product management roles that I’ve seen fall into this model.
There’s a “glass ceiling” for product managers – who are only given freedom to make decisions within this context. Those product managers are “doers” within these constraints – occasionally allowed to, but generally not encouraged to, and certainly not required to, provide recommendations to change the business strategy.

Go Where I Tell You
If the CEO (the actual CEO, not the “CEO of the product”) is the rider, then the product manager is the horse – constrained to “go where I tell you.” The product manager is watching where he’s running to make sure he steps sure-footedly, looking around to see where the other horses (or wolves) are – basically responsible for the “running.” The CEO is responsible for knowing where to go, not how to get there. As long as the product manager doesn’t get the bit in his teeth, the CEO can make sure the horse goes where she wants.

What Does “Strategy” Mean to You?
The problem with words like strategy is that they carry a lot of symbolic baggage. Wikipedia tells us that a strategy is a “plan of action, designed to achieve a vision.” The question is one of scope – what level of “vision” are you trying to achieve? Google’s vision is so big that they don’t really articulate it – instead, they share the philosophy that shapes their vision and guides their actions. You have to pick one of the products, like Google Wallet, before you get an articulation of vision. Product management (as a “named entity” in the organization), in the teams that I’ve been a part of, has “owned” the definition of the product strategy that enables the product vision – but has not been invited to participate in the business strategy that enables the company’s vision. That strategy is primarily embodied in a product roadmap that articulates how (and when) the product vision will be achieved. The product strategy and vision are components of a portfolio of strategies and visions that collectively make up the business strategy.
Command and Control
The weakness of this old-school command-and-control model is that the commander (CEO) only gets limited inputs from the commanders in the field (product managers). With this top-down model of decomposition of business strategy into products with specific roles, each product manager has specific responsibilities (take the town), often being expected to succeed with limited context (we’re sweeping around the enemy’s flank, but it is a feint). The product manager is only providing progress reports and possibly market information back to the overall commander. The product manager is not tasked with providing recommendations to change the business strategy. The strength of this approach is that it enables a company to execute its business strategy. As product managers, we know there is always “more to be done” – and like the metaphor of boiling the ocean, if one person tries to do everything, they won’t succeed. In his recent series on product management roles, Rich Mironov calls out that the VP of Product Management is “making sure that the company as a whole is building and shipping and supporting the right products.” Not defining the business strategy, but rather determining which products should be components of that strategy, and what roles those products should have as members of that strategy. Each product manager is responsible for defining the roadmap for their product, articulating how it will fulfill its role in the comprehensive strategy.

The Eye of the Beholder
Given this definition, is product management still a “strategic” role? It depends on where you’re standing in the organization. The CEO probably sees us as horses. But the teams that are building, testing, and supporting our products are operating on a shorter time horizon, with an even narrower scope.
From that point of view, having a multi-release point of view about the product, and a deep understanding of the market (needed to make the right product decisions), is definitely strategic.

Should We Be Pushing the Strategic Product Manager Agenda?
Getting back to Adrienne’s original point, and a fantastic image from Geoffrey Anderson, “Sometimes I feel like a salmon swimming upstream to spawn, and every tier of the swim is fraught with bears waiting to eat me.” – is it worth investing our time and energy in pushing the “product managers are strategic” message? If any of these apply to you, then yes – push:

I’m told what features to build, and I’m expected to just translate for the technical folks on the product team, so that they build it correctly.
I’m not given the (business strategy) context that defines the role my product plays, so I have no way to know if my product is succeeding.
When I provide feedback about the feasibility or effectiveness of my product in its role as part of the business strategy, it is not valued and nothing changes.

If none of those apply to you, they do apply to some of your peers – so push. In the past 10 years, I’ve seen each of the above situations multiple times, and they are in order of increasing frequency. What I see even more frequently these days is product managers getting pulled more and more into the operational, product owner role – shepherding teams through daily stand-ups and validating acceptance criteria – purely tactical execution roles. Those product managers are still somehow responsible for doing product management, even when not given enough time to do it. If that’s the position you find yourself in today, start winding the klaxon. Reference: Product Manager – Strategic or Not? from our JCG partner Scott Sehlhorst at the Business Analysis | Product Management | Software Requirements blog....

Android broadcast receiver: Enable and disable during runtime

The broadcast receiver is one of the basic and important components of an Android application. There are two different ways of adding a broadcast receiver to an Android application: it can be added either programmatically or in the Android manifest file. You should be careful while adding broadcast receivers, because unnecessary broadcast receivers drain battery power. If you add the broadcast receiver in the Android manifest file, it’s implied that you are going to handle a particular intent in the broadcast receiver and not ignore it. There is a way to enable and disable a broadcast receiver which is added in the manifest file.

Example code
Application layout file:

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <Button
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:padding="@dimen/padding_medium"
        android:text="@string/start_repeating_alarm"
        android:onClick="startRepeatingAlarm"
        tools:context=".EnableDisableBroadcastReceiver" />

    <Button
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:padding="@dimen/padding_medium"
        android:text="@string/cancel_alarm"
        android:onClick="cancelAlarm"
        tools:context=".EnableDisableBroadcastReceiver" />

    <Button
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:padding="@dimen/padding_medium"
        android:text="@string/enable_broadcast_receiver"
        android:onClick="enableBroadcastReceiver"
        tools:context=".EnableDisableBroadcastReceiver" />

    <Button
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:padding="@dimen/padding_medium"
        android:text="@string/disable_broadcast_receiver"
        android:onClick="disableBroadcastReceiver"
        tools:context=".EnableDisableBroadcastReceiver" />
</LinearLayout>

In the above layout file, we have used some string constants in
the buttons’ text fields. Let’s define these string constants in strings.xml as shown below.

<resources>
    <string name="app_name">EnableDisableBroadcastReceiver</string>
    <string name="enable_broadcast_receiver">Enable Broadcast Receiver</string>
    <string name="disable_broadcast_receiver">Disable Broadcast Receiver</string>
    <string name="start_repeating_alarm">Start Repeating Alarm</string>
    <string name="cancel_alarm">Cancel Alarm</string>
    <string name="menu_settings">Settings</string>
    <string name="title_activity_enable_disable_boradcast_receiver">EnableDisableBoradcastReceiver</string>
</resources>

Broadcast receiver
We are going to use the AlarmManager to set a repeating alarm, which eventually sends an intent at a specific time interval. Read this post to know more about the AlarmManager. Now create the AlarmManagerBroadcastReceiver class, which extends the BroadcastReceiver class. The content of the class is given below.

package com.code4reference.enabledisablebroadcastreceiver;

import java.text.Format;
import java.text.SimpleDateFormat;
import java.util.Date;

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.widget.Toast;

public class AlarmManagerBroadcastReceiver extends BroadcastReceiver {

    final public static String ONE_TIME = "onetime";

    @Override
    public void onReceive(Context context, Intent intent) {
        // You can do the processing here, e.g. update the widget/remote views.
        StringBuilder msgStr = new StringBuilder();
        // Format the current time.
        Format formatter = new SimpleDateFormat("hh:mm:ss a");
        msgStr.append(formatter.format(new Date()));
        Toast.makeText(context, msgStr, Toast.LENGTH_SHORT).show();
    }
}

Enable/Disable Broadcast receiver
Now we will define the main activity, which uses the AlarmManager to set a repeating alarm. The repeating alarm will broadcast an intent every 3 seconds. This alarm is set in the startRepeatingAlarm() method below.
package com.code4reference.enabledisablebroadcastreceiver;

import com.example.enabledisablebroadcastreceiver.R;

import android.app.Activity;
import android.app.AlarmManager;
import android.app.PendingIntent;
import android.content.ComponentName;
import android.content.Context;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.os.Bundle;
import android.view.View;
import android.widget.Toast;

public class EnableDisableBroadcastReceiver extends Activity {

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
    }

    /**
     * This method gets called when the 'Start Repeating Alarm' button is pressed.
     * It sets a repeating alarm whose periodicity is 3 seconds.
     * @param view
     */
    public void startRepeatingAlarm(View view) {
        AlarmManager am = (AlarmManager) this.getSystemService(Context.ALARM_SERVICE);
        Intent intent = new Intent(this, AlarmManagerBroadcastReceiver.class);
        PendingIntent pi = PendingIntent.getBroadcast(this, 0, intent, 0);
        // Repeat every 3 seconds
        am.setRepeating(AlarmManager.RTC_WAKEUP, System.currentTimeMillis(), 1000 * 3, pi);
        Toast.makeText(this, "Started Repeating Alarm", Toast.LENGTH_SHORT).show();
    }

    /**
     * This method gets called when the 'Cancel Alarm' button is pressed.
     * It cancels the previously set repeating alarm.
     * @param view
     */
    public void cancelAlarm(View view) {
        Intent intent = new Intent(this, AlarmManagerBroadcastReceiver.class);
        PendingIntent sender = PendingIntent.getBroadcast(this, 0, intent, 0);
        AlarmManager alarmManager = (AlarmManager) this.getSystemService(Context.ALARM_SERVICE);
        alarmManager.cancel(sender);
        Toast.makeText(this, "Cancelled alarm", Toast.LENGTH_SHORT).show();
    }

    /**
     * This method enables the broadcast receiver registered in the AndroidManifest file.
     * @param view
     */
    public void enableBroadcastReceiver(View view) {
        ComponentName receiver = new ComponentName(this, AlarmManagerBroadcastReceiver.class);
        PackageManager pm = this.getPackageManager();
        pm.setComponentEnabledSetting(receiver,
                PackageManager.COMPONENT_ENABLED_STATE_ENABLED,
                PackageManager.DONT_KILL_APP);
        Toast.makeText(this, "Enabled broadcast receiver", Toast.LENGTH_SHORT).show();
    }

    /**
     * This method disables the broadcast receiver registered in the AndroidManifest file.
     * @param view
     */
    public void disableBroadcastReceiver(View view) {
        ComponentName receiver = new ComponentName(this, AlarmManagerBroadcastReceiver.class);
        PackageManager pm = this.getPackageManager();
        pm.setComponentEnabledSetting(receiver,
                PackageManager.COMPONENT_ENABLED_STATE_DISABLED,
                PackageManager.DONT_KILL_APP);
        Toast.makeText(this, "Disabled broadcast receiver", Toast.LENGTH_SHORT).show();
    }
}

We add the AlarmManagerBroadcastReceiver in the manifest file. This registers the broadcast receiver.

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.enabledisablebroadcastreceiver"
    android:versionCode="1"
    android:versionName="1.0" >

    <uses-sdk
        android:minSdkVersion="15"
        android:targetSdkVersion="15" />

    <application
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name"
        android:theme="@style/AppTheme" >
        <activity
            android:name="com.code4reference.enabledisablebroadcastreceiver.EnableDisableBroadcastReceiver"
            android:label="@string/title_activity_enable_disable_boradcast_receiver" >
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
        <!-- Broadcast receiver -->
        <receiver android:name="com.code4reference.enabledisablebroadcastreceiver.AlarmManagerBroadcastReceiver" />
    </application>
</manifest>

Once done, execute the code and you will see the application as shown below. You can get the complete source at github/Code4Reference.
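As mentioned at the top of this article, a receiver can alternatively be registered programmatically, in which case its lifetime is controlled directly in code rather than via PackageManager. A minimal, illustrative sketch follows; the activity name and action string here are invented for illustration and are not part of the example app above:

```java
import android.app.Activity;
import android.content.BroadcastReceiver;
import android.content.IntentFilter;

// Hypothetical activity that keeps a receiver active only while in the foreground.
public class ForegroundReceiverActivity extends Activity {

    private final BroadcastReceiver receiver = new AlarmManagerBroadcastReceiver();

    @Override
    protected void onResume() {
        super.onResume();
        // Registered here, the receiver lives only between onResume() and onPause()
        registerReceiver(receiver, new IntentFilter("com.code4reference.SOME_ACTION"));
    }

    @Override
    protected void onPause() {
        unregisterReceiver(receiver);
        super.onPause();
    }
}
```

Note that a context-registered receiver like this only matches broadcasts sent with a matching action; it does not replace the manifest entry used by the explicit alarm intent in this example.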
You can find more Android tutorials here. Reference: Enable and disable Broadcast receiver during runtime from our JCG partner Rakesh Cusat at the Code4Reference blog....

Building ScalaFX 1.0 with Gradle 1.1

After becoming a little disenchanted with SBT for Scala, I wanted an alternative that was more logical, simpler to understand and had a better user experience. After all, the whole point of a domain-specific language is to make the writing of the script, formulae or grammar affordable to the users. A DSL must be comprehensible to the users, it must be relatively easy to write the script in the language of the domain, and it surely must be mostly free of annoyances. The great examples of course are spreadsheets like Excel, the XML Stylesheet Language for Transformations (XSLT) and shell scripts (like DOS, BASH). I recently added a build.gradle file to the ScalaFX project. Here is a screencast about how to use the Gradle build instead of the current SBT file. Building ScalaFX 1.0 with Gradle 1.1 from Peter Pilgrim on Vimeo. The only sore point I have found with Gradle so far is that the project takes its name from the containing folder. In other words, I found that force-setting the artifactId does not work.

group = 'org.scalafx'
artifactId = 'ScalaFX-javaone-2012' // This does not work
version = '1.0-SNAPSHOT'

That might be worth considering when moving project folders around in order to make a quick research branch for a delta, or to look at some other committer’s changes separately from your own. Because Gradle is written in Groovy, you have the full power of that dynamic language to play with. I was able to write a Groovy task to push a UNIX bash launcher script in less than ten minutes. I was also able to run a launcher within Gradle for the Colorful Circles demo app. The Gradle documentation is a lot better than SBT’s, in my humble opinion. In SBT, if you missed adding a single blank line between statement declarations, or you forgot to add an extra delimiter between two Seq() or perhaps used the wrong method name “+” versus “++”, then you could be lost for quite a long time.
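On the project-name issue: rather than trying to force an artifactId in build.gradle, Gradle reads the project name from a settings.gradle file placed at the project root, and that overrides the name derived from the containing folder. A minimal sketch (reusing the name from the snippet above; worth verifying against your Gradle version):

```groovy
// settings.gradle, placed next to build.gradle
rootProject.name = 'ScalaFX-javaone-2012'
```

With this in place, moving or renaming the project folder no longer changes the published artifact name.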
It would be good to see how the Scala plugin for Gradle could work with the recently announced Zinc and incremental compilation from Typesafe. After reading that blog post, I think Zinc, Gradle and the Scala plug-in should just work. Has anyone tried this combination yet? I have not yet. PS: Under Windows, you need to comment out the “chmod” line in the build.gradle for now. I will fix this later on before the 1.0 release. PPS: With the Vimeo, you might prefer to click on the HD option to see improved clarity. Reference: Building ScalaFX 1.0 with Gradle 1.1 from our JCG partner Peter Pilgrim at Peter Pilgrim’s blog....

Spring Scoped Proxy

Consider two Spring beans defined this way:

@Component
class SingletonScopedBean {

    @Autowired
    private PrototypeScopedBean prototypeScopedBean;

    public String getState() {
        return this.prototypeScopedBean.getState();
    }
}

@Component
@Scope(value = "prototype")
class PrototypeScopedBean {

    private final String state;

    public PrototypeScopedBean() {
        this.state = UUID.randomUUID().toString();
    }

    public String getState() {
        return state;
    }
}

Here a prototype-scoped bean is injected into a singleton-scoped bean. Now, consider this test using these beans:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration
public class ScopedProxyTest {

    @Autowired
    private SingletonScopedBean singletonScopedBean;

    @Test
    public void testScopedProxy() {
        assertThat(singletonScopedBean.getState(), not(equalTo(singletonScopedBean.getState())));
    }

    @Configuration
    @ComponentScan("org.bk.samples.scopedproxy")
    public static class SpringContext {}
}

The point to note is that only one instance of PrototypeScopedBean is created here – and that one instance is injected into the SingletonScopedBean, so the above test, which actually expects a new instance of PrototypeScopedBean with each invocation of the getState() method, will fail. If a new instance is desired with every request to PrototypeScopedBean (and, in general, if a bean with a longer scope has a bean with a shorter scope as a dependency, and the shorter scope needs to be respected), then there are a few solutions:
1. Lookup method injection – which can be read about here
2.
A better solution is using scoped proxies. A scoped proxy can be specified this way, using annotations:

@Component
@Scope(value = "prototype", proxyMode = ScopedProxyMode.TARGET_CLASS)
class PrototypeScopedBean {

    private final String state;

    public PrototypeScopedBean() {
        this.state = UUID.randomUUID().toString();
    }

    public String getState() {
        return state;
    }
}

With this change, the bean injected into the SingletonScopedBean is not the PrototypeScopedBean itself, but a proxy to the bean (created using CGLIB or JDK dynamic proxies). This proxy understands the scope and returns instances based on the requirements of the scope, so the test should now work as expected. Reference: Spring Scoped Proxy from our JCG partner Biju Kunjummen at the all and sundry blog....
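Conceptually, a scoped proxy is just an object that resolves a fresh target from the container on every method call. Outside Spring, the idea can be sketched with a plain JDK dynamic proxy; the names below are illustrative and are not Spring’s internals:

```java
import java.lang.reflect.Proxy;
import java.util.UUID;
import java.util.function.Supplier;

public class ScopedProxySketch {

    public interface Stateful {
        String getState();
    }

    public static class PrototypeLike implements Stateful {
        private final String state = UUID.randomUUID().toString();
        public String getState() { return state; }
    }

    // Every call through the proxy is delegated to a freshly created target,
    // mimicking how a scoped proxy re-resolves a prototype bean per invocation.
    public static Stateful scopedProxy(Supplier<Stateful> factory) {
        return (Stateful) Proxy.newProxyInstance(
                Stateful.class.getClassLoader(),
                new Class<?>[] { Stateful.class },
                (proxy, method, args) -> method.invoke(factory.get(), args));
    }

    public static void main(String[] args) {
        Stateful proxied = scopedProxy(PrototypeLike::new);
        // Two calls hit two different targets, so the states differ
        System.out.println(!proxied.getState().equals(proxied.getState()));
    }
}
```

This is exactly the property the failing test above wanted: two calls to getState() on the injected reference return different states, because each call goes to a new underlying instance.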
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
