


Death by Redirect

It is said that the greatest harm can come from the best intentions. We recently had a case where, because of the best intentions, two @#@&*@!!^@ parties killed our servers with a single request, causing a deadlock involving all of our Tomcat instances, including all HTTP threads. Naturally, not a pleasant situation to find yourself in.

Some explanation about our setup is necessary here. We have a number of Tomcat instances that serve HTML pages for our website, located behind a stateless load balancer. Then the time came when we added a second application, deployed on Jetty. Since we needed the new app to be served as part of the same website (e.g. http://www.wix.com/jetty-app), we proxied the second (Jetty) application from Tomcat (don't dig into why we proxied Jetty from Tomcat; we thought at the time we had good reasons for it). So in fact we had the following architecture: at the Tomcat end, we were using the Apache HttpClient library to connect to the Jetty application. HttpClient by default is configured to follow redirects. Best Intentions #1: Why should we require the developer to think about redirects? Let's handle them automatically for her... At the Jetty end, we had a generic error handler that on an error, instead of showing an error page, redirected the user to the homepage of the app on Jetty. Best Intentions #2: Why show the user an error page? Let's redirect him to our homepage...

But what happens when the homepage of the Jetty application generates an error? Well, apparently it returns a redirect directive to itself! Now, if a browser had gotten that redirect, it would have entered a redirect loop and broken out of it after about 20 redirects. We would have seen 20 requests all resulting in a redirect, probably seen a traffic spike, but nothing else. However, because we had redirects turned on in the HttpClient library, what happened is the following:

1. A request arrives at our Tomcat server, which resolves it to be proxied to the Jetty application.
2. Tomcat Thread #1 proxies a request to Jetty.
3. Jetty has an exception and returns a redirect to http://www.wix.com/jetty-app.
4. Tomcat Thread #1 connects to the www.wix.com host, which goes via the load balancer and ends at another Tomcat thread – Tomcat Thread #2.
5. Tomcat Thread #2 proxies a request to Jetty.
6. Jetty has an exception and returns a redirect to http://www.wix.com/jetty-app.
7. Tomcat Thread #2 connects to the www.wix.com host, which goes via the load balancer and ends at another Tomcat thread – Tomcat Thread #3.
8. And so on, until all threads on all Tomcats are stuck on the same single request.

So, what can we learn from this incident? We can learn that the defaults of Apache HttpClient are not necessarily the ones you'd expect. We can learn that if you issue a redirect, you should make sure you are not redirecting to yourself (like our Jetty application homepage). We can learn that the HTTP protocol, which is considered a commodity, can be complicated at times and hard to tune, and that not every developer knows how to perform an HTTP request correctly. We can also learn that when you take on a 3rd party library, you should invest time in learning to use it, to understand the failure points and how to overcome them. However, there is a deeper message here. When we develop software, we trade development velocity against risk. The faster we want to develop software, the more we need to trust the individual developers. The more trust we give developers, the more risk we gain from developer blind spots – things a developer fails to think about, e.g. handling redirects.
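As an illustration of that first lesson, here is a minimal sketch of turning off HttpClient's automatic redirect handling, so that 3xx responses come back to the proxying code instead of being followed (this assumes HttpClient 4.3+; the URL and class name are hypothetical, not from the original incident):

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class NoFollowProxyClient {
    public static void main(String[] args) throws Exception {
        // Build a client that hands 3xx responses back to the caller
        // instead of silently following them.
        try (CloseableHttpClient client = HttpClients.custom()
                .disableRedirectHandling()
                .build();
             CloseableHttpResponse response =
                 client.execute(new HttpGet("http://localhost:8080/jetty-app"))) {
            // The proxy code can now decide what to do with a redirect,
            // e.g. pass the Location header through to the browser.
            System.out.println(response.getStatusLine());
        }
    }
}

Handing the redirect back to the browser keeps any loop on the client side, where the browser's redirect limit breaks it after about 20 hops.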
As a software engineer, I am not sure there is a simple solution to the deeper issue of developer blind spots – I guess it is up to you.   Reference: Death by Redirect from our JCG partner Yoav Abrahami at the Wix IO blog. ...

Tomcat Clustering Series Part 4 : Session Replication using Backup Manager

Hi, this is the fourth part of my Tomcat Clustering Series. In this post we are going to discuss how to set up session replication using the Backup Manager in a Tomcat clustering environment. Session replication brings high availability and full fail-over capability to our clustering environment. [Check the video below for better understanding]

This post is a continuation of my last post (Session replication using Delta Manager). With the Delta Manager, each Tomcat instance needs to replicate its session information to all other Tomcat instances, which takes more time and more replication traffic as the cluster size increases. So there is an alternative manager: the Backup Manager.

The Backup Manager replicates a copy of the session data to exactly one other Tomcat instance in the cluster. This is the main difference between the Delta and Backup Managers. Here one Tomcat instance maintains the primary copy of the session, whereas another Tomcat instance holds the replicated session data and acts as the backup. If either of those Tomcat instances fails, the other one serves the session; that way fail-over capability is achieved. The setup process for the Backup Manager is the same as for the Delta Manager, except that we need to declare the Manager as BackupManager (org.apache.catalina.ha.session.BackupManager) inside the <Cluster> element (see the sketch at the end of this post).

Suppose we have 3 Tomcat instances, as in the previous post, configured to use the Backup Manager, and a user tries to access a page. The request comes to the load balancer, which redirects it to, say, tomcat1. Tomcat1 creates the session and is responsible for replicating exactly one copy to one of the other Tomcats. Tomcat1 picks any Tomcat that is part of the cluster (via multicast); here tomcat1 picks tomcat3 as the backup, so tomcat3 holds the backup copy of the session. We are running the load balancer in sticky-session mode, so all further requests from that particular user are redirected to tomcat1 only, and all modifications on tomcat1 are replicated to tomcat3. Now tomcat1 crashes or is shut down for some reason, and the same user tries to access a page. This time the load balancer tries to redirect to tomcat1, but tomcat1 is down, so the load balancer picks one of the remaining Tomcats. Interestingly, there are two cases here.

Case 1: Suppose the load balancer picks tomcat3. Tomcat3 receives the request and itself holds the backup copy of the session, so tomcat3 promotes that session to a primary copy and picks another Tomcat to hold the new backup copy. Only one other Tomcat remains, so tomcat3 replicates the session to tomcat2. Now tomcat3 holds the primary copy and tomcat2 holds the backup copy. Tomcat3 responds to the user, and all further requests are handled by tomcat3 (sticky session).

Case 2: Suppose the load balancer picks tomcat2. Tomcat2 receives the request but does not have the session, so tomcat2's session manager (the Backup Manager) asks all the other Tomcat managers: "does anybody hold the session for this user (based on the session id [cookie])?". Tomcat3 has the backup session, so it informs tomcat2 and replicates the session to it. Now tomcat2 makes that session its primary copy, and tomcat3, which already has a copy, remains the backup. Tomcat2 responds to the user, and all further requests are handled by tomcat2 (sticky session).

So in either case our session is replicated and maintained by the Backup Manager. This is good for large clusters.
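As a rough sketch, the Manager declaration inside the <Cluster> element of server.xml looks something like this (the attribute shown is illustrative; see the GitHub repo linked below for the full configuration):

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
    <!-- BackupManager replicates each session to exactly one other node -->
    <Manager className="org.apache.catalina.ha.session.BackupManager"
             notifyListenersOnReplication="true"/>
    <!-- membership/receiver/sender elements as in the previous posts -->
</Cluster>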
Check my configuration in my GitHub repo or get it as a ZIP file. A screencast is embedded in the original post.   Reference: Tomcat Clustering Series Part 4 : Session Replication using Backup Manager from our JCG partner Rama Krishnan at the Ramki Java Blog. ...

Rule of 30 – When is a method, class or subsystem too big?

A question that constantly comes up from people who care about writing good code is: what's the right size for a method or function, or a class, or a package, or any other chunk of code? At some point any piece of code can be too big to understand properly – but how big is too big? It starts at the method or function level.

In Code Complete, Steve McConnell says that the theoretical best maximum limit for a method or function is the number of lines that can fit on one screen (i.e., that a developer can see at one time). He then goes on to reference studies from the 1980s and 1990s which found that the sweet spot for functions is somewhere between 65 lines and 200 lines: routines this size are cheaper to develop and have fewer errors per line of code. However, at some point beyond 200 lines you cross into a danger zone where code quality and understandability fall apart: code that can't be tested and can't be changed safely. Eventually you end up with what Michael Feathers calls "runaway methods": routines that are several hundreds or thousands of lines long, that are constantly being changed, and that continuously get bigger and scarier. Patrick Duboy looks deeper into this analysis on method length, and points to a more modern study from 2002 that shows that code with shorter routines has fewer defects overall, which matches most people's intuition and experience.

Smaller must be better

Bob Martin takes the idea that "if small is good, then smaller must be better" to an extreme in Clean Code: "The first rule of functions is that they should be small. The second rule of functions is that they should be smaller than that. Functions should not be 100 lines long. Functions should hardly ever be 20 lines long." Martin admits that "This is not an assertion that I can justify. I can't produce any references to research that shows that very small functions are better." So like many other rules or best practices in the software development community, this is a qualitative judgement made by someone based on their personal experience writing code – more of an aesthetic argument, or even an ethical one, than an empirical one. Style over substance.

The same "small is better" guidance applies to classes, packages and subsystems – all of the building blocks of a system. In Code Complete, a study from 1996 found that classes with more routines had more defects. Like functions, according to Clean Code, classes should also be "smaller than small". Some people recommend that 200 lines is a good limit for a class – not a method – or as few as 50-60 lines (in Ben Nadel's Object Calisthenics exercise), and that a class should consist of "less than 10" or "not more than 20" methods. The famous C3 project – where Extreme Programming was born – had 12 methods per class on average. And there should be no more than 10 classes per package. PMD, a static analysis tool that helps to highlight problems in code structure and style, defines some default values for code size limits: 100 lines per method, 1000 lines per class, and 10 methods in a class. Checkstyle, a similar tool, suggests different limits: 50 lines in a method, 1500 lines in a class.
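To see what tuning such limits looks like in practice, here is a minimal sketch of a PMD ruleset that overrides two of those defaults toward the stricter numbers mentioned above (the rule references and property names assume PMD 5.x's Java codesize ruleset; check your PMD version's documentation):

<?xml version="1.0"?>
<ruleset name="size-limits" xmlns="http://pmd.sourceforge.net/ruleset/2.0.0">
    <description>Tighter size limits than the PMD defaults</description>
    <!-- flag methods longer than 50 lines (PMD default: 100) -->
    <rule ref="rulesets/java/codesize.xml/ExcessiveMethodLength">
        <properties>
            <property name="minimum" value="50"/>
        </properties>
    </rule>
    <!-- flag classes with more than 20 methods (PMD default: 10) -->
    <rule ref="rulesets/java/codesize.xml/TooManyMethods">
        <properties>
            <property name="maxmethods" value="20"/>
        </properties>
    </rule>
</ruleset>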
Rule of 30

Looking for guidelines like this led me to the "Rule of 30" in Refactoring in Large Software Projects by Martin Lippert and Stephen Roock: "If an element consists of more than 30 subelements, it is highly probable that there is a serious problem":

- Methods should not have more than an average of 30 code lines (not counting line spaces and comments).
- A class should contain an average of less than 30 methods, resulting in up to 900 lines of code.
- A package shouldn't contain more than 30 classes, thus comprising up to 27,000 code lines.
- Subsystems with more than 30 packages should be avoided. Such a subsystem would count up to 900 classes with up to 810,000 lines of code.
- A system with 30 subsystems would thus possess 27,000 classes and 24.3 million code lines.

What does this look like? Take a biggish system of 1 million NCLOC. This should break down into:

- 30,000+ methods
- 1,000+ classes
- 30+ packages
- Hopefully more than 1 subsystem

How many systems in the real world look like this, or close to this – especially big systems that have been around for a few years?

Are these rules useful? How should you use them?

Using code size as the basis for rules like this is simple: easy to see and understand. Too simple, many people would argue: a better indicator of when code is too big is cyclomatic complexity or some other measure of code quality. But some recent studies show that code size actually is a strong predictor of complexity and quality – that "complexity metrics are highly correlated with lines of code, and therefore the more complex metrics provide no further information that could not be measured simply with lines of code". In 'Beyond Lines of Code: Do we Need more Complexity Metrics' in Making Software, the authors go so far as to say that lines of code should always be considered the "first and only metric" for defect prediction, development and maintenance models.

Recognizing that simple sizing rules are arbitrary, should you use them, and if so how? I like the idea of rough and easy-to-understand rules of thumb that you can keep in the back of your mind when writing code or looking at code and deciding whether it should be refactored. The real value of a guideline like the Rule of 30 is when you're reviewing code and identifying risks and costs. But enforcing these rules in a heavy-handed way on every piece of code as it is being written is foolish. You don't want to stop when you're about to write the 31st line in a method – it would slow down work to a crawl. And forcing everyone to break code up to fit arbitrary size limits will make the code worse, not better – the structure will be dominated by short-term decisions. As Jeff Langer points out in his chapter discussing Kent Beck's four rules of Simple Design in Clean Code: "Our goal is to keep our overall system small while we are also keeping our functions and classes small. Remember however that this rule is the lowest priority of the four rules of Simple Design. So, although it's important to keep class and function count low, it's more important to have tests, eliminate duplication, and express yourself."

Sometimes it will take more than 30 lines (or 20 or 5 or whatever the cut-off is) to get a coherent piece of work done. It's more important to be careful in coming up with the right abstractions and algorithms and to write clean, clear code – if a cut-off guideline on size helps you do that, use it. If it doesn't, then don't bother.   Reference: Rule of 30 – When is a method, class or subsystem too big?
from our JCG partner Jim Bird at the Building Real Software blog. ...

Securing your Tomcat app with SSL and Spring Security

If you've seen my last blog, you'll know that I listed ten things that you can do with Spring Security. However, before you start using Spring Security in earnest, one of the first things you really must do is ensure that your web app uses the right transport protocol, which in this case is HTTPS – after all, there's no point in having a secure web site if you're going to broadcast your users' passwords all over the internet in plain text. To set up SSL there are three basic steps...

Creating a Key Store

The first thing you need is a private keystore containing a valid certificate, and the simplest way to generate one of these is to use Java's keytool utility located in the $JAVA_HOME/bin directory.

keytool -genkey -alias MyKeyAlias -keyalg RSA -keystore /Users/Roger/tmp/roger.keystore

In the above example:

- -alias is the unique identifier for your key.
- -keyalg is the algorithm used to generate the key. Most examples you find on the web usually cite "RSA", but you could also use "DSA" or "DES".
- -keystore is an optional argument specifying the location of your key store file. If this argument is missing then the default location is your $HOME directory.

RSA stands for Ron Rivest (also the creator of the RC4 algorithm), Adi Shamir and Leonard Adleman. DSA stands for Digital Signature Algorithm. DES stands for Data Encryption Standard. For more information on keytool and its arguments take a look at this Informit article by Jon Svede.

When you run this program you'll be asked a few questions:

Roger$ keytool -genkey -alias MyKeyAlias -keyalg RSA -keystore /Users/Roger/tmp/roger.keystore
Enter keystore password:
Re-enter new password:
What is your first and last name?
  [Unknown]:  localhost
What is the name of your organizational unit?
  [Unknown]:  MyDepartmentName
What is the name of your organization?
  [Unknown]:  MyCompanyName
What is the name of your City or Locality?
  [Unknown]:  Stafford
What is the name of your State or Province?
  [Unknown]:  NA
What is the two-letter country code for this unit?
  [Unknown]:  UK
Is CN=localhost, OU=MyDepartmentName, O=MyCompanyName, L=Stafford, ST=UK, C=UK correct?
  [no]:  Y
Enter key password for <MyKeyAlias>
        (RETURN if same as keystore password):

Most of the fields are self-explanatory; however, for the first and second name values, I generally use the machine name – in this case localhost.

Updating the Tomcat Configuration

The second step in securing your app is to ensure that your Tomcat has an SSL connector. To do this you need to find Tomcat's server.xml configuration file, which is usually located in the 'conf' directory. Once you've got hold of this, and if you're using Tomcat, then it's a matter of uncommenting:

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" />

...and making it look something like this:

<Connector SSLEnabled="true" keystoreFile="/Users/Roger/tmp/roger.keystore" keystorePass="password" port="8443" scheme="https" secure="true" sslProtocol="TLS"/>

Note that the password "password" is in plain text, which isn't very secure. There are ways around this, but that's beyond the scope of this blog.
If you're using Spring's tcServer, then you'll find that it already has an SSL connector that's configured something like this:

<Connector SSLEnabled="true" acceptCount="100" connectionTimeout="20000" executor="tomcatThreadPool" keyAlias="tcserver" keystoreFile="${catalina.base}/conf/tcserver.keystore" keystorePass="changeme" maxKeepAliveRequests="15" port="${bio-ssl.https.port}" protocol="org.apache.coyote.http11.Http11Protocol" redirectPort="${bio-ssl.https.port}" scheme="https" secure="true"/>

...in which case it's just a matter of editing the various fields, including keyAlias, keystoreFile and keystorePass.

Configuring your App

If you now start Tomcat and run your web application, you'll find that it's accessible using HTTPS. For example, typing https://localhost:8443/my-app will work, but so will http://localhost:8080/my-app. This means that you also need to do some jiggery-pokery on your app to ensure that it only responds to HTTPS, and there are two approaches you can take.

If you're not using Spring Security, then you can simply add the following to your web.xml before the last web-app tag:

<security-constraint>
    <web-resource-collection>
        <web-resource-name>my-secure-app</web-resource-name>
        <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <user-data-constraint>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>

If you are using Spring Security, then there are a few more steps to getting things going. Part of the general Spring Security setup is to add the following to your web.xml file. Firstly, you need to add a Spring Security application context file to the contextConfigLocation context-param:

<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>/WEB-INF/spring/root-context.xml
        /WEB-INF/spring/appServlet/application-security.xml
    </param-value>
</context-param>

Secondly, you need to add the Spring Security filter and filter-mapping:

<filter>
    <filter-name>springSecurityFilterChain</filter-name>
    <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
</filter>
<filter-mapping>
    <filter-name>springSecurityFilterChain</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

Lastly, you need to create, or edit, your application-security.xml as shown in the very minimalistic example below:

<?xml version="1.0" encoding="UTF-8"?>
<beans:beans xmlns="http://www.springframework.org/schema/security"
    xmlns:beans="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/security http://www.springframework.org/schema/security/spring-security-3.1.xsd">

    <http auto-config="true">
        <intercept-url pattern="/**" requires-channel="https" />
    </http>

    <authentication-manager>
    </authentication-manager>

</beans:beans>

In the example above, the intercept-url element has been set up to intercept all URLs and force them to use the https channel. The configuration details above may give the impression that it's quicker to use the simple web.xml config change, but if you're already using Spring Security, then it's only a matter of adding a requires-channel attribute to your existing configuration.
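A related detail worth checking (an assumption on my part, not covered above): for the CONFIDENTIAL transport guarantee to redirect plain-HTTP requests to HTTPS rather than simply refuse them, the plain-HTTP connector in server.xml should point at the SSL port via redirectPort – a minimal sketch:

<!-- plain-HTTP connector; requests that hit a CONFIDENTIAL
     security-constraint are redirected to port 8443 -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"/>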
A sample app called tomcat-ssl demonstrating the above is available on GitHub at: https://github.com/roghughe/captaindebug   Reference: Securing your Tomcat app with SSL and Spring Security from our JCG partner Roger Hughes at Captain Debug's Blog. ...

Introducing Spring Scala project

The Spring Scala project was first revealed to the world last October at SpringOne2GX. SpringSource has now revealed more information about it, as well as how it can be used in Scala projects. The Spring Scala project exists to bring the power of the Spring framework to Scala. It combines the best of both worlds: the Spring framework and Scala. Using a pure Java framework like Spring in Scala would feel awkward, so Spring Scala was introduced to make Spring a first-class citizen there. ...

A Guide To Authenticating Users With Mozilla Persona

Having only Twitter and Facebook authentication so far, I decided to add Mozilla Persona to the list for my latest project (Computoser, computer-generated music). Why?

- I like trying new things.
- Storing passwords is a tough process, and even though I know how to do it, and even have most of the code written in another project, I don't think that I should contribute to the landscape of every site requiring password authentication.
- Mozilla is an open foundation that has so far generated a lot of great products. Persona implements the BrowserID protocol, which may be supported natively in browsers other than Firefox in the future (for now, you need to include a .js file).
- 3rd party authentication has been attempted many times, and is a great thing, but isn't mainstream for a couple of reasons. Being a bit different, Persona might succeed in becoming more popular. This explanation by Mozilla makes sense.

So, I started with the "Quick setup" guide. It looks really easy – way easier than OpenID or OAuth authentication: you don't have to register anything anywhere, you don't need 3rd party libraries for handling the verification on the server, and you don't need to learn a complex authentication flow, because the flow is simple:

- the user clicks on the signin button
- a pop-up appears
- if not authenticated with Persona, the user is prompted for registration
- if authenticated with Persona, the user is prompted for approval of the authentication to your site
- the popup closes and the page redirects/refreshes – the user is now signed in

Of course, it is not that simple, but there are just a few things to mind that are not mentioned in the tutorial. So let's follow the official tutorial step by step, and I'll expand on each point (the server-side language used is Java, but it's simple and you can do it in any language).

1. Including the .js file – simple. It is advisable to get the js file from the Mozilla server, rather than store it locally, because it is likely to change (in order to fix bugs, for example). It might be trickier if you merge your js files into one (for the sake of faster page loading), but probably your mechanism allows for loading remote js files.

2. The signin and signout buttons. This looks easy as well. Probably it's a good idea to add the logout handler conditionally – only if the user has logged in with Persona (as opposed to other authentication methods that your site supports).

3. Listening to authentication events. Listening to events is suggested to be put on all pages (e.g. included in a header template). But there's a problem here. If your user is already authenticated with Persona, but his session has expired on your site, the script will automatically log the user in. And this would require a page reload a couple of seconds after the page loads. And that's not necessarily what you or the users want – in my case, for example, this may mean that the track they have just played is interrupted because of the page refresh. It can be done with AJAX of course, but it is certainly confusing when something in the UI changes for no apparent reason. Below I'll show a fix for that. Also, the logout listener might not be needed everywhere – as far as I understand it will automatically log out the user in case you have logged out of Persona. This might not be what you want – for example, users might want to keep open tabs with some documents that are not accessible when logged out.

4. Verifying the assertion on the server.
Here you might need a 3rd party library in order to invoke the verification endpoint and parse the JSON result, but these are pretty standard libraries that you probably already have included. Now, how to solve the problem with automatic authentication? Declare a new variable – userRequestedAuthentication – that holds whether the authentication has been initiated explicitly by the user or it has been automatic. In the signin button click handler set that variable to true. Here's how the js code looks (btw, I think it's ok to put the code in document.ready(), rather than directly within the script tag. Assuming you later need some DOM resources in the handler methods, it would be good to have the page fully loaded. On the other hand, this may slow down the process a bit). Note that you can include an empty onlogin handler on all pages, and have the complete one only on the authentication page. But given that the login buttons are either on the homepage, or shown with a javascript modal window, it's probably ok having it everywhere/on multiple pages.

<script type='text/javascript'>
    var loggedInUser = ${context.user != null ? ''' + context.user.email + ''' : 'null'};
    var userRequestedAuthentication = false;
    navigator.id.watch({
        loggedInUser : loggedInUser,
        onlogin : function(assertion) {
            $.ajax({
                type : 'POST',
                url : '${root}/persona/auth',
                data : {assertion : assertion, userRequestedAuthentication : userRequestedAuthentication},
                success : function(data) {
                    if (data != '') {
                        window.location.href = '${root}' + data;
                    }
                },
                error : function(xhr, status, err) {
                    alert('Authentication failure: ' + err);
                }
            });
        },
        onlogout : function() {
            window.location.href = '${root}/logout';
        }
    });
</script>

As you can see, the parameter is passed to the server-side code. What happens there?

@RequestMapping("/persona/auth")
@ResponseBody
public String authenticateWithPersona(@RequestParam String assertion,
        @RequestParam boolean userRequestedAuthentication,
        HttpServletRequest request, Model model) throws IOException {
    if (context.getUser() != null) {
        return "";
    }
    MultiValueMap<String, String> params = new LinkedMultiValueMap<>();
    params.add("assertion", assertion);
    params.add("audience", request.getScheme() + "://" + request.getServerName() + ":"
            + (request.getServerPort() == 80 ? "" : request.getServerPort()));
    PersonaVerificationResponse response = restTemplate.postForObject(
            "https://verifier.login.persona.org/verify", params, PersonaVerificationResponse.class);
    if (response.getStatus().equals("okay")) {
        User user = userService.getUserByEmail(response.getEmail());
        if (user == null && userRequestedAuthentication) {
            return "/signup?email=" + response.getEmail();
        } else if (user != null) {
            if (userRequestedAuthentication || user.isLoginAutomatically()) {
                context.setUser(user);
                return "/";
            } else {
                return "";
            }
        } else {
            return ""; // in case this is not a user-requested operation, do nothing
        }
    } else {
        logger.warn("Persona authentication failed due to reason: " + response.getReason());
        throw new IllegalStateException("Authentication failed");
    }
}

The logic looks more convoluted than you'd like it to be, but let me explain:

- As you can see in the javascript code, an empty string means "do nothing". If anything else is returned, the javascript opens that page. If not using spring-mvc, instead of returning a @ResponseBody String from the method, you would write it to the response output stream (or, in PHP terms, echo it).
- First you check if there's an already authenticated user in the system. If there is, do nothing.
- I'm not sure if there's a scenario where Persona invokes "onlogin" for an already authenticated user, but if you are using other authentication options, Persona won't know that your user has logged in with, say, Twitter.
- Then you invoke the verification URL and parse the result to JSON. I've used RestTemplate, but anything can be used – HttpClient, URLConnection. For the JSON parsing, Spring uses Jackson behind the scenes. You just need to write a value object that holds all the properties that Persona might return; so far I've only included status, email and reason (a minimal sketch follows below; Jackson detail: ignoreUnknown=true; spring-mvc detail: you need FormHttpMessageConverter set on the RestTemplate).
- It is important that the "audience" parameter is exactly the domain the user is currently on. It makes a difference if it's with www or not, so reconstruct it rather than hardcoding it or loading it from properties. Even if you redirect from www to no-www (or vice versa), you should still dynamically obtain the URL for the sake of testing – your test environments don't have the same URL as the production one.
- If Persona authentication is "okay", then you try to locate a user with that email in your database. If there is no such user, and the authentication action has been triggered manually, then send the user to a signup page and supply the email as a parameter (you can also set it in the http session, so that the user can't modify it). The registration page then asks for other details – name, username, date of birth, or whatever you see fit (but keep that to a minimum – ideally just the full name). If you only need the email address and nothing else, you can skip the registration page and force-register the user. After the registration is done, you log the user in. Note that in case you have stored the email in the session (i.e. the user cannot modify it from the registration page), you can skip the confirmation email – the email is already confirmed by Persona.
- If there is a user with that email in your database, check if the action has been requested by the user or whether he has indicated (via a checkbox on the registration page) that he wants to be automatically logged in. This is a thing to consider – should the user be asked about that, or should it always be set to either true or false? I've added the checkbox. If login should occur, then set the user in the session and redirect to home (or the previous page, or whatever page is your "user home"). ("context" is a session-scoped bean. You can replace it with session.setAttribute("user", user).) If the authentication attempt was automatic, but the user doesn't want that, do nothing.
- The final "else" is for the case when the user doesn't have an account on your site and an automatic authentication has been triggered – do nothing in that case, otherwise you'll end up with endless redirects to the registration page in case of failed authentication. Be sure to log the reason – then you can check if everything works properly by looking at the logs.

A cool side-effect of using the email as the unique identifier (make the database column unique) is that if you add Persona later to your site, users can log in with it even though they have registered in a different way – e.g. Facebook or regular registration. So they can set their password to something long and impossible to remember and continue logging in only with Persona.
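For completeness, here is a minimal sketch of the PersonaVerificationResponse value object mentioned above (the field set beyond status, email and reason is up to you; with Jackson 1.x, as bundled with Spring 3.1, the annotation lives in org.codehaus.jackson.annotate instead):

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;

// Maps the verifier's JSON response; unknown properties are ignored.
@JsonIgnoreProperties(ignoreUnknown = true)
public class PersonaVerificationResponse {

    private String status; // "okay" or "failure"
    private String email;  // the verified email address
    private String reason; // failure reason, if any

    public String getStatus() { return status; }
    public void setStatus(String status) { this.status = status; }

    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }

    public String getReason() { return reason; }
    public void setReason(String reason) { this.reason = reason; }
}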
The details I omitted from the implementation are trivial: the signup page simply gathers fields and submits them to a /completeRegistration handler that stores the new user in the database. The /logout URL simply clears the session (and clears cookies if you have stored any). By the way, if automatic signin is enabled, and Persona is your only authentication method, you may not need to store cookies for the sake of keeping the user logged in after the session expires. Overall, the implementation is still simple, even with the points I made. Persona looks great and I'd like to see it on more sites soon.   Reference: A Guide To Authenticating Users With Mozilla Persona from our JCG partner Bozhidar Bozhanov at Bozho's tech blog. ...

Comprehending the Mobile Development Landscape

There's no shortage of mobile growth statistics, but here are a few specific ones that paint an overall picture of mobility:

- Roughly 87% of the world's population has a mobile device.
- Earlier this year, Facebook claimed an astounding 488 million mobile monthly active users.
- Android's user base is growing by 700,000 subscribers a day.

These three facts clearly point out that mobility is a growing, global phenomenon, and that it's drastically changing how people use the Internet. What's more, from a technology standpoint, mobile is where the growth is! But the mobile landscape is as varied as it is big. Unlike a few short years ago, when doing mobile work implied J2ME on a Blackberry, mobile development now encompasses Android, iOS, HTML5, and even Windows Phone. That's 4 distinct platforms with different development platforms and languages – and I haven't even mentioned the myriad hybrid options available! The key to understanding the mobile landscape is an appreciation for the various development platforms – their strengths and weaknesses, speed of development, distribution, and, if you are looking at the consumer market, their payout.

Android

Android device distribution, as I pointed out earlier, is growing faster than other platforms, and the Android ecosystem has more than one app store: Google Play and Amazon's store, to name the two most popular ones. And by most accounts, Google Play has as many or more apps than Apple's App Store (careful with this statistic though; see the details below regarding payouts). The massive adoption of Android, however, has led to fragmentation, which does present some significant challenges with respect to testing. In fact, the reality for most developers is that it is almost impossible to test an app on all combinations of device-OS version profiles in a cost-effective manner (this is a growing service industry, by the way). On a positive note, Java, the native language of Android apps, is a fairly ubiquitous language – some estimates peg active Java developers at as many as 10 million – so there's no shortage of able-bodied Java developers and their associated tools out there. Thus, with Android, you have a wide audience (both people with Android devices and developers to build apps) and multiple distribution channels. Yet this large distribution of disparate devices does present some testing challenges; what's more, it can be more difficult to make money on the Android platform compared to iOS, as you'll see next.

iOS

iOS, the OS for iPhones and iPads, has a tight ecosystem and an avid user base willing to spend money, ultimately translating into more money for developers. That is, even though there are far more Android devices globally than iOS ones, the iTunes App Store generates more money than Google Play, which means more money for developers of popular apps. In many respects, users of iOS devices are also more willing to pay a fee for an app than Android users are. The development ecosystem for iOS has a higher barrier to entry compared to something like Java or JavaScript. OSX is a requirement, and the cost alone here can be a barrier for a lot of developers; moreover, Objective-C can present some challenges for the faint of heart (manual memory management!). Yet the tooling provided by Apple is almost universally lauded by the community at large (much like Microsoft's VisualStudio) – XCode is a slick development tool. While there isn't a lot of device fragmentation on iOS, developers do have to deal with OS fragmentation.
That is, there are only a handful of Apple devices but quite a lot of different OS versions living in the field at any given time, due to the lag in user upgrades. The iOS platform certainly offers a direct path to revenue, provided you can build a stellar app; however, compared to Android, this is a closed community, which has the tendency to rub some portion of the development community the wrong way. Given that you can quickly embrace Objective-C and afford the requisite software, iOS is almost always the first platform app developers target.

HTML5

HTML5 is truly universal and its apps are available on all platforms without any need to port them – JavaScript is as ubiquitous as Java; what's more, HTML itself has almost no barrier to entry, making HTML5 and JavaScript a force to contend with when it comes to finding talented developers and mass distribution. Cost isn't even really part of the HTML5 equation either – tools and frameworks are free. Yet HTML5 apps suffer from a distribution challenge – the major app stores do not carry these apps! Thus, in large part, as an HTML5 app developer, you are relying on a user to type your URL into a browser. I, for one, almost never type in a URL on my iPhone (while I will on my iPad). Lastly, HTML5 is nowhere near parity with native apps with respect to UX (and may never be). This, however, is only a disadvantage if you are building an app that requires a strong UX. There are plenty of great HTML5 apps out there! HTML5 offers an extremely low developmental barrier to entry and the widest support available – all smart devices have browsers (note, they aren't all created equal!); however, because there isn't a viable distribution channel, these apps have limited opportunity to make money.

Windows Phone

Windows is still unproven but could be an opportunity to get in early – first movers in Apple's App Store without a doubt made far more money than if they had submitted the same apps today. In this case, if you want a truly native experience you'll build apps on the .NET platform (presumably C#). Windows machines are far cheaper than OSX ones, so there is little financial barrier other than license fees for VisualStudio and a developer fee for the Windows Phone Marketplace. Indeed, it appears that Microsoft is modeling their app store and corresponding policies on Apple's – thus there is a tightly managed distribution channel, presenting an opportunity to reach a wide audience and earn their money. But, at this point, that wide audience has yet to develop.

That's 4, but there's still more!

As I alluded to at the beginning of this post, there are 4 primary platforms and myriad hybrid options, such as PhoneGap and Appcelerator, for example. These hybrid options have various advantages and disadvantages; however, the primary concerns one needs to think through are still speed of development, distribution, and payout. Before you embark on a mobile development effort, it pays to have the end in mind – that is, before you code, have tangible answers for app distribution, development effort, and potential payout, as these points will help guide you through the mobile landscape.   Reference: Comprehending the Mobile Development Landscape from our JCG partner Andrew Glover at The Disco Blog. ...

The new log4j 2.0

A while ago a new major version of the well-known log4j logging framework was released. Since the first alpha version appeared, 4 more releases have happened! You see, there is much more activity than with the predecessor, log4j 1. And seriously, despite log4j 2's young age, it is way better. This blog will give an overview of a few of the great features of Apache log4j 2.0.

Modern API

In the old days, people wrote things like this:

if (logger.isDebugEnabled()) {
    logger.debug("Hi, " + u.getA() + " " + u.getB());
}

Many have complained about it: it is very unreadable. And if one were to forget the surrounding if-clause, a lot of unnecessary Strings would be the result. These days String creation is most likely optimized by the modern JVM, but should we rely on JVM optimizations? The log4j 2.0 team thought about things like that and improved the API. Now you can write the same like this:

logger.debug("Hi, {} {}", u.getA(), u.getB());

The new and improved API supports placeholders with variable arguments, as other modern logging frameworks do. There is more API sweetness, like Markers and flow tracing:

private Logger logger = LogManager.getLogger(MyApp.class.getName());
private static final Marker QUERY_MARKER = MarkerManager.getMarker("SQL");
...

public String doQuery(String table) {
    logger.entry(param);
    logger.debug(QUERY_MARKER, "SELECT * FROM {}", table);
    return logger.exit();
}

Markers let you identify specific log entries quickly. Flow traces are methods which you can call at the start and the end of a method. In your log file you would see a lot of new logging entries at trace level: your program flow is logged. Example of flow traces:

19:08:07.056 TRACE com.test.TestService 19 retrieveMessage - entry
19:08:07.060 TRACE com.test.TestService 46 getKey - entry

Plugin Architecture

log4j 2.0 supports a plugin architecture. Extending log4j 2 to your own needs has become dead easy. You can build your extensions in the namespace of your choice and just need to tell the framework where to look:

<configuration … packages="de.grobmeier.examples.log4j2.plugins">

With the above configuration, log4j 2 would look for plugins in the de.grobmeier.examples.log4j2.plugins package. If you have several namespaces, no problem: it is a comma-separated list. A simple plugin looks like this:

@Plugin(name = "Sandbox", type = "Core", elementType = "appender")
public class SandboxAppender extends AppenderBase {

    private SandboxAppender(String name, Filter filter) {
        super(name, filter, null);
    }

    public void append(LogEvent event) {
        System.out.println(event.getMessage().getFormattedMessage());
    }

    @PluginFactory
    public static SandboxAppender createAppender(
            @PluginAttr("name") String name,
            @PluginElement("filters") Filter filter) {
        return new SandboxAppender(name, filter);
    }
}

The method with the @PluginFactory annotation serves as, well, a factory. The two arguments of the factory are read directly from the configuration file. I achieved that behavior by using @PluginAttr and @PluginElement on my arguments. The rest is pretty trivial too. As I wrote an appender, I chose to extend AppenderBase. It forces me to implement the append() method, which does the actual job. Besides Appenders, you can even write your own Logger or Filter. Just take a look at the docs.

Powerful Configuration

The new log4j 2 configuration has become easier. Don't worry: if you could understand the old way to configure log4j, it will take you only a short time to learn the differences in the new way.
It looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<configuration status="OFF">
  <appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
    </Console>
  </appenders>
  <loggers>
    <logger name="com.foo.Bar" level="trace" additivity="false">
      <appender-ref ref="Console"/>
    </logger>
    <root level="error">
      <appender-ref ref="Console"/>
    </root>
  </loggers>
</configuration>

Please look at the appenders section. You are able to use speaking tags, e.g. matching the name of the appender. No more class names. Of course, this XML document cannot be validated. In case you urgently need XML validation, you can still use a more strict XML format, which is reminiscent of the old one:

<appenders>
  <appender type="type" name="name">
    <filter type="type" ... />
  </appender>
  ...
</appenders>

But that's not all. You can even reload your configuration automatically:

<?xml version="1.0" encoding="UTF-8"?>
<configuration monitorInterval="30">
...
</configuration>

The monitor interval is a value in seconds; the minimum value is 5. It means log4j 2 will reconfigure logging in case something has changed in your configuration. If set to zero or left out, no change detection will happen. Best of all: log4j 2.0 does not lose logging events at the time of reconfiguration, unlike many other frameworks. And there is even more exciting stuff. If you prefer JSON to XML, you are free to go with a JSON configuration:

{
  "configuration": {
    "appenders": {
      "Console": {
        "name": "STDOUT",
        "PatternLayout": { "pattern": "%m%n" }
      }
    },
    "loggers": {
      "logger": {
        "name": "EventLogger",
        "level": "info",
        "additivity": "false",
        "appender-ref": { "ref": "Routing" }
      },
      "root": {
        "level": "error",
        "appender-ref": { "ref": "STDOUT" }
      }
    }
  }
}

The new configuration is really powerful and supports things like property substitution. Check it out in more detail on the manual pages.

log4j 2.0 is a team player: slf4j and friends

Apache log4j 2.0 has many integrations. It works well if your application still runs with Commons Logging. Not only that, you can use it with slf4j. You can bridge your log4j 1.x app to use log4j 2 in the background. And another opportunity: log4j 2 supports Apache Flume. Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data.

Java 5 Concurrency

From the docs: "log4j 2 takes advantage of Java 5 concurrency support and performs locking at the lowest level possible." Apache log4j 2.0 addresses many deadlock issues which are still in log4j 1.x. If you suffer from log4j 1.x memory leaks, you should definitely look at log4j 2.0.

Built at the Apache Software Foundation

Like log4j 1.x, log4j 2.x is an Apache Software Foundation project. That means it is licensed under the Apache License 2.0 and thus will stay free. You can build your own product upon it, you are free to modify it to your own needs, and you can redistribute it, even commercially. You don't need to worry about intellectual property; the Apache Software Foundation takes care of that. If you would like to know more about licensing and how you can use log4j 2.0 for your own project, I refer you to the Licensing FAQ.   Reference: The new log4j 2.0 from our JCG partner Christian Grobmeier at the PHP und Java Entwickler blog. ...

Hazelcast Distributed Execution with Spring

The ExecutorService feature came with Java 5 and resides under the java.util.concurrent package. It extends the Executor interface and provides thread pool functionality to execute asynchronous short tasks. Java Executor Service Types is suggested reading for a look over basic ExecutorService implementations. Also, ThreadPoolExecutor is a very useful implementation of the ExecutorService interface: it extends AbstractExecutorService, providing default implementations of the ExecutorService execution methods, offers improved performance when executing large numbers of asynchronous tasks, and maintains basic statistics, such as the number of completed tasks. How to develop and monitor Thread Pool Services by using Spring is also suggested reading for investigating how to develop and monitor thread pool services.

So far, we have only talked about undistributed ExecutorService implementations. Let us also investigate the Distributed Executor Service. The Hazelcast Distributed Executor Service feature is a distributed implementation of java.util.concurrent.ExecutorService. It allows business logic to be executed in the cluster. There are four alternative ways to realize it:

- The logic can be executed on a specific cluster member which is chosen.
- The logic can be executed on the member owning the key which is chosen.
- The logic can be executed on the member Hazelcast will pick.
- The logic can be executed on all or a subset of the cluster members.

This article shows how to develop a Distributed Executor Service via Hazelcast and Spring.

Used Technologies:

- JDK 1.7.0_09
- Spring 3.1.3
- Hazelcast 2.4
- Maven 3.0.4

STEP 1 : CREATE MAVEN PROJECT

A Maven project is created as below. (It can be created by using Maven or an IDE plug-in.)

STEP 2 : LIBRARIES

Firstly, Spring dependencies are added to Maven's pom.xml:

<properties>
    <spring.version>3.1.3.RELEASE</spring.version>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>

<dependencies>
    <!-- Spring 3 dependencies -->
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-core</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <!-- Hazelcast library -->
    <dependency>
        <groupId>com.hazelcast</groupId>
        <artifactId>hazelcast-all</artifactId>
        <version>2.4</version>
    </dependency>
    <!-- Log4j library -->
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.16</version>
    </dependency>
</dependencies>

maven-compiler-plugin (Maven plugin) is used to compile the project with JDK 1.7:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.0</version>
    <configuration>
        <source>1.7</source>
        <target>1.7</target>
    </configuration>
</plugin>

maven-shade-plugin (Maven plugin) can be used to create a runnable jar:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.0</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <transformers>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                        <mainClass>com.onlinetechvision.exe.Application</mainClass>
                    </transformer>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
                        <resource>META-INF/spring.handlers</resource>
                    </transformer>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
                        <resource>META-INF/spring.schemas</resource>
                    </transformer>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>

STEP 3 : CREATE Customer BEAN

A new Customer bean is created. This bean will be distributed between the two nodes in the OTV cluster. In the following sample, all defined properties' types (id, name and surname) are String, and the standard java.io.Serializable interface is implemented for serialization. If custom or third-party object types are used, the com.hazelcast.nio.DataSerializable interface can be implemented for better serialization performance.

package com.onlinetechvision.customer;

import java.io.Serializable;

/**
 * Customer Bean.
 *
 * @author onlinetechvision.com
 * @since 27 Nov 2012
 * @version 1.0.0
 */
public class Customer implements Serializable {

    private static final long serialVersionUID = 1856862670651243395L;

    private String id;
    private String name;
    private String surname;

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getSurname() { return surname; }
    public void setSurname(String surname) { this.surname = surname; }

    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        result = prime * result + ((id == null) ? 0 : id.hashCode());
        result = prime * result + ((name == null) ? 0 : name.hashCode());
        result = prime * result + ((surname == null) ? 0 : surname.hashCode());
        return result;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj)
            return true;
        if (obj == null)
            return false;
        if (getClass() != obj.getClass())
            return false;
        Customer other = (Customer) obj;
        if (id == null) {
            if (other.id != null)
                return false;
        } else if (!id.equals(other.id))
            return false;
        if (name == null) {
            if (other.name != null)
                return false;
        } else if (!name.equals(other.name))
            return false;
        if (surname == null) {
            if (other.surname != null)
                return false;
        } else if (!surname.equals(other.surname))
            return false;
        return true;
    }

    @Override
    public String toString() {
        return "Customer [id=" + id + ", name=" + name + ", surname=" + surname + "]";
    }
}

STEP 4 : CREATE ICacheService INTERFACE

A new ICacheService interface is created for the service layer to expose cache functionality.

package com.onlinetechvision.cache.srv;

import com.hazelcast.core.IMap;
import com.onlinetechvision.customer.Customer;

/**
 * A new ICacheService Interface is created for the service layer to expose cache functionality.
 *
 * @author onlinetechvision.com
 * @since 27 Nov 2012
 * @version 1.0.0
 */
public interface ICacheService {

    /**
     * Adds Customer entries to cache
     *
     * @param String key
     * @param Customer customer
     */
    void addToCache(String key, Customer customer);

    /**
     * Deletes Customer entries from cache
     *
     * @param String key
     */
    void deleteFromCache(String key);

    /**
     * Gets Customer cache
     *
     * @return IMap Coherence named cache
     */
    IMap<String, Customer> getCache();
}

STEP 5 : CREATE CacheService IMPLEMENTATION

CacheService is the implementation of the ICacheService interface.

package com.onlinetechvision.cache.srv;

import com.hazelcast.core.IMap;
import com.onlinetechvision.customer.Customer;
import com.onlinetechvision.test.listener.CustomerEntryListener;

/**
 * CacheService Class is the implementation of the ICacheService Interface.
* * @author onlinetechvision.com * @since 27 Nov 2012 * @version 1.0.0 * */ public class CacheService implements ICacheService {private IMap<String, Customer> customerMap;/** * Constructor of CacheService * * @param IMap customerMap * */ @SuppressWarnings('unchecked') public CacheService(IMap<String, Customer> customerMap) { setCustomerMap(customerMap); getCustomerMap().addEntryListener(new CustomerEntryListener(), true); }/** * Adds Customer entries to cache * * @param String key * @param Customer customer * */ @Override public void addToCache(String key, Customer customer) { getCustomerMap().put(key, customer); }/** * Deletes Customer entries from cache * * @param String key * */ @Override public void deleteFromCache(String key) { getCustomerMap().remove(key); }/** * Gets Customer cache * * @return IMap Coherence named cache */ @Override public IMap<String, Customer> getCache() { return getCustomerMap(); }public IMap<String, Customer> getCustomerMap() { return customerMap; }public void setCustomerMap(IMap<String, Customer> customerMap) { this.customerMap = customerMap; }} STEP 6 : CREATE IDistributedExecutorService INTERFACE A new IDistributedExecutorService Interface is created for service layer to expose distributed execution functionality. package com.onlinetechvision.executor.srv;import java.util.Collection; import java.util.Set; import java.util.concurrent.Callable; import java.util.concurrent.ExecutionException;import com.hazelcast.core.Member;/** * A new IDistributedExecutorService Interface is created for service layer to expose distributed execution functionality. * * @author onlinetechvision.com * @since 27 Nov 2012 * @version 1.0.0 * */ public interface IDistributedExecutorService {/** * Executes the callable object on stated member * * @param Callable callable * @param Member member * @throws InterruptedException * @throws ExecutionException * */ String executeOnStatedMember(Callable<String> callable, Member member) throws InterruptedException, ExecutionException;/** * Executes the callable object on member owning the key * * @param Callable callable * @param Object key * @throws InterruptedException * @throws ExecutionException * */ String executeOnTheMemberOwningTheKey(Callable<String> callable, Object key) throws InterruptedException, ExecutionException;/** * Executes the callable object on any member * * @param Callable callable * @throws InterruptedException * @throws ExecutionException * */ String executeOnAnyMember(Callable<String> callable) throws InterruptedException, ExecutionException;/** * Executes the callable object on all members * * @param Callable callable * @param Set all members * @throws InterruptedException * @throws ExecutionException * */ Collection<String> executeOnMembers(Callable<String> callable, Set<Member> members) throws InterruptedException, ExecutionException; } STEP 7 : CREATE DistributedExecutorService IMPLEMENTATION DistributedExecutorService is implementation of IDistributedExecutorService Interface. package com.onlinetechvision.executor.srv;import java.util.Collection; import java.util.Set; import java.util.concurrent.Callable; import java.util.concurrent.ExecutionException; import java.util.concurrent.ExecutorService; import java.util.concurrent.Future; import java.util.concurrent.FutureTask;import org.apache.log4j.Logger;import com.hazelcast.core.DistributedTask; import com.hazelcast.core.Member; import com.hazelcast.core.MultiTask;/** * DistributedExecutorService Class is implementation of IDistributedExecutorService Interface. 
STEP 6 : CREATE IDistributedExecutorService INTERFACE

A new IDistributedExecutorService interface is created for the service layer to expose the distributed execution functionality.

package com.onlinetechvision.executor.srv;

import java.util.Collection;
import java.util.Set;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;

import com.hazelcast.core.Member;

/**
 * IDistributedExecutorService Interface exposes the distributed execution functionality at the service layer.
 *
 * @author onlinetechvision.com
 * @since 27 Nov 2012
 * @version 1.0.0
 */
public interface IDistributedExecutorService {

    /**
     * Executes the callable object on the stated member.
     */
    String executeOnStatedMember(Callable<String> callable, Member member) throws InterruptedException, ExecutionException;

    /**
     * Executes the callable object on the member owning the key.
     */
    String executeOnTheMemberOwningTheKey(Callable<String> callable, Object key) throws InterruptedException, ExecutionException;

    /**
     * Executes the callable object on any member.
     */
    String executeOnAnyMember(Callable<String> callable) throws InterruptedException, ExecutionException;

    /**
     * Executes the callable object on all given members.
     */
    Collection<String> executeOnMembers(Callable<String> callable, Set<Member> members) throws InterruptedException, ExecutionException;
}

STEP 7 : CREATE DistributedExecutorService IMPLEMENTATION

DistributedExecutorService is the implementation of the IDistributedExecutorService interface.

package com.onlinetechvision.executor.srv;

import java.util.Collection;
import java.util.Set;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

import org.apache.log4j.Logger;

import com.hazelcast.core.DistributedTask;
import com.hazelcast.core.Member;
import com.hazelcast.core.MultiTask;

/**
 * DistributedExecutorService Class is the implementation of the IDistributedExecutorService Interface.
 *
 * @author onlinetechvision.com
 * @since 27 Nov 2012
 * @version 1.0.0
 */
public class DistributedExecutorService implements IDistributedExecutorService {

    private static final Logger logger = Logger.getLogger(DistributedExecutorService.class);

    private ExecutorService hazelcastDistributedExecutorService;

    /**
     * Executes the callable object on the stated member.
     */
    @SuppressWarnings("unchecked")
    public String executeOnStatedMember(Callable<String> callable, Member member) throws InterruptedException, ExecutionException {
        logger.debug("Method executeOnStatedMember is called...");
        ExecutorService executorService = getHazelcastDistributedExecutorService();
        FutureTask<String> task = (FutureTask<String>) executorService.submit(new DistributedTask<String>(callable, member));
        String result = task.get();
        logger.debug("Result of method executeOnStatedMember is : " + result);
        return result;
    }

    /**
     * Executes the callable object on the member owning the key.
     */
    @SuppressWarnings("unchecked")
    public String executeOnTheMemberOwningTheKey(Callable<String> callable, Object key) throws InterruptedException, ExecutionException {
        logger.debug("Method executeOnTheMemberOwningTheKey is called...");
        ExecutorService executorService = getHazelcastDistributedExecutorService();
        FutureTask<String> task = (FutureTask<String>) executorService.submit(new DistributedTask<String>(callable, key));
        String result = task.get();
        logger.debug("Result of method executeOnTheMemberOwningTheKey is : " + result);
        return result;
    }

    /**
     * Executes the callable object on any member.
     */
    public String executeOnAnyMember(Callable<String> callable) throws InterruptedException, ExecutionException {
        logger.debug("Method executeOnAnyMember is called...");
        ExecutorService executorService = getHazelcastDistributedExecutorService();
        Future<String> task = executorService.submit(callable);
        String result = task.get();
        logger.debug("Result of method executeOnAnyMember is : " + result);
        return result;
    }

    /**
     * Executes the callable object on all given members.
     */
    public Collection<String> executeOnMembers(Callable<String> callable, Set<Member> members) throws ExecutionException, InterruptedException {
        logger.debug("Method executeOnMembers is called...");
        MultiTask<String> task = new MultiTask<String>(callable, members);
        ExecutorService executorService = getHazelcastDistributedExecutorService();
        executorService.execute(task);
        Collection<String> results = task.get();
        logger.debug("Result of method executeOnMembers is : " + results.toString());
        return results;
    }

    public ExecutorService getHazelcastDistributedExecutorService() {
        return hazelcastDistributedExecutorService;
    }

    public void setHazelcastDistributedExecutorService(ExecutorService hazelcastDistributedExecutorService) {
        this.hazelcastDistributedExecutorService = hazelcastDistributedExecutorService;
    }
}
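The Spring-injected executor above is simply a named Hazelcast executor, so a DistributedTask can also be submitted to it directly. Below is a minimal sketch, not part of the original project, assuming the Hazelcast 2.x static factory API (Hazelcast.getExecutorService); the class name DirectExecutionDemo is hypothetical.

package com.onlinetechvision.executor.srv;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.FutureTask;

import com.hazelcast.core.DistributedTask;
import com.hazelcast.core.Hazelcast;
import com.onlinetechvision.task.TestCallable;

public class DirectExecutionDemo {

    @SuppressWarnings("unchecked")
    public static void main(String[] args) throws Exception {
        // Look up the named distributed executor on the default Hazelcast instance.
        ExecutorService executor = Hazelcast.getExecutorService("hazelcastDistributedExecutorService");

        // Wrapping the callable in a DistributedTask routes it to the member owning key "3".
        FutureTask<String> task = (FutureTask<String>) executor.submit(
                new DistributedTask<String>(new TestCallable(), "3"));
        System.out.println(task.get());
    }
}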
STEP 8 : CREATE TestCallable CLASS

TestCallable Class shows the business logic to be executed.

TestCallable task for the first member of the cluster :

package com.onlinetechvision.task;

import java.io.Serializable;
import java.util.concurrent.Callable;

/**
 * TestCallable Class shows the business logic to be executed.
 *
 * @author onlinetechvision.com
 * @since 27 Nov 2012
 * @version 1.0.0
 */
public class TestCallable implements Callable<String>, Serializable {

    private static final long serialVersionUID = -1839169907337151877L;

    /**
     * Computes a result, or throws an exception if unable to do so.
     *
     * @return String computed result
     * @throws Exception if unable to compute a result
     */
    public String call() throws Exception {
        return "First Member's TestCallable Task is called...";
    }
}

TestCallable task for the second member of the cluster :

package com.onlinetechvision.task;

import java.io.Serializable;
import java.util.concurrent.Callable;

/**
 * TestCallable Class shows the business logic to be executed.
 *
 * @author onlinetechvision.com
 * @since 27 Nov 2012
 * @version 1.0.0
 */
public class TestCallable implements Callable<String>, Serializable {

    private static final long serialVersionUID = -1839169907337151877L;

    /**
     * Computes a result, or throws an exception if unable to do so.
     *
     * @return String computed result
     * @throws Exception if unable to compute a result
     */
    public String call() throws Exception {
        return "Second Member's TestCallable Task is called...";
    }
}

STEP 9 : CREATE AnotherAvailableMemberNotFoundException CLASS

AnotherAvailableMemberNotFoundException is thrown when another available member is not found. To avoid this exception, the first node should be started before the second node.

package com.onlinetechvision.exception;

/**
 * AnotherAvailableMemberNotFoundException is thrown when another available member is not found.
 * To avoid this exception, the first node should be started before the second node.
 *
 * @author onlinetechvision.com
 * @since 27 Nov 2012
 * @version 1.0.0
 */
public class AnotherAvailableMemberNotFoundException extends Exception {

    private static final long serialVersionUID = -3954360266393077645L;

    /**
     * Constructor of AnotherAvailableMemberNotFoundException.
     *
     * @param message exception message
     */
    public AnotherAvailableMemberNotFoundException(String message) {
        super(message);
    }
}

STEP 10 : CREATE CustomerEntryListener CLASS

CustomerEntryListener Class listens for entry changes on the named cache object.

package com.onlinetechvision.test.listener;

import com.hazelcast.core.EntryEvent;
import com.hazelcast.core.EntryListener;

/**
 * CustomerEntryListener Class listens for entry changes on the named cache object.
 *
 * @author onlinetechvision.com
 * @since 27 Nov 2012
 * @version 1.0.0
 */
@SuppressWarnings("rawtypes")
public class CustomerEntryListener implements EntryListener {

    /**
     * Invoked when an entry is added.
     */
    public void entryAdded(EntryEvent ee) {
        System.out.println("EntryAdded... Member : " + ee.getMember() + ", Key : " + ee.getKey()
                + ", OldValue : " + ee.getOldValue() + ", NewValue : " + ee.getValue());
    }

    /**
     * Invoked when an entry is removed.
     */
    public void entryRemoved(EntryEvent ee) {
        System.out.println("EntryRemoved... Member : " + ee.getMember() + ", Key : " + ee.getKey()
                + ", OldValue : " + ee.getOldValue() + ", NewValue : " + ee.getValue());
    }

    /**
     * Invoked when an entry is evicted.
     */
    public void entryEvicted(EntryEvent ee) {
    }

    /**
     * Invoked when an entry is updated.
     */
    public void entryUpdated(EntryEvent ee) {
    }
}
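If you prefer to avoid the rawtypes suppression, the listener can be parameterized instead. A minimal sketch, assuming the generic EntryListener/EntryEvent signatures of Hazelcast 2.x; the class name TypedCustomerEntryListener is hypothetical and not part of the project.

package com.onlinetechvision.test.listener;

import com.hazelcast.core.EntryEvent;
import com.hazelcast.core.EntryListener;
import com.onlinetechvision.customer.Customer;

// Parameterizing the listener removes the need for @SuppressWarnings("rawtypes")
// and gives type-safe access to keys and values in the callbacks.
public class TypedCustomerEntryListener implements EntryListener<String, Customer> {

    public void entryAdded(EntryEvent<String, Customer> ee) {
        System.out.println("EntryAdded... Key : " + ee.getKey() + ", NewValue : " + ee.getValue());
    }

    public void entryRemoved(EntryEvent<String, Customer> ee) {
        System.out.println("EntryRemoved... Key : " + ee.getKey() + ", OldValue : " + ee.getOldValue());
    }

    public void entryEvicted(EntryEvent<String, Customer> ee) {
    }

    public void entryUpdated(EntryEvent<String, Customer> ee) {
    }
}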
STEP 11 : CREATE Starter CLASS

Starter Class loads Customers to the cache and executes the distributed tasks.

Starter Class of the first member of the cluster :

package com.onlinetechvision.exe;

import com.onlinetechvision.cache.srv.ICacheService;
import com.onlinetechvision.customer.Customer;

/**
 * Starter Class loads Customers to the cache.
 *
 * @author onlinetechvision.com
 * @since 27 Nov 2012
 * @version 1.0.0
 */
public class Starter {

    private ICacheService cacheService;

    /**
     * Loads the cache.
     */
    public void start() {
        loadCacheForFirstMember();
    }

    /**
     * Loads Customers to the cache.
     */
    public void loadCacheForFirstMember() {
        Customer firstCustomer = new Customer();
        firstCustomer.setId("1");
        firstCustomer.setName("Jodie");
        firstCustomer.setSurname("Foster");

        Customer secondCustomer = new Customer();
        secondCustomer.setId("2");
        secondCustomer.setName("Kate");
        secondCustomer.setSurname("Winslet");

        getCacheService().addToCache(firstCustomer.getId(), firstCustomer);
        getCacheService().addToCache(secondCustomer.getId(), secondCustomer);
    }

    public ICacheService getCacheService() {
        return cacheService;
    }

    public void setCacheService(ICacheService cacheService) {
        this.cacheService = cacheService;
    }
}
Starter Class of the second member of the cluster :

package com.onlinetechvision.exe;

import java.util.Set;
import java.util.concurrent.ExecutionException;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.Member;
import com.onlinetechvision.cache.srv.ICacheService;
import com.onlinetechvision.customer.Customer;
import com.onlinetechvision.exception.AnotherAvailableMemberNotFoundException;
import com.onlinetechvision.executor.srv.IDistributedExecutorService;
import com.onlinetechvision.task.TestCallable;

/**
 * Starter Class loads Customers to the cache and executes the distributed tasks.
 *
 * @author onlinetechvision.com
 * @since 27 Nov 2012
 * @version 1.0.0
 */
public class Starter {

    private String hazelcastInstanceName;
    private Hazelcast hazelcast;
    private IDistributedExecutorService distributedExecutorService;
    private ICacheService cacheService;

    /**
     * Loads the cache and executes the tasks.
     */
    public void start() {
        loadCache();
        executeTasks();
    }

    /**
     * Loads Customers to the cache.
     */
    public void loadCache() {
        Customer firstCustomer = new Customer();
        firstCustomer.setId("3");
        firstCustomer.setName("Bruce");
        firstCustomer.setSurname("Willis");

        Customer secondCustomer = new Customer();
        secondCustomer.setId("4");
        secondCustomer.setName("Colin");
        secondCustomer.setSurname("Farrell");

        getCacheService().addToCache(firstCustomer.getId(), firstCustomer);
        getCacheService().addToCache(secondCustomer.getId(), secondCustomer);
    }

    /**
     * Executes the tasks.
     */
    public void executeTasks() {
        try {
            getDistributedExecutorService().executeOnStatedMember(new TestCallable(), getAnotherMember());
            getDistributedExecutorService().executeOnTheMemberOwningTheKey(new TestCallable(), "3");
            getDistributedExecutorService().executeOnAnyMember(new TestCallable());
            getDistributedExecutorService().executeOnMembers(new TestCallable(), getAllMembers());
        } catch (InterruptedException | ExecutionException | AnotherAvailableMemberNotFoundException e) {
            e.printStackTrace();
        }
    }

    /**
     * Gets the cluster members.
     *
     * @return Set<Member> set of cluster members
     */
    private Set<Member> getAllMembers() {
        return getHazelcastLocalInstance().getCluster().getMembers();
    }

    /**
     * Gets another member of the cluster.
     *
     * @return Member another member of the cluster
     * @throws AnotherAvailableMemberNotFoundException if no other available member is found
     */
    private Member getAnotherMember() throws AnotherAvailableMemberNotFoundException {
        Set<Member> members = getAllMembers();
        for (Member member : members) {
            if (!member.localMember()) {
                return member;
            }
        }
        throw new AnotherAvailableMemberNotFoundException("No other available member on the cluster. Please check that all members of the cluster are running.");
    }

    /**
     * Gets the Hazelcast local instance.
     *
     * @return HazelcastInstance Hazelcast local instance
     */
    @SuppressWarnings("static-access")
    private HazelcastInstance getHazelcastLocalInstance() {
        return getHazelcast().getHazelcastInstanceByName(getHazelcastInstanceName());
    }

    public String getHazelcastInstanceName() {
        return hazelcastInstanceName;
    }

    public void setHazelcastInstanceName(String hazelcastInstanceName) {
        this.hazelcastInstanceName = hazelcastInstanceName;
    }

    public Hazelcast getHazelcast() {
        return hazelcast;
    }

    public void setHazelcast(Hazelcast hazelcast) {
        this.hazelcast = hazelcast;
    }

    public IDistributedExecutorService getDistributedExecutorService() {
        return distributedExecutorService;
    }

    public void setDistributedExecutorService(IDistributedExecutorService distributedExecutorService) {
        this.distributedExecutorService = distributedExecutorService;
    }

    public ICacheService getCacheService() {
        return cacheService;
    }

    public void setCacheService(ICacheService cacheService) {
        this.cacheService = cacheService;
    }
}

STEP 12 : CREATE hazelcast-config.properties FILE

The hazelcast-config.properties file shows the properties of the cluster members.

First member properties :

hz.instance.name = OTVInstance1

hz.group.name = dev
hz.group.password = dev

hz.management.center.enabled = true
hz.management.center.url = http://localhost:8080/mancenter

hz.network.port = 5701
hz.network.port.auto.increment = false

hz.tcp.ip.enabled = true

hz.members = 192.168.1.32

hz.executor.service.core.pool.size = 2
hz.executor.service.max.pool.size = 30
hz.executor.service.keep.alive.seconds = 30

hz.map.backup.count=2
hz.map.max.size=0
hz.map.eviction.percentage=30
hz.map.read.backup.data=true
hz.map.cache.value=true
hz.map.eviction.policy=NONE
hz.map.merge.policy=hz.ADD_NEW_ENTRY

Second member properties :

hz.instance.name = OTVInstance2

hz.group.name = dev
hz.group.password = dev

hz.management.center.enabled = true
hz.management.center.url = http://localhost:8080/mancenter

hz.network.port = 5702
hz.network.port.auto.increment = false

hz.tcp.ip.enabled = true

hz.members = 192.168.1.32

hz.executor.service.core.pool.size = 2
hz.executor.service.max.pool.size = 30
hz.executor.service.keep.alive.seconds = 30

hz.map.backup.count=2
hz.map.max.size=0
hz.map.eviction.percentage=30
hz.map.read.backup.data=true
hz.map.cache.value=true
hz.map.eviction.policy=NONE
hz.map.merge.policy=hz.ADD_NEW_ENTRY
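For comparison, the first member's settings could also be built programmatically instead of via properties and Spring XML. A rough sketch, assuming the Hazelcast 2.x Config API (method names may differ slightly between versions); the class name FirstMemberConfigDemo is hypothetical and not part of the project.

package com.onlinetechvision.config;

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class FirstMemberConfigDemo {

    public static void main(String[] args) {
        Config config = new Config();
        config.setInstanceName("OTVInstance1");

        // Cluster group credentials.
        config.getGroupConfig().setName("dev");
        config.getGroupConfig().setPassword("dev");

        // Fixed port and TCP-IP join instead of multicast.
        config.getNetworkConfig().setPort(5701);
        config.getNetworkConfig().setPortAutoIncrement(false);
        config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
        config.getNetworkConfig().getJoin().getTcpIpConfig().setEnabled(true);
        config.getNetworkConfig().getJoin().getTcpIpConfig().addMember("192.168.1.32");

        HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);
        System.out.println("Started instance : " + instance.getName());
    }
}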
STEP 13 : CREATE applicationContext-hazelcast.xml

The Spring Hazelcast configuration file, applicationContext-hazelcast.xml, is created, and the Hazelcast Distributed Executor Service and Hazelcast Instance are configured.

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:hz="http://www.hazelcast.com/schema/spring"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                           http://www.hazelcast.com/schema/spring
                           http://www.hazelcast.com/schema/spring/hazelcast-spring-2.4.xsd">

    <hz:map id="customerMap" name="customerMap" instance-ref="instance"/>

    <!-- Hazelcast Distributed Executor Service definition -->
    <hz:executorService id="hazelcastDistributedExecutorService" instance-ref="instance" name="hazelcastDistributedExecutorService" />

    <!-- Hazelcast Instance configuration -->
    <hz:hazelcast id="instance">
        <hz:config>

            <!-- Hazelcast Instance Name -->
            <hz:instance-name>${hz.instance.name}</hz:instance-name>

            <!-- Hazelcast Group Name and Password -->
            <hz:group name="${hz.group.name}" password="${hz.group.password}"/>

            <!-- Hazelcast Management Center URL -->
            <hz:management-center enabled="${hz.management.center.enabled}" url="${hz.management.center.url}"/>

            <!-- Hazelcast TCP-IP based network configuration -->
            <hz:network port="${hz.network.port}" port-auto-increment="${hz.network.port.auto.increment}">
                <hz:join>
                    <hz:tcp-ip enabled="${hz.tcp.ip.enabled}">
                        <hz:members>${hz.members}</hz:members>
                    </hz:tcp-ip>
                </hz:join>
            </hz:network>

            <!-- Hazelcast Distributed Executor Service configuration -->
            <hz:executor-service name="executorService"
                core-pool-size="${hz.executor.service.core.pool.size}"
                max-pool-size="${hz.executor.service.max.pool.size}"
                keep-alive-seconds="${hz.executor.service.keep.alive.seconds}"/>

            <!-- Hazelcast Distributed Map configuration -->
            <hz:map name="map"
                backup-count="${hz.map.backup.count}"
                max-size="${hz.map.max.size}"
                eviction-percentage="${hz.map.eviction.percentage}"
                read-backup-data="${hz.map.read.backup.data}"
                cache-value="${hz.map.cache.value}"
                eviction-policy="${hz.map.eviction.policy}"
                merge-policy="${hz.map.merge.policy}" />

        </hz:config>
    </hz:hazelcast>

</beans>

STEP 14 : CREATE applicationContext.xml

The Spring configuration file, applicationContext.xml, is created.

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:hz="http://www.hazelcast.com/schema/spring"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

    <import resource="classpath:applicationContext-hazelcast.xml" />

    <!-- Beans Declaration -->
    <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="locations">
            <list>
                <value>classpath:/hazelcast-config.properties</value>
            </list>
        </property>
    </bean>

    <bean id="cacheService" class="com.onlinetechvision.cache.srv.CacheService">
        <constructor-arg ref="customerMap"/>
    </bean>

    <bean id="distributedExecutorService" class="com.onlinetechvision.executor.srv.DistributedExecutorService">
        <property name="hazelcastDistributedExecutorService" ref="hazelcastDistributedExecutorService" />
    </bean>

    <bean id="hazelcast" class="com.hazelcast.core.Hazelcast"/>

    <bean id="starter" class="com.onlinetechvision.exe.Starter">
        <property name="hazelcastInstanceName" value="${hz.instance.name}" />
        <property name="hazelcast" ref="hazelcast" />
        <property name="distributedExecutorService" ref="distributedExecutorService" />
        <property name="cacheService" ref="cacheService" />
    </bean>

</beans>

STEP 15 : CREATE Application CLASS

Application Class is created to run the application.

package com.onlinetechvision.exe;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

/**
 * Application class starts the application.
 *
 * @author onlinetechvision.com
 * @since 27 Nov 2012
 * @version 1.0.0
 */
public class Application {

    /**
     * Starts the application.
     *
     * @param args command-line arguments
     */
    public static void main(String[] args) {
        ApplicationContext context = new ClassPathXmlApplicationContext("applicationContext.xml");
        Starter starter = (Starter) context.getBean("starter");
        starter.start();
    }
}
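Each member can then be launched from its own build of the jar. A hypothetical invocation, assuming the shaded jar carries all dependencies on the classpath:

java -cp OTV_Spring_Hazelcast_DistributedExecution-0.0.1-SNAPSHOT.jar com.onlinetechvision.exe.Application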
STEP 16 : BUILD PROJECT

After the OTV_Spring_Hazelcast_DistributedExecution project is built, OTV_Spring_Hazelcast_DistributedExecution-0.0.1-SNAPSHOT.jar will be created. Important note: the members of the cluster have different Hazelcast configurations, so the project should be built separately for each member.

STEP 17 : INTEGRATION with HAZELCAST MANAGEMENT CENTER

Hazelcast Management Center enables you to monitor and manage the nodes in the cluster. Entry and backup counts owned by customerMap can be seen via the Map Memory Data table. We have distributed 4 entries via customerMap, as shown in the screenshots below.

Sample keys and values can be seen via the Map Browser:

Added first entry (screenshot)

Added third entry (screenshot)

hazelcastDistributedExecutorService details can be seen via the Executors tab. We have executed 3 tasks on the first member and 2 tasks on the second member, as shown in the screenshots.

STEP 18 : RUN PROJECT BY STARTING THE CLUSTER'S MEMBERS

After the created OTV_Spring_Hazelcast_DistributedExecution-0.0.1-SNAPSHOT.jar file is run on the cluster's members, the following console output will be shown:

First member console output :

Kas 25, 2012 4:07:20 PM com.hazelcast.impl.AddressPicker
INFO: Interfaces is disabled, trying to pick one address from TCP-IP config addresses: [x.y.z.t]
Kas 25, 2012 4:07:20 PM com.hazelcast.impl.AddressPicker
INFO: Prefer IPv4 stack is true.
Kas 25, 2012 4:07:20 PM com.hazelcast.impl.AddressPicker
INFO: Picked Address[x.y.z.t]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true
Kas 25, 2012 4:07:21 PM com.hazelcast.system
INFO: [x.y.z.t]:5701 [dev] Hazelcast Community Edition 2.4 (20121017) starting at Address[x.y.z.t]:5701
Kas 25, 2012 4:07:21 PM com.hazelcast.system
INFO: [x.y.z.t]:5701 [dev] Copyright (C) 2008-2012 Hazelcast.com
Kas 25, 2012 4:07:21 PM com.hazelcast.impl.LifecycleServiceImpl
INFO: [x.y.z.t]:5701 [dev] Address[x.y.z.t]:5701 is STARTING
Kas 25, 2012 4:07:24 PM com.hazelcast.impl.TcpIpJoiner
INFO: [x.y.z.t]:5701 [dev]

-- A new cluster is created and the first member joins the cluster.

Members [1] {
    Member [x.y.z.t]:5701 this
}

Kas 25, 2012 4:07:24 PM com.hazelcast.impl.MulticastJoiner
INFO: [x.y.z.t]:5701 [dev]

Members [1] {
    Member [x.y.z.t]:5701 this
}

...

-- First member adds two new entries to the cache...

EntryAdded... Member : Member [x.y.z.t]:5701 this, Key : 1, OldValue : null, NewValue : Customer [id=1, name=Jodie, surname=Foster]
EntryAdded... Member : Member [x.y.z.t]:5701 this, Key : 2, OldValue : null, NewValue : Customer [id=2, name=Kate, surname=Winslet]

...

-- Second member joins the cluster.

Members [2] {
    Member [x.y.z.t]:5701 this
    Member [x.y.z.t]:5702
}

...

-- Second member adds two new entries to the cache...

EntryAdded... Member : Member [x.y.z.t]:5702, Key : 4, OldValue : null, NewValue : Customer [id=4, name=Colin, surname=Farrell]
EntryAdded...
Member : Member [x.y.z.t]:5702, Key : 3, OldValue : null, NewValue : Customer [id=3, name=Bruce, surname=Willis]

Second member console output :

Kas 25, 2012 4:07:48 PM com.hazelcast.impl.AddressPicker
INFO: Interfaces is disabled, trying to pick one address from TCP-IP config addresses: [x.y.z.t]
Kas 25, 2012 4:07:48 PM com.hazelcast.impl.AddressPicker
INFO: Prefer IPv4 stack is true.
Kas 25, 2012 4:07:48 PM com.hazelcast.impl.AddressPicker
INFO: Picked Address[x.y.z.t]:5702, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5702], bind any local is true
Kas 25, 2012 4:07:49 PM com.hazelcast.system
INFO: [x.y.z.t]:5702 [dev] Hazelcast Community Edition 2.4 (20121017) starting at Address[x.y.z.t]:5702
Kas 25, 2012 4:07:49 PM com.hazelcast.system
INFO: [x.y.z.t]:5702 [dev] Copyright (C) 2008-2012 Hazelcast.com
Kas 25, 2012 4:07:49 PM com.hazelcast.impl.LifecycleServiceImpl
INFO: [x.y.z.t]:5702 [dev] Address[x.y.z.t]:5702 is STARTING
Kas 25, 2012 4:07:49 PM com.hazelcast.impl.Node
INFO: [x.y.z.t]:5702 [dev] ** setting master address to Address[x.y.z.t]:5701
Kas 25, 2012 4:07:49 PM com.hazelcast.impl.MulticastJoiner
INFO: [x.y.z.t]:5702 [dev] Connecting to master node: Address[x.y.z.t]:5701
Kas 25, 2012 4:07:49 PM com.hazelcast.nio.ConnectionManager
INFO: [x.y.z.t]:5702 [dev] 55715 accepted socket connection from /x.y.z.t:5701
Kas 25, 2012 4:07:55 PM com.hazelcast.cluster.ClusterManager
INFO: [x.y.z.t]:5702 [dev]

-- Second member joins the cluster.

Members [2] {
    Member [x.y.z.t]:5701
    Member [x.y.z.t]:5702 this
}

Kas 25, 2012 4:07:56 PM com.hazelcast.impl.LifecycleServiceImpl
INFO: [x.y.z.t]:5702 [dev] Address[x.y.z.t]:5702 is STARTED

-- Second member adds two new entries to the cache...

EntryAdded... Member : Member [x.y.z.t]:5702 this, Key : 3, OldValue : null, NewValue : Customer [id=3, name=Bruce, surname=Willis]
EntryAdded... Member : Member [x.y.z.t]:5702 this, Key : 4, OldValue : null, NewValue : Customer [id=4, name=Colin, surname=Farrell]

25.11.2012 16:07:56 DEBUG (DistributedExecutorService.java:42) - Method executeOnStatedMember is called...
25.11.2012 16:07:56 DEBUG (DistributedExecutorService.java:46) - Result of method executeOnStatedMember is : First Member's TestCallable Task is called...
25.11.2012 16:07:56 DEBUG (DistributedExecutorService.java:61) - Method executeOnTheMemberOwningTheKey is called...
25.11.2012 16:07:56 DEBUG (DistributedExecutorService.java:65) - Result of method executeOnTheMemberOwningTheKey is : First Member's TestCallable Task is called...
25.11.2012 16:07:56 DEBUG (DistributedExecutorService.java:78) - Method executeOnAnyMember is called...
25.11.2012 16:07:57 DEBUG (DistributedExecutorService.java:82) - Result of method executeOnAnyMember is : Second Member's TestCallable Task is called...
25.11.2012 16:07:57 DEBUG (DistributedExecutorService.java:96) - Method executeOnMembers is called...
25.11.2012 16:07:57 DEBUG (DistributedExecutorService.java:101) - Result of method executeOnMembers is : [First Member's TestCallable Task is called..., Second Member's TestCallable Task is called...]

STEP 19 : DOWNLOAD

https://github.com/erenavsarogullari/OTV_Spring_Hazelcast_DistributedExecution

Related Links :
Java ExecutorService Interface
Hazelcast Distributed Executor Service

Reference: Hazelcast Distributed Execution with Spring from our JCG partner Eren Avsarogullari at the Online Technology Vision blog.

Agile Program Management: How Will You Deliver?

One of my program management clients is organizing a program and is having trouble thinking about a delivery model that fits their program. They are transitioning to agile and are accustomed to traditional releases. When I suggested they have someone representing deployment on their core team, that was an initial shock to their system, and now they see that it was a good idea. They don't have a hardware person on their core team, but otherwise they look a lot like this picture.

With agile, they have options they didn't have before. Because they are a software-only product, they have the option to release as a mandated release, as before: they can roll out with IT scheduling the release and mandating when people upgrade. I asked how well that worked before. You should have seen people's eyes roll when I asked that question!

I suggested there might be other options: continuous deployment and phased deployment. With continuous deployment, you deploy as you have features ready. You do need continuous integration across all the project teams, or you need to somehow manage the risk of not having the code integrated. You need enough test automation to know the product will not blow up, so you can continue to deploy new features without breaking old features.

Remember, a program is big. Imagine you have at least 50 people working on this program. This client expects to have closer to 150 at full staffing. (I think that agile approaches will mean they will need fewer people, but that's my opinion only. No data yet.) If you have 5-7 person cross-functional teams, all checking in completed features every day or two or three, and everyone is verifying that yes, their code works against the main branch, you have a lot of progress being made every single day. Every day. 50 people is only 10 teams. That's it. 150 people is 30 teams. Talk about a program taking on a life of its own! You can't "control" that. You can guide it.

Some people were concerned: what if they delivered the wrong features first with continuous deployment? I asked about their roadmap. Who was updating the roadmap, and how often? How long were the iterations? Remember Short is Beautiful? I had a chance to explain that again.

Then we discussed phased deployment, where they could commit to certain features on the roadmap at certain times to the customer. They could deliver builds on certain days and give themselves enough time to test those builds after they had completed the features.

I asked if they were doing hardening sprints now. They sheepishly said yes, they were. I said that was fine, as long as they knew what they were doing. I believe in being honest with yourself. I asked if they were planning on being able to integrate testing into the development sprints. They tried to give me a song-and-dance about how testing was integrated. I explained that if they had hardening sprints now, only 8 weeks into the program, testing was not integrated. It was only going to get worse the more people they added and the longer they proceeded. Did they know and understand their impediments?

It's one thing to make your release decisions because it's a business decision. It's another because you can't choose one of the options. It turns out the program product owner had a bad case of feature-itis. This can be a disaster for a program. All of the program product owner's decisions are magnified, because those decisions are rolled down to the feature teams. If you have 10 teams, that's one thing.
If you have 30 teams, that's even more magnification. If you decide to de-emphasize technical debt, and the technical teams don't address the debt when they create the features, you might be left with code that doesn't always compile, never mind release.

When you start a program, think about how you want to deliver. As Covey said, "Start with the end in mind." There is no one right answer. Continuous deployment might be right for you and your customers. It might not. Your customers might say, "No, we do not want the system to change all the time. Stop it!" However, you might decide that you want the discipline of being able to release at any given moment. In that case, continuous deployment internally might be exactly the right model.

Phased deployment is a great alternative if you have a roadmap and you know when you want to commit to which features on your roadmap. You want to pick your customers and work with those customers carefully. A gazillion years ago, when I managed beta programs and worked with beta customers, I set their expectations about what they would get in each beta release. It's the same idea, except you want to have many more deployments.

If you have to stick with a traditional rollout because you have hardware or some other constraint, you'll have to work to make your product enticing. You won't have the work-in-progress as enticement. The more you work towards continuous deployment internally, the more your program product owner can see the value of your work. That means the faster the program can potentially be over. It's worth trying.

What is clear to me is that if you want to be agile in your program, you need to think about delivery and deployment as soon as you start your program management work. How you deliver and release is critical. Once you know your release criteria, you need to know how you will release. There is no right or wrong decision. It's a decision for your program.

Reference: Agile Program Management: How Will You Deliver? from our JCG partner Johanna Rothman at the Managing Product Development blog.