
Agile Estimation, Prediction, and Commitment

Your boss wants a commitment. You want to offer a prediction. Agile, you say, only allows you to estimate and predict – not to commit. "Horse-hockey!" your boss exclaims, "I want one throat to choke, and it will be yours if you don't make a commitment and meet it." There's a way to keep yourself off the corporate gallows – estimate, predict, and commit – using agile principles. This is an article about agile product management and release planning.

Change and Uncertainty

In the dark ages before your team became agile, you would make estimates and commitments. You never exactly met your commitments, and no one really noticed. That was how the game was played. You made a commitment, everyone knew it would be wrong, but they expected it anyway. Maybe your boss handicapped your commitment – removing scope, lowering expectations, padding the schedule. Heck, that's been the recipe for success since they planned the pyramids. It makes sense:

Your early estimates are wrong. When you add them up, the total will be wrong. If you do PERT estimation, the law of large numbers will help you in aggregate. But you'll still be wrong.
The outside demands on, and availability of, your people will change. Unplanned sick time, attrition, levels of commitment over time – a lot of "people stuff" is really unknown.
The needs of your customers will change. Markets evolve over time. You get smarter, your competitors get better, your customers' expectations change.

Agile processes are designed to help you deliver what your customer actually needs, not what was originally asked for. Contrast the two worlds. In the old world, you would commit to delivering a couple of pyramids. After spending double your budget, with double the project duration, you would have delivered one pyramid. When you deliver it, you find out that sphinxes are all the rage. Oops. Your team changed to agile, so that you could deliver the sphinx.
But your Pharaoh still wants a commitment to deliver a couple of pyramids (the smart ones will be expecting to get just one). You can stay true to agile, and still mollify your boss' need to have a commitment, if you take advantage of the first principles of why agile estimation works.

Estimation

A commitment is a factual prediction of the future. "This will take two weeks." Nobody is prescient. A factual prediction has to be nuanced. "I expect* this will take no more than two weeks."

*in reality, this is shorthand for a mathematical prediction, such as "I expect, with 95% confidence, that this will take no more than two weeks."

Few non-scientists, non-engineers, and non-mathematicians understand that 95% confidence has a precise meaning. People usually interpret it to mean "a 5% chance that it will take more than two weeks." What it really means is that if this exact same task were performed twenty thousand times (in a hypothetical world, of course), then nineteen thousand of those times it would be completed in under two weeks – do you feel lucky? To make a statement like this, you actually have to create a PERT estimate – identifying the best-case, worst-case, and most-likely case for how long a task will take.

Unfortunately, we're rarely asked to make a commitment about a single task – but rather a large collection of tasks – well-defined, ill-defined, and undefined. You can combine PERT estimates for the individual tasks, resulting in an overall estimate of the collection of tasks. The beauty of this approach is that the central limit theorem, and the law of large numbers, work to help you estimate a collection of tasks – you can actually provide better estimates of a group of tasks than of a single task. This obviously helps with the well-defined tasks that you know about at the start of the project. This even helps with the ill-defined tasks. Rationalists will argue that the key, then, is to do more up-front research to discover the undefined tasks – and then we're set.
As Frederick Brooks (The Mythical Man-Month) points out in The Design of Design, this debate has been going on since Descartes and Locke. It is not a new idea. Big Up-Front Design and Requirements (BUFD & BUFR) hasn't worked particularly well, so far. Don't throw out the baby with the bath-water, however. The math of estimation is still important and useful, even if empiricism is not the silver bullet.

Prediction

Estimation is a form of prediction. Even agile teams do it. In Scrum, you estimate a collection of user stories – in story points that represent complexity – and you predict how many points the team can complete in this sprint. Note the time factor. If you're working a two-week sprint, there is very little risk of changes in staffing during a two-week period. There's also very little risk that your market will change significantly in two weeks – and if it does, what are the odds that you will notice and materially change your requirements in two weeks?

Visually, let's take that PERT estimate and turn it sideways – so we can introduce the dimension of time. Imagine you estimated all of the tasks (well-defined, ill-defined, and a guess about the undefined), as if they were all to happen in the first sprint. Ignore inter-task dependencies, and pretend you had unlimited resources and the ability to perform all tasks in parallel. The graph above shows the aggregate estimate – the circle is your best prediction, with error bars representing your confidence interval in the estimate. If you were using PERT estimates, these could represent the 5% and 95% confidence lines. Subjectively pick something based on your team's experience in the domain and your confidence in your guesses (about the undefined tasks).
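The PERT arithmetic behind those confidence lines can be sketched in a few lines of Java. This is a minimal illustration of the standard formulas (mean = (O + 4M + P) / 6, standard deviation = (P − O) / 6) and of why aggregation helps; the task numbers are invented.

```java
// Each task gets a best-case (O), most-likely (M), and worst-case (P)
// estimate. The aggregate mean is the sum of the task means, and (per the
// central limit theorem) the aggregate standard deviation is the square
// root of the summed task variances -- which is why a group of tasks can
// be bounded more tightly, in relative terms, than a single task.
public class PertEstimate {

    final double optimistic, likely, pessimistic;

    PertEstimate(double o, double m, double p) {
        this.optimistic = o;
        this.likely = m;
        this.pessimistic = p;
    }

    double mean()     { return (optimistic + 4 * likely + pessimistic) / 6; }
    double stdDev()   { return (pessimistic - optimistic) / 6; }
    double variance() { double s = stdDev(); return s * s; }

    /** Aggregate several task estimates into {mean, standard deviation}. */
    static double[] aggregate(PertEstimate... tasks) {
        double mean = 0, variance = 0;
        for (PertEstimate t : tasks) {
            mean += t.mean();
            variance += t.variance();
        }
        return new double[] { mean, Math.sqrt(variance) };
    }

    public static void main(String[] args) {
        // Three tasks, estimated in days.
        double[] total = aggregate(
            new PertEstimate(1, 2, 9),
            new PertEstimate(2, 4, 6),
            new PertEstimate(3, 5, 13));
        // A rough ~95% upper bound: mean + 2 standard deviations.
        System.out.printf("mean=%.2f sd=%.2f upper95=%.2f%n",
            total[0], total[1], total[0] + 2 * total[1]);
    }
}
```

Note how the combined standard deviation (about 2.24 days here) is much smaller than the sum of the individual deviations (about 3.67 days) – that is the law of large numbers working for you.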
We need a segue into the "best of waterfall" approach to estimating projects, to steal and invert a good idea.

The Cone of Uncertainty

The folks at Construx have published a nice explanation of the cone of uncertainty – an adaptation of an idea from Steve McConnell's Software Estimation: Demystifying the Black Art (2006). That article uses his imagery with permission – so please go look at it there. The idea is that as the project becomes better defined (e.g. during the project), the amount of uncertainty is reduced. The findings show that initial estimates are off by 400% (either low by a factor of 4 or high by a factor of 4)! Even after "nailing down" requirements, estimates are still off by 30% to 50%! As bad as that sounds, it is actually worse. This is a prediction for the original project (delivering pyramids). Not only are your estimates wrong – they are bad estimates for delivering the wrong product. But the core idea is sound – the further into the future you have to execute, the greater the mistakes in your estimate.

Taking that concept, and applying it to our diagram, we get the following: the further into the future you are trying to predict, the less accuracy you have in your prediction. This reduction in accuracy is reflected as a widening of the confidence bands for your estimate. A couple sprints' worth of work is not much different than one sprint – so your estimation range is not much changed. An entire release of sprints (say 6 to 10 sprints) has much more opportunity for the unknown to rear its head. Now, your prediction is (probably) unusably vague and imprecise. "This set of tasks will take X, plus or minus a factor of two." That's the reality.

Note: This has always been the reality. People have historically reduced this "risk to timing" by hiding the "risk of change" aspects – and waterfall processes encourage you to deliver the wrong thing, as close to on-time as possible. That's not what we want to do, however.
We still want to deliver the (not-yet-defined) right product, as efficiently as possible. That's the goal of agile. (For folks who haven't been here at Tyner Blain for long – "right" includes both value and quality.)

Refinement

Because we're agile, and we're willing to "get smarter" about our product over time, we have an opportunity to improve. Because of the nature of compounding estimates and the cone of uncertainty, our uncertainty gets smaller over time. Let's remove our artificial simplification that we could do everything "right now" and look at what we think we know right now, about the end of the release. Our ability to predict the amount of effort (for today's definition of the product) at the end of the release is not very good. Our ability to predict (today's definition of the product) one sprint into the future is much better.

After completing the first sprint, we are a little bit smarter – the ill-defined tasks are better defined. Maybe some of the undefined tasks are now ill-defined. The same cone of uncertainty is now a little bit smaller – we are a little bit smarter, and the time horizon of the release date is a little bit closer. The trend continues – each sprint gets us closer to the release date, and with each sprint (assuming we get feedback from our customers, and continue to study our markets) we get a little bit smarter. We also get better at predicting the team's velocity (how much "product" they can deliver during each sprint).

Commitment

Your boss still wants a commitment, however. And that's where we get to change the way we look at this (again). The above diagrams all display how we converge on an estimate for a stable body of work. However, we know that the body of work is constantly changing. Backlog! [you say] Yes! The backlog. The backlog is an ordered, prioritized list of user stories and bugs.
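That sprint-by-sprint narrowing can be sketched numerically. This is purely illustrative: the geometric interpolation from the 4x initial error factor down to 1x at release is an assumption made for demonstration, not part of the published cone-of-uncertainty model.

```java
// Illustrative model of the shrinking cone: an estimate's multiplicative
// error factor decays geometrically from 4x at project start (the figure
// quoted above for initial estimates) to 1x at the release date.
public class ConeOfUncertainty {

    /** Error factor after completing `done` of `total` sprints. */
    static double errorFactor(int done, int total) {
        double initial = 4.0; // off by a factor of 4 at the start
        return Math.pow(initial, 1.0 - (double) done / total);
    }

    public static void main(String[] args) {
        int sprints = 8;
        double estimate = 320; // story points for the release
        for (int s = 0; s <= sprints; s += 2) {
            double f = errorFactor(s, sprints);
            System.out.printf("after sprint %d: estimate %.0f, range [%.0f, %.0f]%n",
                s, estimate, estimate / f, estimate * f);
        }
    }
}
```

The point of the exercise: at sprint 0 the band is 80 to 1280 points (unusably vague), by mid-release it has tightened to 160 to 640, and only near the end does it collapse onto the estimate.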
I was talking with Luke Hohmann of Innovation Games last month, and one of the most popular online Innovation Games is now the one they created based on prioritizing by bang for the buck. Play it today online (for free!). How cool is that? The backlog represents the work the team is going to do – in the order in which the team is going to do it. Over time, as we get smarter, we will add and remove items from the backlog – because we discover new capabilities that are important, and because we learn that some things aren’t worth doing. We will even re-order the backlog as we recognize shifting priorities in the markets (or in our changing strategy). As this happens, it turns out that the items at the top of the list are least likely to get displaced, and therefore most likely to still be part of the product by the time we get to the release. Instead of thinking about uncertainty in terms of how long it takes, think about uncertainty in terms of how much we complete in a fixed amount of time. In agile, generally, we apply a timebox approach to determining what gets built. Now, uncertainty, instead of manifesting as “when do we finish?” becomes “what will we finish?” Your boss is rational. She appreciates the constraints, she just wants to know what you can commit. Every boss I’ve worked with has been willing (sometimes only after much discussion) to treat this uncertainty in terms of what instead of when. They acknowledge that they need to translate (usually for their boss) into a “fixed” commitment. The solution: commit to a subset of what you predict you can complete. At the start of the release, you may have 500 points worth of stories. Based on your team’s expected velocity, and the number of sprints in the release, you predict that you can complete 320 points worth of stories (5 people on the team, a team velocity of 40 points per sprint, and 8 sprints in the release). 
Starting at the top of the backlog and working down, draw a cut-line at the last story you can complete (when you reach 320 points). This is your prediction. Now the commitment part. You'll have to figure out what you're comfortable with. Maybe for 8 sprints (say, 16 weeks into the future), you may only be comfortable committing to half that amount – 160 points. Go back to the top of the backlog, and count down until you reach 160 points. Everything above the line is what you commit to delivering. Maybe you are comfortable committing to 240 points, maybe only 80. This is like playing spades. The more you can commit to, without missing, the better off you are. Your tolerance for risk is different than mine.

You can also negotiate with your boss. Commit to 160 points now, and provide an update after every other sprint. More likely than not, you will be increasing the scope of your commitment with every update. Mid-project updates of "we can do more" are always better than "we can do less." And both are better than end-of-project surprises. This also allows you to have updates that look like this: "We didn't know this at the start of the release, but X is really important to our customers – and we will be able to deliver X in addition to what we already committed. Without slipping the release date."

Conclusion

Making commitments with an agile process is not impossible. It just needs to be approached differently (if you want to stay true to agile). The end result: better predictions, more realistic commitments, and the likelihood that each update will be good news instead of bad. Don't forget to share!

Reference: Agile Estimation, Prediction, and Commitment from our JCG partner Scott Sehlhorst at the Business Analysis | Product Management | Software Requirements blog....
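The velocity and cut-line arithmetic above can be sketched as follows; the backlog point values are invented for illustration.

```java
// Sketch of the cut-line logic: predict capacity from velocity x sprints,
// then walk the ordered backlog from the top and draw the commitment line
// at a chosen fraction of that prediction.
public class CommitmentCutLine {

    static int capacity(int velocity, int sprints) {
        return velocity * sprints;
    }

    /** How many stories, taken from the top in order, fit under `limit` points. */
    static int cutLine(int[] storyPoints, int limit) {
        int sum = 0, count = 0;
        for (int p : storyPoints) {
            if (sum + p > limit) break;
            sum += p;
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        int predicted = capacity(40, 8);  // 320 points, as in the article
        int committed = predicted / 2;    // commit to half: 160 points
        int[] backlog = { 40, 30, 50, 20, 40, 30, 60, 50 }; // ordered backlog
        System.out.println("predict " + cutLine(backlog, predicted)
            + " stories, commit to " + cutLine(backlog, committed)); // predict 8, commit to 4
    }
}
```

Because the backlog is ordered by priority, everything above the commitment line is also the work least likely to be displaced by change – which is exactly why committing to the top of the list is safe.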

JSON – Jackson to the rescue

Sometimes you have to fetch some data from the server in JavaScript; JSON is a pretty good choice for this task. Let's play with the Employer – Employee – Benefit example from the post JPA Demystified (episode 1) – @OneToMany and @ManyToOne mappings. We will use it inside a web application based on the Spring Framework. Our first controller will return the employees list as the response body; in our case MappingJacksonHttpMessageConverter will be used automagically for converting the value returned by the handleGet method into the response sent to the client.

@Controller
@RequestMapping("/employee-list.json")
public class EmployeeListController {

    @Autowired
    private EmployerDAO employerDAO;

    @RequestMapping(method = RequestMethod.GET)
    @ResponseBody
    public List<Employee> handleGet(@RequestParam("employerId") Long employerId) {
        return employerDAO.getEmployees(employerId);
    }
}

When we try to fetch the data for the first time, we encounter a beautiful exception: JsonMappingException: Infinite recursion (StackOverflowError) – caused by the bi-directional references between Employer – Employee – Benefit. Looking for a possible solution, I found the note Handle bi-directional references using declarative method(s), and after reading it, I corrected the domain entities in the following way:

@Entity
@Table(name = "EMPLOYERS")
public class Employer implements Serializable {
    ...
    @JsonManagedReference("employer-employee")
    @OneToMany(mappedBy = "employer", cascade = CascadeType.PERSIST)
    public List<Employee> getEmployees() {
        return employees;
    }
    ...
}

@Entity
@Table(name = "EMPLOYEES")
public class Employee implements Serializable {
    ...
    @JsonManagedReference("employee-benefit")
    @OneToMany(mappedBy = "employee", cascade = CascadeType.PERSIST)
    public List<Benefit> getBenefits() {
        return benefits;
    }

    @JsonBackReference("employer-employee")
    @ManyToOne(optional = false)
    @JoinColumn(name = "EMPLOYER_ID")
    public Employer getEmployer() {
        return employer;
    }
    ...
}

@Entity
@Table(name = "BENEFITS")
public class Benefit implements Serializable {
    ...
    @JsonBackReference("employee-benefit")
    @ManyToOne(optional = false)
    @JoinColumn(name = "EMPLOYEE_ID")
    public Employee getEmployee() {
        return employee;
    }
    ...
}

After performing the above changes, I could finally enjoy the JSON response returned by my code:

[{"id":1, "benefits":[{"name":"Healthy Employees", "id":1, "type":"HEALTH_COVERAGE", "startDate":1104534000000, "endDate":null}, {"name":"Gold Autumn","id":2,"type":"RETIREMENT_PLAN","startDate":1104534000000,"endDate":null},{"name":"Always Secured","id":3,"type":"GROUP_TERM_LIFE","startDate":1104534000000,"endDate":null}],"firstName":"John"},{"id":2,"benefits":[],"firstName":"Mary"},{"id":3,"benefits":[],"firstName":"Eugene"}]

And as usual, some links for the dessert:

JSON – JavaScript Object Notation
Jackson – High-performance JSON processor

Reference: JSON – Jackson to the rescue from our JCG partner Michał Jaśtak at the Warlock's Thoughts blog. ...

PrimeFaces Push with Atmosphere on GlassFish 3.1.2.2

PrimeFaces 3.4 came out three days ago. Besides the usual awesomeness of new and updated components, it also includes the new PrimeFaces Push framework. Based on Atmosphere, this provides easy push mechanisms for your applications. Here is how to configure and run it on the latest GlassFish 3.1.2.2.

Preparations

As usual, you should have some Java, Maven and GlassFish installed. If you want it all out of one hand, give NetBeans 7.2 a try. It is the latest and greatest and comes with all the things you need for this example. Install the parts or the whole to a location of your choice and start with creating a new GlassFish domain:

asadmin create-domain pf_push

accept the default values and start your domain:

asadmin start-domain pf_push

Now you have to enable Comet support for your domain. Do this either by using the http://<host>:4848/ admin UI or with the following command:

asadmin set server-config.network-config.protocols.protocol.http-1.http.comet-support-enabled='true'

That is all you have to do to configure your domain.

The Maven Project Setup

Now switch to your IDE and create a new Maven based Java EE 6 project. Add the primefaces repository to the <repositories> section, and add the primefaces dependency to the <dependencies> section of your project's pom.xml:

<repository>
    <url>http://repository.primefaces.org/</url>
    <id>primefaces</id>
    <layout>default</layout>
    <name>Repository for library PrimeFaces 3.2</name>
</repository>

<dependency>
    <groupId>org.primefaces</groupId>
    <artifactId>primefaces</artifactId>
    <version>3.4</version>
</dependency>

Additionally we need the latest Atmosphere dependency (congrats to Jean-Francois Arcand for this release):

<dependency>
    <groupId>org.atmosphere</groupId>
    <artifactId>atmosphere-runtime</artifactId>
    <version>1.0.0</version>
</dependency>

It is using Log4j, and if you need some more output it is a good idea to also include the corresponding configuration, or bridge it to JUL with SLF4J.
To do the latter, simply include the following in your pom.xml:

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.6.6</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-jdk14</artifactId>
    <version>1.6.6</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>log4j-over-slf4j</artifactId>
    <version>1.6.6</version>
</dependency>

There is only one thing left to do. The PrimePush component needs to have its servlet channel registered. So, open your web.xml and add the following to it:

<servlet>
    <servlet-name>Push Servlet</servlet-name>
    <servlet-class>org.primefaces.push.PushServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>Push Servlet</servlet-name>
    <url-pattern>/primepush/*</url-pattern>
</servlet-mapping>

That was it! On to the code!

The Code

I'm going to use the example referred to in the PrimeFaces users guide. A very simple example which has a global counter that can be incremented.

import java.io.Serializable;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.SessionScoped;
import org.primefaces.push.PushContext;
import org.primefaces.push.PushContextFactory;

/**
 * Counter is a global counter where each button click increments the count
 * value and the new value is pushed to all subscribers.
 *
 * @author eiselem
 */
@ManagedBean
@SessionScoped
public class GlobalCounterBean implements Serializable {

    private int count;

    public int getCount() {
        return count;
    }

    public void setCount(int count) {
        this.count = count;
    }

    public synchronized void increment() {
        count++;
        PushContext pushContext = PushContextFactory.getDefault().getPushContext();
        pushContext.push("/counter", String.valueOf(count));
    }
}

The PushContext contains the whole magic here. It is mainly used to publish and schedule messages, manage listeners, and more. It is called from your facelet.
This looks simple and familiar:

<h:form id="counter">
    <h:outputText id="out" value="#{globalCounterBean.count}" styleClass="display" />
    <p:commandButton value="Click" actionListener="#{globalCounterBean.increment}" />
</h:form>

This basically does nothing, except incrementing the counter. So you have to add some more magic for connecting to the push channel. Add the following below the form:

<p:socket channel="/counter">
    <p:ajax event="message" update="counter:out" />
</p:socket>

<p:socket /> is the PrimeFaces component that handles the connection between the server and the browser. It does so by defining a communication channel and a callback to handle the broadcasts. The contained <p:ajax /> component listens to the message event and updates the counter field in the form. This, however, requires an additional server round-trip. You could also shortcut this by using a little JavaScript, binding the onMessage attribute to it to update the output field:

<script type="text/javascript">
    function handleMessage(data) {
        $('.display').html(data);
    }
</script>
<p:socket onMessage="handleMessage" channel="/counter" />

That is all for now. Congratulations on your first PrimeFaces Push example. Happy coding and don't forget to share!

Reference: PrimeFaces Push with Atmosphere on GlassFish 3.1.2.2 from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog....

Agile development articles on Java Code Geeks

Agile denotes the quality of being agile: ready for motion, nimble. Agile development methods attempt to offer an answer to an eager business community asking for lighter-weight, faster, and nimbler software development processes. This is especially the case in the rapidly growing and volatile Internet software industry, as well as in the emerging mobile application environment. Disciplined agile software development takes an iterative and evolutionary approach, performed in a highly collaborative manner by self-organizing teams within an effective governance framework, with "just enough" ceremony, producing high quality solutions in a cost effective and timely manner that meets the changing needs of its stakeholders. These methods have evoked a substantial amount of literature and debate. However, academic research is still scarce, as most existing publications are written by practitioners or consultants. With this article, Java Code Geeks are trying to put together their available resources concerning Agile Software Development. So here is the list of all Agile Development articles for your reference:

Save money from Agile Development
Agile software development recommendations for users and new adopters
Breaking Down an Agile process Backlog
The Ten Minute Build
Standups – take them or leave them
How extreme is extreme programming?
Even Backlogs Need Grooming
You can't be Agile in Maintenance? (Part 1)
You can't be Agile in Maintenance? (Part 2)
Understanding the Vertical Slice
Iterationless Development – the latest New New Thing
How to start a Coding Dojo
The Architecture Spike Kata
4 Warning Signs that Agile Is Declining
Agile Before there was Agile: Egoless Programming and Step-by-Step
Playing around with pomodoros
Agile's Customer Problem
That's Not Agile!
Agile Lifecycles for Geographically Distributed Teams
Software Engineering needs leaders, not ScrumMasters!
Infrastructure, Technical Debt, and Automated Test Framework
You don't need Testers – Or do you?
Why Does Management Care About Velocity?
Programs and Technical Debt
The pursuit of protection: How much testing is "enough"?
Why an Agile Project Manager is Not a Scrum Master
Are Agile plans Better because they are Feature-Based?
Becoming a Leading Manager
Product-Burndown-Charts and Sprint-Burndown-Charts in SCRUM Projects
Where do Security Requirements come from?
Measuring your IT OPS – Part 1
Measuring your IT OPS – Part 2
Agile Estimating: Story Points and Decay
Sooner or Later: Deliver Early or Minimize Waste
An agile methodology for orthodox environments
Architects Need a Pragmatic Software Development Process
In Agile development planning, a security framework loses out
Hours, Velocity, Silo'd Teams, & Gantts
Looking For Leaders In All The Wrong Places
What Scrum, Kanban, RUP, and ITIL All Have In Common (which causes them to fail)
Throughput Planning – Why Project Managers Should Like Lean and Agile
Five Step Illustrated Guide to Setup a Kanban System in an Enterprise Organization
The Demise of IT Business Analysts
Build documentation to last – choose the agile way
Client Reviews: From Waterfall to Agile
What can you get out of Kanban?
Does the PMI-ACP set the bar high enough on Risk Management?
A Prototype is Worth a Thousand Lines of Code
Contracting in Agile – You try it

Enjoy! And don't forget to share and spread the word!...

Changing delay, and hence the order, in a DelayQueue

So I was looking at building a simple object cache that expires the objects after a given time. The obvious mechanism for this is to use the DelayQueue class from the concurrency package in Java; but I wanted to know if it was possible to update the delay after an object has been added to the queue. Looking at the Delayed interface, there didn't seem to be a good reason not to in the docs, so I thought it was time to experiment. First of all you need to create an instance of Delayed; this is a very simple implementation where, with the switch of a flag, you can basically invert the timeout order in the list (adding a suitable offset so things happen in the right order).

static int COUNT = 100;

class DelayedSwap implements Delayed, Comparable<Delayed> {

    int index = 0;
    volatile boolean swap = false;
    long starttime;

    public DelayedSwap(int index, long starttime) {
        super();
        this.index = index;
        this.starttime = starttime;
    }

    private long getDelay() {
        return (swap ? starttime + (2 * COUNT - index) * 100
                     : starttime + index * 100) - System.currentTimeMillis();
    }

    public String toString() {
        return index + " swapped " + swap + " delay " + getDelay();
    }

    @Override
    public long getDelay(TimeUnit unit) {
        return unit.convert(getDelay(), TimeUnit.MILLISECONDS);
    }

    @Override
    public int compareTo(Delayed delayed) {
        if (delayed == this)
            return 0;
        return (int) (getDelay(TimeUnit.MILLISECONDS) - delayed.getDelay(TimeUnit.MILLISECONDS));
    }
}

So to test this I created a method that would create a bunch of the DelayedSwap objects and, half way through processing the list, switch the flag, so altering the order of expiration.
public static void main(String[] args) throws InterruptedException {

    long start = System.currentTimeMillis();
    final List<DelayedSwap> delayed = new ArrayList<DelayedSwap>();
    for (int i = 1; i < COUNT; i++) {
        delayed.add(new DelayedSwap(i, start));
    }

    final DelayQueue<DelayedSwap> dq = new DelayQueue<DelayedSwap>();
    dq.addAll(delayed);

    new Thread(new Runnable() {

        @Override
        public void run() {
            try {
                TimeUnit.SECONDS.sleep(5);
            } catch (InterruptedException e) {
            }
            for (DelayedSwap d : delayed) {
                d.swap = true;
            }
        }
    }).start();

    while (!dq.isEmpty()) {
        System.out.println(dq.take());
    }
}

So what I was expecting was the elements 1-50 or so written out in the correct order, but instead, after the swap over, the elements come out in an arbitrary order quite far away from the requested delay time.

1 swapped false delay -19 2 swapped false delay -4 3 swapped false delay -4 4 swapped false delay -4 5 swapped false delay -4 6 swapped false delay -4 7 swapped false delay -4 8 swapped false delay -4 9 swapped false delay -4 10 swapped false delay -4 11 swapped false delay -4 12 swapped false delay -4 13 swapped false delay -4 14 swapped false delay -4 15 swapped false delay -4 16 swapped false delay -4 17 swapped false delay -4 18 swapped false delay -4 19 swapped false delay -4 20 swapped false delay -4 21 swapped false delay -4 22 swapped false delay -4 23 swapped false delay -4 24 swapped false delay -4 25 swapped false delay -4 26 swapped false delay -4 27 swapped false delay -4 28 swapped false delay -4 29 swapped false delay -4 30 swapped false delay -4 31 swapped false delay -4 32 swapped false delay -4 33 swapped false delay -4 34 swapped false delay -4 35 swapped false delay -4 36 swapped false delay -4 37 swapped false delay -4 38 swapped false delay -4 39 swapped false delay -5 40 swapped false delay -4 41 swapped false delay -4 42 swapped false delay -5 43 swapped false delay -4 44 swapped false delay -5 45 swapped false delay -5 46 swapped false delay -5 47 swapped false delay -5 48 swapped false delay -5 49 swapped false
delay -5 50 swapped false delay -5 51 swapped true delay -6 94 swapped true delay -4306 96 swapped true delay -4506 87 swapped true delay -3606 91 swapped true delay -4006 97 swapped true delay -4606 95 swapped true delay -4406 98 swapped true delay -4706 92 swapped true delay -4106 82 swapped true delay -3106 80 swapped true delay -2906 90 swapped true delay -3906 93 swapped true delay -4206 74 swapped true delay -2306 99 swapped true delay -4806 70 swapped true delay -1906 69 swapped true delay -1806 66 swapped true delay -1506 83 swapped true delay -3206 62 swapped true delay -1107 61 swapped true delay -1007 58 swapped true delay -707 71 swapped true delay -2007 89 swapped true delay -3807 85 swapped true delay -3407 78 swapped true delay -2707 86 swapped true delay -3507 81 swapped true delay -3007 88 swapped true delay -3707 84 swapped true delay -3307 79 swapped true delay -2807 76 swapped true delay -2507 72 swapped true delay -2107 68 swapped true delay -1707 65 swapped true delay -1407 60 swapped true delay -907 57 swapped true delay -608 55 swapped true delay -408 75 swapped true delay -2408 77 swapped true delay -2608 73 swapped true delay -2208 63 swapped true delay -1208 67 swapped true delay -1608 64 swapped true delay -1308 59 swapped true delay -808 56 swapped true delay -508 54 swapped true delay -308 53 swapped true delay -208 52 swapped true delay -108 Process exited with exit code 0. So the trick is when you know you are going to modify the delay is to remove and then re-add the element to the queue. 
// Replacement swap loop
for (DelayedSwap d : delayed) {
    if (dq.remove(d)) {
        d.swap = true;
        dq.add(d);
    }
}

This run produces a more sensible set of results:

1 swapped false delay -4 2 swapped false delay -8 3 swapped false delay -14 4 swapped false delay -8 5 swapped false delay -4 6 swapped false delay -4 7 swapped false delay -4 8 swapped false delay -4 9 swapped false delay -4 10 swapped false delay -4 11 swapped false delay -4 12 swapped false delay -4 13 swapped false delay -4 14 swapped false delay -4 15 swapped false delay -4 16 swapped false delay -4 17 swapped false delay -4 18 swapped false delay -8 19 swapped false delay -4 20 swapped false delay -4 21 swapped false delay -4 22 swapped false delay -4 23 swapped false delay -4 24 swapped false delay -4 25 swapped false delay -4 26 swapped false delay -4 27 swapped false delay -4 28 swapped false delay -4 29 swapped false delay -4 30 swapped false delay -4 31 swapped false delay -4 32 swapped false delay -4 33 swapped false delay -4 34 swapped false delay -4 35 swapped false delay -4 36 swapped false delay -4 37 swapped false delay -4 38 swapped false delay -4 39 swapped false delay -5 40 swapped false delay -5 41 swapped false delay -5 42 swapped false delay -4 43 swapped false delay -4 44 swapped false delay -5 45 swapped false delay -5 46 swapped false delay -5 47 swapped false delay -5 48 swapped false delay -5 49 swapped false delay -5 50 swapped false delay -5 99 swapped true delay -5 98 swapped true delay -5 97 swapped true delay -11 96 swapped true delay -1 95 swapped true delay -5 94 swapped true delay -9 93 swapped true delay -5 92 swapped true delay -5 91 swapped true delay -5 90 swapped true delay -5 89 swapped true delay -5 88 swapped true delay -5 87 swapped true delay -5 86 swapped true delay -5 85 swapped true delay -5 84 swapped true delay -5 83 swapped true delay -5 82 swapped true delay -5 81 swapped true delay -5 80 swapped true delay -5 79 swapped true delay -5 78 swapped true delay -5
77 swapped true delay -5 76 swapped true delay -5 75 swapped true delay -5 74 swapped true delay -5 73 swapped true delay -5 72 swapped true delay -6 71 swapped true delay -5 70 swapped true delay -5 69 swapped true delay -5 68 swapped true delay -5 67 swapped true delay -5 66 swapped true delay -5 65 swapped true delay -5 64 swapped true delay -5 63 swapped true delay -6 62 swapped true delay -5 61 swapped true delay -6 60 swapped true delay -6 59 swapped true delay -6 58 swapped true delay -6 57 swapped true delay -6 56 swapped true delay -6 55 swapped true delay -6 54 swapped true delay -6 53 swapped true delay -6 52 swapped true delay -6 51 swapped true delay -6 Process exited with exit code 0. I don’t think this is a bug in the object itself, as you wouldn’t expect a Hashtable to order itself when a key changes, but I was a little bit surprised by the behaviour. Happy coding and don’t forget to share! Reference: Changing delay, and hence the order, in a DelayQueue from our JCG partner Gerard Davison at the Gerard Davison’s blog blog....
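The behaviour is easy to reproduce in isolation. Below is a minimal, self-contained sketch (the Task class, ids, and timings are invented for illustration, not the article's DelayedSwap): mutating the delay in place does not re-sort the queue, but removing and re-adding the element, as in the swap loop above, does.

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class Main {
    // A mutable Delayed element. DelayQueue sorts only on insertion, so
    // changing `expiresAt` after add() does not move the element.
    static class Task implements Delayed {
        final int id;
        long expiresAt; // absolute expiry time in ms

        Task(int id, long delayMs) {
            this.id = id;
            this.expiresAt = System.currentTimeMillis() + delayMs;
        }

        public long getDelay(TimeUnit unit) {
            return unit.convert(expiresAt - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
        }

        public int compareTo(Delayed o) {
            return Long.compare(getDelay(TimeUnit.MILLISECONDS), o.getDelay(TimeUnit.MILLISECONDS));
        }
    }

    static int headAfterReorder() {
        try {
            DelayQueue<Task> dq = new DelayQueue<Task>();
            Task a = new Task(1, 50);
            Task b = new Task(2, 500);
            dq.add(a);
            dq.add(b);
            // Mutating in place would leave `a` at the head despite b's
            // shorter delay; remove and re-add so the queue re-sorts.
            b.expiresAt = System.currentTimeMillis() + 1;
            dq.remove(b);
            dq.add(b);
            return dq.take().id; // b (id 2) now comes out first
        } catch (InterruptedException e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        System.out.println("first out: " + headAfterReorder());
    }
}
```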

Signal-to-noise ratio in your code

You write code to deliver business value, hence your code deals with a business domain like e-trading in finance, or the navigation for an online shoe store. If you look at a random piece of your code, how much of what you see tells you about the domain concepts? How much of it is nothing but technical distraction, or « noise »? Like the snow on TV I remember that TV reception used to be quite unreliable long ago, and you’d see a lot of « snow » on top of the movie you were watching. As in the picture below, this snow is actually noise that interferes with the interesting signal.
TV signal hidden behind snow-like noise
The amount of noise compared to the signal can be measured with the signal-to-noise ratio. Quoting the definition from Wikipedia: Signal-to-noise ratio (often abbreviated SNR or S/N) is a measure used in science and engineering that compares the level of a desired signal to the level of background noise. It is defined as the ratio of signal power to noise power. A ratio higher than 1:1 indicates more signal than noise. We can apply this concept of signal-to-noise ratio to code, and we must try to maximize it, just like in electrical engineering.
Every identifier matters
Look at each identifier in your code: package names, class and interface names, method names, field names, parameter names, even local variable names. Which of them are meaningful in the domain, and which of them are purely technicalities? Some examples of class and interface names from a recent project (changed a bit to protect the innocent) illustrate that. Identifiers like « CashFlow » or « CashFlowSequence » belong to the Ubiquitous Language of the domain, hence they are the signal in the code.
Examples of class names as signals, or as noise
On the other hand, identifiers like « CashFlowBuilder » do not belong to the ubiquitous language and therefore are noise in the code.
Just counting the number of « signal » identifiers over the number of « noise » identifiers can give you an estimate of your signal-to-noise ratio. To be honest, I’ve never really counted to that level so far. However, for years I’ve been trying to maximize the signal-to-noise ratio in code, and I can demonstrate that it is totally possible to write code with a very high proportion of signal (domain words) and very little noise (technical necessities). As usual, it is just a matter of personal discipline. Logging to a logging framework, catching exceptions, a lookup from JNDI and even @Inject annotations are noise in my opinion. Sometimes you have to live with this noise, but every time I can live without it, I definitely choose to.
For the domain model in particular
All this discussion mostly focuses on the domain model, where you’re supposed to manage everything related to your domain. This is where the idea of a signal-to-noise ratio makes the most sense.
A metric?
It’s probably possible to create a metric for the signal-to-noise ratio, by parsing the code and comparing it to the ubiquitous language « dictionary » declared in some form. However, and as usual, the primary interest of this idea is to keep it in mind while coding and refactoring, as a direction for action, just like test coverage. I introduced the idea of signal-to-noise ratio in my talk at DDDx 2012, you can watch the video here. Follow me (@cyriux) on Twitter! Reference: What’s your signal-to-noise ratio in your code? from our JCG partner Cyrille Martraire at the Cyrille Martraire’s blog blog....
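As a rough illustration of the metric idea, here is a hypothetical sketch (the dictionary and class names are invented examples) that counts identifiers found in a declared ubiquitous-language dictionary as signal and everything else as noise:

```java
import java.util.List;
import java.util.Set;

public class Main {
    // Hypothetical ubiquitous-language "dictionary"; only exact matches
    // count as signal, so CashFlowBuilder below still counts as noise.
    static final Set<String> UBIQUITOUS_LANGUAGE = Set.of("CashFlow", "CashFlowSequence");

    static double signalToNoise(List<String> identifiers) {
        long signal = identifiers.stream().filter(UBIQUITOUS_LANGUAGE::contains).count();
        long noise = identifiers.size() - signal;
        return noise == 0 ? Double.POSITIVE_INFINITY : (double) signal / noise;
    }

    public static void main(String[] args) {
        List<String> classNames = List.of(
            "CashFlow", "CashFlowSequence",        // signal: domain concepts
            "CashFlowBuilder", "JndiLookupHelper"  // noise: technicalities
        );
        System.out.println("signal-to-noise = " + signalToNoise(classNames));
    }
}
```

A real implementation would parse identifiers from source files, but the counting itself is this simple.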

Benchmarking JMS layer with JMSTester

For most of the clients I’ve been to, scaling out a JMS messaging layer with ActiveMQ is a priority. There are a couple of ways to achieve this, but without a doubt, creating benchmarks and analyzing an architecture on real hardware (or as my colleague Gary Tully says, “asking the machine”) is step one. But what open source options do you have for creating a set of comprehensive benchmarks? If you have experience with some good ones, please let me know in the comments. The projects that I could think of: Apache JMeter, the ActiveMQ perf plugin, FuseSource JMSTester, and Hiram Chirino’s jms-benchmark. While chatting with Gary about setting up test scenarios for ActiveMQ, he recalled there was a very interesting project, appearing dead, sitting in the FuseSource Forge repo, named JMSTester. He suggested I take a look at it. I did, and I was impressed by its current capabilities. It was created by a former FuseSource consultant, Andres Gies, through many iterations with clients, flights, and free-time hacking. I have since taken it over, and I will be adding features, tests, and docs, and continuing the momentum it once had. But even before I can get my creative hands in there, I want to share with you the power it has at the moment.
Purpose
The purpose of this blog entry is to give a tutorial-like introduction to the JMSTester tool. The purpose of the tool is to provide a powerful benchmarking framework for creating flexible, distributed JMS tests while monitoring/recording stats that are critical to have on hand before tweaking and tuning your JMS layer. Some of the docs from the JMSTester homepage are slightly out of date, but the steps that describe some of the benchmarks are still accurate. This tutorial requires that you download the SNAPSHOT I’ve been working on, which can be found here: jmstester-1.1-20120904.213157-5-bin.tar.gz. I will be deploying the next version of the website soon, which should have more updated versions of the binaries.
When I do that, I’ll update this post.
Meet the JMSTester tool
The JMSTester tool is simply a tool that sends and receives JMS messages. You use profiles defined in Spring context config files to specify what sort of load you want to throw at your message broker. JMSTester allows you to define the number of producers you wish to use, the number of consumers, the connection factories, JMS properties (transactions, session acks, etc.). But the really cool part is that you can run the benchmarks distributed over many machines. This means you can set up machines to act specifically as producers and different ones to act as consumers. As far as monitoring and collecting the stats for benchmarking, JMSTester captures information in three different categories: Basic (message counts per consumer, message size), JMX (monitor any JMX properties on the broker as the tests run, including number of threads, queue size, enqueue time, etc.), and Machine (CPU, system memory, swap, file system metrics, network interface, route/connection tables, etc.). The Hyperic SIGAR library is used to capture the machine-level stats (group 3), and the RRD4J library is used to log the stats and output graphs. At the moment the graphs are pretty basic and I hope to improve upon them, but the raw data is always dumped to a csv file and you can use your favorite spreadsheet software to create your own graphs.
Architecture
The JMSTester tool is made up of the following concepts: Controller, Clients, Recorder, Frontend, and Benchmark Configuration.
Controller
The controller is the organizer for the benchmark. It keeps track of who’s interested in benchmark commands, starts the tests, keeps track of the number of consumers, the number of producers, etc. The benchmark cannot run without a controller.
For those of you interested, the underlying architecture of the JMSTester tool relies on messaging, and ActiveMQ is the broker that the controller starts up for the rest of the architecture to work.
Clients
Clients are containers that take commands and can emulate the role of Producer, Consumer, both, or neither (this will make sense further down). You can have as many clients as you want. You give them unique names and use those names within your benchmark configuration files. The clients can run anywhere, including on separate machines or all on one machine.
Recorder
The clients individually record stats and send the data over to the recorder. The recorder ends up organizing the stats and assembling the graphs, RRD4J databases, and benchmark csv files.
Frontend
The frontend is what sends commands to the controller. Right now there is only a command-line frontend, but my intentions include a web-based frontend with a REST-based controller that can be used to run the benchmarks.
Benchmark Configuration
The configuration files are Spring context files that specify beans which instruct the controller and clients how to run the benchmark. In these config files you can also specify what metrics to capture and what kind of message load to send to the JMS broker. Going forward I aim to improve these config files, including adding custom namespace support to make the config less verbose.
Let’s Go!
The JMSTester website has a couple of good introductory tutorials: Simple (http://jmstester.fusesource.org/documentation/manual/TutorialSimple.html), JMX Probes (http://jmstester.fusesource.org/documentation/manual/TutorialProbes.html), and Distributed (http://jmstester.fusesource.org/documentation/manual/TutorialDistributed.html). They are mostly up to date, but I’ll continue to update them as I find errors. The only caveat about the distributed tutorial is that it doesn’t actually set up a distributed example. It separates out the clients, but only on the same localhost machine.
There are just a couple of other parameters that need to be set to distribute it, which we’ll cover here. The architecture for the tutorial will be the following. Let’s understand the diagram really quickly. The JMS Host will have two processes running: the ActiveMQ broker we’ll be testing, and a JMSTester client container named Monitor. The container will be neither a producer nor a consumer; instead, it will be used only to monitor machine and JMX statistics. The statistics will be sent back to the recorder on the Controller Host, as described in the Recorder section above. The Producer and Consumer containers will run on separate machines named, respectively, Producer and Consumer. Lastly, the Controller Host machine will host the Controller and Recorder components of the distributed test.
Initial Setup
Download and extract the JMSTester binaries on each machine that will be participating in the benchmark.
Starting the Controller and Recorder containers
On the machine that will host the controller, navigate to the $JMSTESTER_HOME dir and type the following command to start the controller and the recorder: ./bin/runBenchmark -controller -recorder -springConfigLocations conf/testScripts
Note that everything must be typed exactly as it is above, including no trailing spaces after ‘conf/testScripts’. This is a particularity that I will alleviate as part of my future enhancements. Once you’ve started the controller and recorder, you should be ready to start up the rest of the clients. The controller starts up an embedded broker that the clients will end up connecting to.
Starting the Producer container
On the machine that will host the producer, navigate to the $JMSTESTER_HOME dir and type the following command: ./bin/runBenchmark -clientNames Producer -hostname domU-12-31-39-16-41-05.compute-1.internal
For the -hostname parameter, you must specify the host name where you started the controller.
I’m using Amazon EC2 above, and if you’re doing the same, prefer to use the internal DNS name for the hosts.
Starting the Consumer container
For the consumer container, you’ll be doing the same thing you did for the producer, except giving it a client name of Consumer: ./bin/runBenchmark -clientNames Consumer -hostname domU-12-31-39-16-41-05.compute-1.internal
Again, the -hostname parameter should reflect the host on which you’re running the controller.
Setting up ActiveMQ and the Monitor on the JMS Host
Setting up ActiveMQ is beyond the scope of this article, but you will need to enable JMX on the broker. Just follow the instructions found on the Apache ActiveMQ website. This next part is necessary to allow the machine-level probes/monitoring. You’ll need to install the SIGAR libs. They are not distributed with JMSTester because of their license, and their JNI libs are not available in Maven. Basically, all you need to do is download and extract the [SIGAR distro from here][sigar-distro] and copy all of the libs from the $SIGAR_HOME/sigar-bin/lib folder into your $JMSTESTER_HOME/lib folder. Now start the Monitor container with a command similar to the one for the producer and consumer: ./bin/runBenchmark -clientNames Monitor -hostname domU-12-31-39-16-41-05.compute-1.internal
Submitting the tutorial testcase
We can submit the testcase from any computer. I’ve chosen to do it from my local machine. You’ll notice the machine from which you submit the testcase isn’t reflected in the diagram from above; this is simply because we can do it from any machine. Just like for the other commands, however, you’ll still need the JMSTester binaries. Before we run the test, let’s take a quick look at the Spring config file that specifies the test. To do so, open up $JMSTESTER_HOME/conf/testScripts/tutorial/benchmark.xml in your favorite text editor, preferably one that color-codes XML documents so it’s easier to read.
The benchmark file is annotated with a lot of comments that describe the individual sections clearly. If something is not clear, please ping me so I can provide more details. There are a couple of places in the config where you’ll want to specify your own values to make this a successful test. Unfortunately, this is a manual process at the moment, but I plan to fix that up. Take a look at where the JMS broker connection factories are created. In this case, that would be where the ActiveMQ connection factories are created (lines 120 and 124). The URL that goes there is the URL for the ActiveMQ broker you started in one of the previous sections. As distributed, there is an EC2 host URL in there. You must specify your own host. Again, if you use EC2, prefer the internal DNS names. Then, take a look at line 169 where the AMQDestinationProbe is specified. This probe is a JMX probe specific to ActiveMQ. You must change the brokerName property to match whatever you named your broker when you started it (usually found in the <broker brokerName='name here'> section of your broker config). Finally, from the $JMSTESTER_HOME dir, run the following command: ./bin/runCommand -command submit:conf/testScripts/tutorial -hostname ec2-107-21-69-197.compute-1.amazonaws.com
Again, note that I’m setting the -hostname parameter to the host that the controller is running on. In this case we’ll prefer the public DNS of EC2, but it would be whatever you have in your environment.
Output
There you have it. You’ve submitted the testcase to the benchmark framework. You should see some activity on each one of the clients (producer, consumer, monitor) as well as on the controller.
If your test has run correctly and all of the raw data and graphs have been produced, you should see logging output similar to: Written probe Values to : /home/ec2-user/dev/jmstester-1.1-SNAPSHOT/tutorialBenchmark/benchmark.csv
Note that all of the results are written to tutorialBenchmark, which is the name of the test as defined by the benchmarkId in the Spring config file on line 18: <property name='benchmarkId' value='tutorialBenchmark'/>
If you take a look at the benchmark.csv file, you’ll see all of the stats that were collected. The stats collected for this tutorial include the following: message count, message size, JMX QueueSize, JMX ThreadCount, SIGAR CpuMonitor, SIGAR Free System Memory, SIGAR Total System Memory, SIGAR Free Swap, SIGAR Total Swap, SIGAR Swap Page In, SIGAR Swap Page Out, SIGAR Disk Reads (in bytes), SIGAR Disk Writes (in bytes), SIGAR Disk Reads, SIGAR Disk Writes, SIGAR Network RX BYTES, SIGAR Network RX PACKETS, SIGAR Network TX BYTES, SIGAR Network RX DROPPED, SIGAR Network TX DROPPED, SIGAR Network RX ERRORS, SIGAR Network TX ERRORS.
That’s it
I highly recommend taking a look at this project. I have taken it over and will be improving it as time permits, but I would very much value any thoughts or suggestions about how to improve it or what use cases to support. Take a look at the documentation already there, and I will be adding more as we go. If you have questions, or something didn’t work properly as described above, please shoot me a comment or email, or find me in the Apache IRC channels… I’m usually in at least #activemq and #camel. Happy coding and don’t forget to share! Reference: Benchmarking your JMS layer with an open source JMSTester tool from FuseSource from our JCG partner Christian Posta at the Christian Posta Software blog....
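Since the raw data always ends up in a csv file, post-processing it yourself is straightforward. As a hypothetical sketch (the two-column timestamp,messageCount layout below is invented for illustration, not JMSTester's actual output format), averaging one column in plain Java looks like this:

```java
public class Main {
    // Hypothetical sketch: average the message-count column of a benchmark
    // CSV. The "timestamp,messageCount" layout is an assumption, not the
    // real benchmark.csv format.
    static double averageMessageCount(String csv) {
        double sum = 0;
        int rows = 0;
        for (String line : csv.split("\n")) {
            String[] cols = line.split(",");
            sum += Double.parseDouble(cols[1].trim());
            rows++;
        }
        return rows == 0 ? 0 : sum / rows;
    }

    public static void main(String[] args) {
        String sample = "1,100\n2,140\n3,120"; // invented sample rows
        System.out.println("avg messages per sample: " + averageMessageCount(sample));
    }
}
```

The same loop, fed from a FileReader over benchmark.csv, gives you quick numbers without opening a spreadsheet.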

Wire object dependencies outside a Spring Container

There are a few interesting ways of setting the properties and dependencies of an object instantiated outside of a Spring container.
Use Cases
To start with, why would we need to inject dependencies outside of a Spring container? I am aware of three use cases where I have instantiated objects outside of the Spring container and needed to inject in dependencies. Consider first the case of a series of tasks executed using a Spring TaskExecutor; the tasks highlighted below are instantiated outside of a Spring container:

List<Callable<ReportPart>> tasks = new ArrayList<Callable<ReportPart>>();
List<ReportRequestPart> reportRequestParts = reportRequest.getRequestParts();
for (ReportRequestPart reportRequestPart : reportRequestParts) {
    tasks.add(new ReportPartRequestCallable(reportRequestPart, reportPartGenerator));
}

List<Future<ReportPart>> responseForReportPartList;
List<ReportPart> reportParts = new ArrayList<ReportPart>();
try {
    responseForReportPartList = executors.invokeAll(tasks);
    for (Future<ReportPart> reportPartFuture : responseForReportPartList) {
        reportParts.add(reportPartFuture.get());
    }
} catch (Exception e) {
    logger.error(e.getMessage(), e);
    throw new RuntimeException(e);
}

public class ReportPartRequestCallable implements Callable<ReportPart> {
    private final ReportRequestPart reportRequestPart;
    private final ReportPartGenerator reportPartGenerator;

    public ReportPartRequestCallable(ReportRequestPart reportRequestPart, ReportPartGenerator reportPartGenerator) {
        this.reportRequestPart = reportRequestPart;
        this.reportPartGenerator = reportPartGenerator;
    }

    @Override
    public ReportPart call() {
        return this.reportPartGenerator.generateReportPart(reportRequestPart);
    }
}

The second use case is with the ActiveRecord pattern, say with the samples that come with Spring Roo. Consider the following method, where a Pet class needs to persist itself and needs an entity manager to do this:

@Transactional
public void Pet.persist() {
    if (this.entityManager == null)
        this.entityManager = entityManager();
    this.entityManager.persist(this);
}

The third use case is for a tag library, which is instantiated by the web container but needs some dependencies from Spring.
Solutions
1. The first approach is actually simple: provide the dependencies at the point of object instantiation, through constructors or setters. This is what I have used with the first use case, where the task has two dependencies which are provided by the service instantiating the task:

tasks.add(new ReportPartRequestCallable(reportRequestPart, reportPartGenerator));

2. The second approach is to create a factory that is aware of the Spring container, declaring the required beans with prototype scope within the container and getting the beans via the getBean method of the application context. Declaring the bean as a prototype-scoped bean:

<bean name='reportPartRequestCallable' class='org.bk.sisample.taskexecutor.ReportPartRequestCallable' scope='prototype'>
    <property name='reportPartGenerator' ref='reportPartGenerator'></property>
</bean>
<bean name='reportPartRequestCallableFactory' class='org.bk.sisample.taskexecutor.ReportPartRequestCallableFactory'/>

and the factory serving out the bean:

public class ReportPartRequestCallableFactory implements ApplicationContextAware {
    private ApplicationContext applicationContext;

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        this.applicationContext = applicationContext;
    }

    public ReportPartRequestCallable getReportPartRequestCallable() {
        return this.applicationContext.getBean("reportPartRequestCallable", ReportPartRequestCallable.class);
    }
}

3.
The third approach is a variation of the above: instantiate the bean and then inject dependencies using AutowireCapableBeanFactory.autowireBean(instance), this way:

public class ReportPartRequestCallableFactory implements ApplicationContextAware {
    private GenericApplicationContext applicationContext;

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        this.applicationContext = (GenericApplicationContext) applicationContext;
    }

    public ReportPartRequestCallable getReportPartRequestCallable() {
        ReportPartRequestCallable reportPartRequestCallable = new ReportPartRequestCallable();
        applicationContext.getBeanFactory().autowireBean(reportPartRequestCallable);
        return reportPartRequestCallable;
    }
}

4. The fourth approach is using @Configurable; the catch, though, is that it requires AspectJ to work. Spring essentially enhances the constructor of the class to inject in the dependencies, along the lines of what is being explicitly done in the third approach above:

import org.springframework.beans.factory.annotation.Configurable;

@Configurable("reportPartRequestCallable")
public class ReportPartRequestCallable implements Callable<ReportPart> {
    private ReportRequestPart reportRequestPart;
    @Autowired
    private ReportPartGenerator reportPartGenerator;

    public ReportPartRequestCallable() {
    }

    @Override
    public ReportPart call() {
        return this.reportPartGenerator.generateReportPart(reportRequestPart);
    }

    public void setReportRequestPart(ReportRequestPart reportRequestPart) {
        this.reportRequestPart = reportRequestPart;
    }

    public void setReportPartGenerator(ReportPartGenerator reportPartGenerator) {
        this.reportPartGenerator = reportPartGenerator;
    }
}

The following is also required, to configure the Aspect responsible for @Configurable weaving:

<context:spring-configured/>

With these changes in place, any dependency of a class annotated with @Configurable is handled by Spring, even if the construction is done completely outside of the container:

@Override
public Report generateReport(ReportRequest reportRequest) {
    List<Callable<ReportPart>> tasks = new ArrayList<Callable<ReportPart>>();
    List<ReportRequestPart> reportRequestParts = reportRequest.getRequestParts();
    for (ReportRequestPart reportRequestPart : reportRequestParts) {
        ReportPartRequestCallable reportPartRequestCallable = new ReportPartRequestCallable();
        reportPartRequestCallable.setReportRequestPart(reportRequestPart);
        tasks.add(reportPartRequestCallable);
    }
    .......

Conclusion
All of the above approaches are effective for injecting dependencies into objects instantiated outside of a container. I personally prefer Approach 4 (using @Configurable) in cases where AspectJ support is available; otherwise I would go with Approach 2 (hiding behind a factory and using a prototype bean). Happy coding and don’t forget to share! Reference: Ways to wire dependencies for an object outside of a Spring Container from our JCG partner Biju Kunjummen at the all and sundry blog....

Jenkins: Deploying JEE Artifacts

With the advent of Continuous Integration and Continuous Delivery, our builds are split into different steps creating the deployment pipeline. Some of these steps are, for example, compiling and running fast tests, running slow tests, running automated acceptance tests, or releasing the application, to cite a few. The final steps of our deployment pipeline imply deploying our product (in the case of a JEE project, a war or ear) to a production-like environment for UAT, or to the production system when the product is released. In this post we are going to see how we can configure Jenkins to manage the deployment of a Java Enterprise Application correctly. The first thing to do is create the application, in this case a very simple web application in Java (in fact it is only one JSP which prints a Hello World!! message), and mavenize it to create a war file (bar.war) when the package goal is executed. Then we need to create a Jenkins job (called bar-web) which is responsible for compiling and running unit tests. After this job would come other jobs like running integration tests, running more tests, static code analysis (aka code quality), or uploading artifacts to an artifact repository, but these won’t be shown here. And finally come the last steps, which imply deploying the previously generated code to the staging environment (for running User Acceptance Tests, for example) and, after key users give the ok, deploying to the production environment. So let’s see how to create these final steps in Jenkins. Note that the binary file created in previous steps (bar-web in our case) must be used in all these steps. This is for two reasons: the first one is that your deployment pipeline should run as fast as possible, and obviously compiling the code in each step is not the best way to get that; the second one is that each time you compile your sources, you increase the chance of not building the same sources as in previous steps.
To achieve this goal we can follow two strategies. The first one is uploading binary files to an artifact repository (like Nexus or Artifactory) and fetching them from there in each job. The second one is using the copy-artifacts Jenkins plugin to get the binary files generated by the previous step. Let’s see how to configure Jenkins for the first approach. Using the artifact repository approach requires that you download the version you want to deploy from the repository and then deploy it to the external environment; in our case, deploying to a web server. All these steps are done by using the maven-cargo-plugin.

<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.cargo</groupId>
      <artifactId>cargo-maven2-plugin</artifactId>
      <version>1.0</version>
      <!-- Container configuration -->
      <container>
        <containerId>tomcat6x</containerId>
        <type>remote</type>
      </container>
      <configuration>
        <type>runtime</type>
        <properties>
          <cargo.remote.username>admin</cargo.remote.username>
          <cargo.remote.password></cargo.remote.password>
          <cargo.tomcat.manager.url>http://localhost:8888/manager</cargo.tomcat.manager.url>
        </properties>
      </configuration>
      <deployer>
        <deployables>
          <deployable>
            <groupId>com.lordofthejars.bar</groupId>
            <artifactId>bar-web</artifactId>
            <type>war</type>
          </deployable>
        </deployables>
      </deployer>
    </plugin>
  </plugins>
</build>
<dependencies>
  <dependency>
    <groupId>com.lordofthejars.bar</groupId>
    <artifactId>bar-web</artifactId>
    <type>war</type>
    <version>${target.version}</version>
  </dependency>
</dependencies>

Then we only have to create a new Jenkins job, named bar-to-staging, which will run the cargo:redeploy Maven goal, and the Cargo plugin will be responsible for deploying bar-web to the web server. This approach has one advantage and one disadvantage. The main advantage is that you are not bound to Jenkins: you can use Maven alone, or any other CI that supports Maven.
The main disadvantage is that it relies on the artefact repository, and this poses a new problem: the deployment pipeline involves many steps, and between these steps (normally if you are building a snapshot version), a new artefact with the same version could be uploaded to the artefact repository and picked up in the middle of a pipeline execution. Of course this scenario can be avoided by managing permissions in the artefact repository. The other approach is to use a Jenkins plugin called copy-artifact-plugin. In this case Jenkins acts as an artefact repository, so artifacts created in a previous step are used in the next step without involving any external repository. Using this approach we cannot use the maven-cargo-plugin, but we can use the Jenkins deploy plugin in conjunction with the copy-artifacts plugin. So let’s see how to implement this approach. The first thing is to create a Jenkins build job (bar-web) which creates the war file. Note that two Post-build actions are defined. The first one is Archive the artifacts, which is used to store generated files so the copy artifacts plugin can copy them to another workspace. The other one is Build other projects, which in this case calls a job that is responsible for deploying the war file to the staging environment (bar deploy-to-staging). The next thing is to create the bar deploy-to-staging build job, whose main action is deploying the war file generated by the previous build job to the Tomcat server. For this second build job, you should configure the copy artifacts plugin to copy the previously generated files to the current workspace: in the Build section, under Copy artifacts from another project, we set from which build job we want to copy the artifact (in our case bar-web) and which artifacts we want to copy. And finally, in the Post-build actions section, we must configure which file should be deployed to Tomcat (bar.war); remember that this file is the one compiled and packaged by the previous build jobs. Finally, set the Tomcat parameters.
And the execution pipeline looks something like this. Note that a third build job has been added, which deploys the war file to the production server. This second approach is the counterpart of the first approach: you can be sure that the artefact used in a previous step of the pipeline will be the one used in all steps, but you are bound to Jenkins/Hudson. So if you are going to create a policy in your artefact repository so that only the pipeline executor can upload artefacts to the repository, the first approach is better; but if you are not using an external artefact repository (you use Jenkins as is), then the second approach is the best one to assure that the artefact packaged in previous steps is not modified by parallel steps. After the file is deployed to the server, acceptance tests or UAT tests can be executed without any problem. I hope that now we can address the final steps of our deployment pipeline in a secure and better way. Reference: Deploying JEE Artifacts with Jenkins from our JCG partner Alex Soto at the One Jar To Rule Them All blog....
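One way to get some of the first approach's guarantee while using the second is to fingerprint the artefact when it is first archived and verify the checksum before each deployment. This is not part of the article's setup, just a hypothetical sketch of the idea:

```java
import java.security.MessageDigest;

public class Main {
    // Hypothetical helper: fingerprint an artifact so later pipeline steps
    // can verify they deploy exactly the bytes that earlier steps tested.
    static String sha256(byte[] artifactBytes) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest(artifactBytes)) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] war = "pretend this is bar.war".getBytes("UTF-8");
        String recorded = sha256(war);      // recorded when the artifact is archived
        String beforeDeploy = sha256(war);  // recomputed just before deployment
        if (!recorded.equals(beforeDeploy)) {
            throw new IllegalStateException("artifact changed between pipeline steps!");
        }
        System.out.println("checksum ok: " + recorded.substring(0, 12));
    }
}
```

In practice you would feed the war file's bytes to the digest and store the hex string alongside the archived artifact.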

Behavior-Driven Development (BDD) with JBehave, Gradle, and Jenkins

Behavior-Driven Development (BDD) is a collaborative process where the Product Owner, developers, and testers cooperate to deliver software that brings value to the business. BDD is the logical next step up from Test-Driven Development (TDD).
Behavior-Driven Development
In essence, BDD is a way to deliver requirements. But not just any requirements, executable ones! With BDD, you write scenarios in a format that can be run against the software to ascertain whether the software behaves as desired.
Scenarios
Scenarios are written in Given, When, Then format, also known as Gherkin:

Given the ATM has $250
And my balance is $200
When I withdraw $150
Then the ATM has $100
And my balance is $50

Given indicates the initial context, When indicates the occurrence of an interesting event, and Then asserts an expected outcome. And may be used in place of a repeating keyword, to make the scenario more readable. Given/When/Then is a very powerful idiom that allows virtually any requirement to be described. Scenarios in this format are also easily parsed, so that we can automatically run them. BDD scenarios are great for developers, since they provide quick and unequivocal feedback about whether the story is done. Not only the main success scenario, but also alternate and exception scenarios can be provided, as can abuse cases. The latter requires that the Product Owner collaborate not only with testers and developers, but also with security specialists. The payoff is that it becomes easier to manage security requirements. Even though BDD is really about the collaborative process and not about tools, I’m going to focus on tools for the remainder of this post. Please keep in mind that tools can never save you, while communication and collaboration can. With that caveat out of the way, let’s get started on implementing BDD with some open source tools.
JBehave
JBehave is a BDD tool for Java.
It parses the scenarios from story files, maps them to Java code, runs them via JUnit tests, and generates reports.

JUnit

Here's how we run our stories using JUnit:

@RunWith(AnnotatedEmbedderRunner.class)
@UsingEmbedder(embedder = Embedder.class, generateViewAfterStories = true,
    ignoreFailureInStories = true, ignoreFailureInView = false,
    verboseFailures = true)
@UsingSteps(instances = { NgisRestSteps.class })
public class StoriesTest extends JUnitStories {

  // Set with -Dbdd.stories=... to run a sub-set of the stories
  private final String storyPaths = System.getProperty("bdd.stories");

  @Override
  protected List<String> storyPaths() {
    return new StoryFinder().findPaths(
        CodeLocations.codeLocationFromClass(getClass()).getFile(),
        Arrays.asList(getStoryFilter(storyPaths)), null);
  }

  private String getStoryFilter(String storyPaths) {
    if (storyPaths == null) {
      return "*.story";
    }
    if (storyPaths.endsWith(".story")) {
      return storyPaths;
    }
    return storyPaths + ".story";
  }

  private List<String> specifiedStoryPaths(String storyPaths) {
    List<String> result = new ArrayList<String>();
    URI cwd = new File("src/test/resources").toURI();
    for (String storyPath : storyPaths.split(File.pathSeparator)) {
      File storyFile = new File(storyPath);
      if (!storyFile.exists()) {
        throw new IllegalArgumentException("Story file not found: " + storyPath);
      }
      result.add(cwd.relativize(storyFile.toURI()).toString());
    }
    return result;
  }

  @Override
  public Configuration configuration() {
    return super.configuration()
        .useStoryReporterBuilder(new StoryReporterBuilder()
            .withFormats(Format.XML, Format.STATS, Format.CONSOLE)
            .withRelativeDirectory("../build/jbehave"))
        .usePendingStepStrategy(new FailingUponPendingStep())
        .useFailureStrategy(new SilentlyAbsorbingFailure());
  }
}

This uses JUnit 4's @RunWith annotation to indicate the class that will run the test. The AnnotatedEmbedderRunner is a JUnit Runner that JBehave provides.
It looks for the @UsingEmbedder annotation to determine how to run the stories:

- generateViewAfterStories instructs JBehave to create a test report after running the stories.
- ignoreFailureInStories prevents JBehave from throwing an exception when a story fails. This is essential for the integration with Jenkins, as we'll see below.

The @UsingSteps annotation links the steps in the scenarios to Java code. More on that below. You can list more than one class.

Our test class re-uses the JUnitStories class from JBehave, which makes it easy to run multiple stories. We only have to implement two methods: storyPaths() and configuration().

The storyPaths() method tells JBehave where to find the stories to run. Our version is a little bit complicated, because we want to be able to run the tests both from our IDE and from the command line, and because we want to be able to run either all stories or a specific sub-set. We use the system property bdd.stories to indicate which stories to run. This includes support for wildcards. Our naming convention requires that story file names start with the persona, so we can easily run all stories for a single persona using something like -Dbdd.stories=wanda_*.

The configuration() method tells JBehave how to run stories and report on them. We need output in XML for further processing in Jenkins, as we'll see below. One thing of interest is the location of the reports. JBehave supports Maven, which is fine, but it assumes that everybody follows Maven conventions, which is really not the case. The output goes into a directory called target by default, but we can override that by specifying a path relative to the target directory. We use Gradle instead of Maven, and Gradle's temporary files go into the build directory, not target. More on Gradle below.

Steps

Now we can run our stories, but they will fail. We need to tell JBehave how to map the Given/When/Then steps in the scenarios to Java code.
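For the ATM scenario above, a matching Steps class could look something like this. This is a minimal sketch: the Atm class and its methods are made up for illustration, but the @Given/@When/@Then annotations come from org.jbehave.core.annotations:

```java
import static org.junit.Assert.assertEquals;

import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.Then;
import org.jbehave.core.annotations.When;

// Sketch of a Steps class for the ATM scenario.
// The Atm class is hypothetical; the annotations are JBehave's own.
public class AtmSteps {

  private Atm atm;

  @Given("the ATM has $amount")
  public void givenAtmHolds(String amount) {
    atm = new Atm(dollars(amount));
  }

  @Given("my balance is $amount")
  public void givenMyBalanceIs(String amount) {
    atm.setBalance(dollars(amount));
  }

  @When("I withdraw $amount")
  public void whenIWithdraw(String amount) {
    atm.withdraw(dollars(amount));
  }

  @Then("the ATM has $amount")
  public void thenAtmHolds(String amount) {
    assertEquals(dollars(amount), atm.getHoldings());
  }

  @Then("my balance is $amount")
  public void thenMyBalanceIs(String amount) {
    assertEquals(dollars(amount), atm.getBalance());
  }

  // The captured parameter includes the literal '$' sign, so strip it
  private int dollars(String amount) {
    return Integer.parseInt(amount.replace("$", ""));
  }
}
```

JBehave matches the annotation text against each step in the scenario and binds $amount to the text at that position; since the scenario text contains a literal dollar sign, the parameter is captured as a String here and converted in dollars().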
The Steps classes determine the vocabulary that can be used in the scenarios. As such, they define a Domain Specific Language (DSL) for acceptance testing our application. Our application has a RESTful interface, so we wrote a generic REST DSL. However, due to the HATEOAS constraint in REST, a client needs a lot of calls to discover the URIs that it should use. Writing scenarios gets pretty boring and repetitive that way, so we added an application-specific DSL on top of the REST DSL. This allows us to write scenarios in terms the Product Owner understands. Layering the application-specific steps on top of the generic REST steps has some advantages:

- It's easy to implement a new application-specific DSL, since it only needs to call the REST-specific DSL.
- The REST-specific DSL can be shared with other projects.

Gradle

With the Steps in place, we can run our stories from our favorite IDE. That works great for developers, but can't be used for Continuous Integration (CI). Our CI server runs a headless build, so we need to be able to run the BDD scenarios from the command line. We automate our build with Gradle, and Gradle can already run JUnit tests. However, our build is a multi-project build. We don't want to run our BDD scenarios until all projects are built, a distribution is created, and the application is started. So first off, we disable running tests on the project that contains the BDD stories:

test {
  onlyIf { false } // We need a running server
}

Next, we create another task that can be run after we start our application:

task acceptStories(type: Test) {
  ignoreFailures = true
  doFirst {
    // Need 'target' directory on *nix systems to get any output
    file('target').mkdirs()

    def filter = System.getProperty('bdd.stories')
    if (filter == null) {
      filter = '*'
    }
    def stories = sourceSets.test.resources.matching {
      it.include filter
    }.asPath
    systemProperty('bdd.stories', stories)
  }
}

Here we see the power of Gradle.
We define a new task of type Test, so that it can already run JUnit tests. Next, we configure that task using a little Groovy script. First, we make sure the target directory exists. We don't need or even want it, but without it, JBehave doesn't work properly on *nix systems. I guess that's a little Maven-ism. Next, we add support for running a sub-set of the stories, again using the bdd.stories system property. Our story files are located in src/test/resources, so we can easily get access to them using the standard Gradle test source set. We then set the system property bdd.stories for the JVM that runs the tests.

Jenkins

So now we can run our BDD scenarios from both our IDE and the command line. The next step is to integrate them into our CI build. We could just archive the JBehave reports as artifacts, but, to be honest, the reports that JBehave generates aren't all that great. Fortunately, the JBehave team also maintains a plug-in for the Jenkins CI server. This plug-in requires prior installation of the xUnit plug-in. After installing the xUnit and JBehave plug-ins into Jenkins, we can configure our Jenkins job to use the JBehave plug-in. First, add an xUnit post-build action. Then, select the JBehave test report.

With this configuration, the output from running JBehave on our BDD stories looks just like that for regular unit tests. Note that the yellow part in the graph indicates pending steps. Those are used in the BDD scenarios, but have no counterpart in the Java Steps classes. Pending steps are shown in the Skip column in the test results. Notice how the JBehave Jenkins plug-in translates stories to tests and scenarios to test methods. This makes it easy to spot which scenarios require more work.

Although the JBehave plug-in works quite well, there are two things that could be improved:

- The output from the tests is not shown. This makes it hard to figure out why a scenario failed. We therefore also archive the JUnit test report.
- If you configure ignoreFailureInStories to be false, JBehave throws an exception on a failure, which truncates the XML output. The JBehave Jenkins plug-in can then no longer parse the XML (since it's not well-formed) and fails entirely, leaving you without test results.

All in all, these are minor inconveniences, and we're very happy with our automated BDD scenarios.

Happy coding and don't forget to share! Reference: Behavior-Driven Development (BDD) with JBehave, Gradle, and Jenkins from our JCG partner Remon Sinnema at the Secure Software Development blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.