Capacity Planning and the Project Portfolio

I was problem-solving with a potential client the other day. They want to manage their project portfolio. They use Jira, so they think they can see everything everyone is doing. (I’m a little skeptical, but okay.) They want to know how much the teams can do, so they can do capacity planning based on that. (Red flag #1) The worst part? They don’t have feature teams. They have component teams: front end, middleware, back end. You might, too. (Red flag #2)

Problem #1: They have a very large program, not a series of unrelated projects. They also have projects.

Problem #2: They want to use capacity planning, instead of flowing work through teams. They are setting themselves up to optimize at the lowest level, instead of optimizing at the highest level of the organization.

If you read Manage Your Project Portfolio: Increase Your Capacity and Finish More Projects, you understand this problem. A program is a strategic collection of projects where the business value of all of the projects together is greater than that of any single project. Each project has value, yes. But taken together, the program has much more value. You have to consider the program as a whole.

Don’t Predict the Project Portfolio Based on Capacity

If you are considering doing capacity planning based on the teams’ estimation or previous capacity, don’t do it. First, you can’t possibly know based on previous data. Why? Because the teams are interconnected in interesting ways. When you have component teams, not feature teams, their interdependencies are significant and unpredictable. Your ability to predict the future based on past velocity? Zero. Nada. Zilch. This is legacy thinking from waterfall.

Well, you can try to do it this way. But you will be wrong in many dimensions:

- You will make mistakes because of prediction based on estimation. Estimates are guesses. When you have teams using relative estimation, you have problems.
- Your estimates will be off because of the silent interdependencies that arise from component teams. No one can predict these if you have large stories, even if you do awesome program management. The larger the stories, the more your estimates are off. The longer the planning horizon, the more your estimates are off.
- You will miss all the great ideas for your project portfolio that arise from innovation you can’t predict in advance. As the teams complete features, and as the product owners realize what the teams do, the teams and the product owners will have innovative ideas. You, the management team, want to be able to capitalize on this feedback.

It’s not that estimates are bad. It’s that estimates are off. The more teams you have, the less your estimates are normalized between teams. Your t-shirt sizes are not my Fibonacci numbers, are not that team’s swarming or mobbing. (It doesn’t matter if you have component teams or feature teams for this to be true.) When you have component teams, you have the additional problem of not knowing how the interdependencies affect your estimates. Your estimates will be off, because no one’s estimates take the interdependencies into account.

You don’t want to normalize estimates among teams. You want to normalize story size. Once you make story size really small, it doesn’t matter what the estimates are. When you make the story size really small, the product owners are in charge of the team’s capacity and release dates. Why? Because they are in charge of the backlogs and the roadmaps. The more a program stops trying to estimate at the low level, uses small stories, and manages interdependencies at the team level, the more momentum the program has.

The part where you gather all the projects? Do that part. You need to see all the work. Yes, that part works and helps the program see where it is going.
Use Value for the Project Portfolio

Okay, so you try to estimate the value of the features, epics, or themes in the roadmap of the project portfolio. Maybe you even use the cost of delay, as Jutta and I suggest in Diving for Hidden Treasures: Finding the Real Value in Your Project Portfolio (yes, this book is still in progress). How will you know if you are correct? You don’t. You see the demos the teams provide, and you reassess on a reasonable time basis. What’s reasonable? Not every week or two. Give the teams a chance to make progress. If people are multitasking, not more often than once every two months, or every quarter. They have to get to each project. Hint: stop the multitasking and you get tons more throughput.

Reference: Capacity Planning and the Project Portfolio from our JCG partner Johanna Rothman at the Managing Product Development blog.
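Cost of delay can be made concrete with a small calculation. The sketch below ranks portfolio items by CD3 (cost of delay divided by duration), one common way to apply cost of delay when sequencing work; the item names and figures are invented for illustration and are not from the article.

```python
# Hypothetical sketch: ranking portfolio items by CD3
# (Cost of Delay Divided by Duration). Names and figures are invented.

def cd3(cost_of_delay_per_week, duration_weeks):
    """Higher CD3 means: each week of delay on this item costs more
    relative to the work it takes to finish it."""
    return cost_of_delay_per_week / duration_weeks

items = [
    ("Feature A", 10_000, 4),   # $10k/week cost of delay, 4 weeks of work
    ("Feature B", 3_000, 1),    # cheap to delay, but very quick to finish
    ("Feature C", 20_000, 10),  # expensive to delay, but long to build
]

# Do the highest-CD3 work first: Feature B ranks first here even though
# its absolute cost of delay is the smallest.
ranked = sorted(items, key=lambda i: cd3(i[1], i[2]), reverse=True)
for name, cod, weeks in ranked:
    print(f"{name}: CD3 = {cd3(cod, weeks):,.0f}")
```

The point of the exercise is the ordering, not the absolute numbers: small, quick, valuable items can jump ahead of big-ticket work.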

Infeasible software projects are launched all the time

Infeasible software projects are launched all the time and teams are continually caught up in them, but what is the real source of the problem? There are projects that will actually take 2 years for which the executives set a 6 month deadline. The project is guaranteed to fail, but is this due to executive ignorance or IT impotence?

There is no schedule risk in an infeasible project, because the deadline will be missed. Schedule risk only exists in the presence of uncertainty (see Schedule Risk is a Red Herring!!!). As you might expect, executives and IT managers share responsibility for infeasible projects that turn into death marches. Learn about the nasty side effects in Death March Calculus. The primary causes of infeasible projects are:

- Rejection of formal estimates
- No estimation, or improper estimation methods

Rejecting Formal Estimates

This situation occurs frequently; an example would be the Denver Baggage Handling System (see Case Study). The project was automatically estimated (correctly) to take 2 years; however, executives declared that IT would only have 1 year to deliver. Of course, they failed [1]. The estimate was rejected by executives because it did not fit their desires. They could not have enjoyed the subsequent software disaster and bad press.

When executives ignore formal estimates, they get what they deserve. Formal estimates are ignored because executives believe that through sheer force of will they can set deadlines. If IT managed to get the organization to pay for formal estimating tools, then it is not their problem that the executives refuse to go along with it.

Improper Estimation Methods

The next situation that occurs frequently is using estimation processes that have low validity. Estimation has been extensively studied and documented by Tom DeMarco, Capers Jones, Ed Yourdon, and others. Improper estimation methods will underestimate a software project every time.
Fast estimates are based on what you can think of; unfortunately, software is not tangible, so what you are aware of is like the tip of an iceberg. None of this prevents executives from demanding fast estimates from development. Even worse, development managers will cave in to ridiculous demands and actually give fast estimates. Poor estimates are guaranteed to lead to infeasible projects (see Who Needs Formal Measurement?). Poor estimates are delivered by IT managers who:

- Can’t convince executives to use formal tools
- Give in to extreme pressure for fast estimates

Infeasible projects that result from poor estimates are a matter of IT impotence.

Conclusion

Both executive ignorance and IT impotence lead to infeasible projects on a regular basis, because of poor estimates and rejected estimates; so there is no surprise here. However, infeasible projects are a failure of executives and IT equally, because we are all on the same team. It is not possible for part of the organization to succeed if the other part fails. Possibly a greater share of the problem lies with IT management. After all, whose responsibility is a bad decision: the people who know what the issues are, or the ones who don’t? If a child wants ice cream before dinner, whose fault is it if you cave in and give them the ice cream? Unfortunately, even after 60 years of developing software projects, IT managers are either as ignorant as the executives or simply have no intestinal fortitude. Even when IT managers convince executives of the importance of estimating tools, the estimates are routinely discarded because they do not meet executive expectations:

Rejection of automated estimates: productivity -16%, quality -22%

Until we get a generation of IT managers who are prepared to educate executives on the necessity of proper estimation, and to be stubborn about holding to those estimates, we are likely to continue to have an estimated $3 trillion in software project failures every year.
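The “tip of the iceberg” effect can be made concrete with a little arithmetic. The sketch below assumes, purely hypothetically, that a fast estimate covers only the fraction of the work the estimator can enumerate up front; the 30% figure is invented for illustration, not taken from any study cited here.

```python
# Hypothetical sketch: how a fast estimate understates total work when
# only a fraction of the work is visible up front. The 30% is invented.

def implied_actual(fast_estimate_months, visible_fraction):
    """If the estimate covers only the visible work, the implied
    actual duration scales it up by 1 / visible_fraction."""
    return fast_estimate_months / visible_fraction

# A "6 month" gut-feel estimate, when only ~30% of the work was visible,
# implies roughly 20 months of actual work:
print(implied_actual(6, 0.30))
```

Under this toy model, the gap between estimate and actual grows in direct proportion to how much of the iceberg sits below the waterline.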
End Notes

[1] For inquiring minds: good automated estimation systems have been shown to be within 5% of time and cost on a regular basis. Contact me for additional information.

References

- Jones, Capers. Scoring and Evaluating Software Methods, Practices, and Results. 2008.
- Jones, Capers. The Economics of Software Quality. 2011.
- Kahneman, Daniel. Thinking, Fast and Slow. 2011.

Reference: Infeasible software projects are launched all the time from our JCG partner Dalip Mahal at the Accelerated Development blog.

Don’t waste time on Code Reviews

Less than half of development teams do code reviews, and the other half are probably not getting as much out of code reviews as they should. Here’s how to not waste time on code reviews.

Keep it Simple

Many people still think of code reviews as expensive formal code inspection meetings, with lots of prep work required before a room full of reviewers can slowly walk through the code together around a table, with the help of a moderator and a secretary. Lots of hassles and delays and paperwork. But you don’t have to do code reviews this way, and you shouldn’t. Several recent studies show that setting up and holding formal code review meetings adds to development delays and costs without adding value. While it can take weeks to schedule a code review meeting, only 4% of defects are found in the meeting itself; the rest are all found by reviewers looking through code on their own.

At shops like Microsoft and Google, developers don’t attend formal code review meetings. Instead, they take advantage of collaborative code review platforms like Gerrit, CodeFlow, Collaborator, ReviewBoard, or Crucible, or use e-mail to request reviews asynchronously and to exchange information with reviewers. These lightweight reviews (done properly) are just as effective at finding problems in code as inspections, but much less expensive and much easier to schedule and manage. Which means they are done more often. And these reviews fit much better with iterative, incremental development, providing developers with faster feedback (within a few hours or at most a couple of days, instead of weeks for formal inspections).

Keep the number of reviewers small

Some people believe that if two heads are better than one, then three heads are even better, and four heads even more better, and so on… So why not invite everyone on the team into a code review? Answer: because it is a tragic waste of time and money.
As with any practice, you will quickly reach a point of diminishing returns as you try to get more people to look at the same code. On average, one reviewer will find roughly half of the defects on their own. In fact, in a study at Cisco, developers who double-checked their own work found half of the defects without the help of a reviewer at all! A second reviewer will find half as many new problems as the first reviewer. Beyond this point, you are wasting time and money. One study showed no difference in the number of problems found by teams of 3, 4, or 5 reviewers, while another showed that 2 reviewers actually did a better job than 4.

This is partly because of overlap and redundancy: more reviewers means more people looking for and finding the same problems (and more people coming up with false positive findings that the author has to sift through). And as Geoff Crain at Atlassian explains, there is a “social loafing” problem: complacency and a false sense of security set in as you add more reviewers. Because each reviewer knows that somebody else is looking at the same code, they are under less pressure to find problems. This is why at shops like Google and Microsoft, where reviews are done successfully, the median number of reviewers is 2 (although there are times when an author may ask for more input, especially when the reviewers don’t agree with each other). But what’s even more important than getting the right number of reviewers is getting the right people to review your code.

Code Reviews shouldn’t be done by n00bs – but they should be done for n00bs

By reviewing other people’s code, a developer will get exposed to more of the code base and learn some new ideas and tricks. But you can’t rely on new team members to learn how the system works, or to really understand the coding conventions and architecture, just by reviewing other developers’ code.
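The diminishing returns described above can be sketched numerically. The toy model below assumes, as a simplification of the studies cited, that the first reviewer finds 50% of the defects and each additional reviewer finds half as many new defects as the previous one; the exact ratios are illustrative, not measured data.

```python
# Toy model of diminishing returns in code review: reviewer 1 finds 50%
# of defects, and each extra reviewer finds half as many NEW defects as
# the previous one. The ratios are illustrative, not from the studies.

def defects_found(total_defects, reviewers):
    found, share = 0.0, 0.5
    for _ in range(reviewers):
        found += total_defects * share  # new defects this reviewer adds
        share /= 2                      # next reviewer finds half as many
    return found

for n in range(1, 5):
    print(f"{n} reviewer(s): ~{defects_found(100, n):.0f} of 100 defects")
```

Going from one reviewer to two adds a lot; going from three to four adds almost nothing, which is consistent with the studies showing 2 reviewers as the sweet spot.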
Asking a new team member to review other people’s code is a lousy way to train people, and a lousy way to do code reviews. Research backs up what should be obvious: the effectiveness of code reviews depends heavily on the reviewer’s skill and familiarity with the problem domain and with the code. Like other areas in software development, the differences in review effectiveness can be huge, as much as 10x between the best and worst performers. A study on code reviews at Microsoft found that reviewers from outside of the team, or who were new to the team and didn’t know the code or the problem area, could only do a superficial job of finding formatting issues or simple logic bugs.

This means that your best developers, team leads, and technical architects will spend a lot of time reviewing code, and they should. You need reviewers who are good at reading code and good at debugging, and who know the language, framework, and problem area well. They will do a much better job of finding problems, and can provide much more valuable feedback, including suggestions on how to solve the problem in a simpler or more efficient way, or how to make better use of the language and frameworks. And they can do all of this much faster.

If you want new developers to learn about the code, coding conventions, and architecture, it will be much more effective to pair them up with an experienced team member in pair programming or pair debugging. If you want new, inexperienced developers to do reviews (or if you have no choice), lower your expectations. Get them to review straightforward code changes (which don’t require in-depth reviews), or recognize that you will need to depend a lot more on static analysis tools and another reviewer to find real problems.

Substance over Style

Reviewing code against coding standards is a sad way for a developer to spend their valuable time.
Fight the religious style wars early, get everyone to use the same coding style templates in their IDEs, and use a tool like Checkstyle to ensure that code is formatted consistently. Free up reviewers to focus on the things that matter: helping developers write better code, code that works correctly and that is easy to maintain.

“I’ve seen quite a few code reviews where someone commented on formatting while missing the fact that there were security issues or data model issues.” – Senior developer at Microsoft, from a study on code review practices

Correctness – make sure that the code works; look for bugs that might be hard to find in testing:

- Functional correctness: does the code do what it is supposed to do? The reviewer needs to know the problem area, the requirements, and usually something about this part of the code to be effective at finding functional correctness issues.
- Coding errors: low-level coding mistakes like using <= instead of <, off-by-one errors, using the wrong variable (like mixing up lessee and lessor), copy-and-paste errors, leaving debugging code in by accident.
- Design mistakes: errors of omission, incorrect assumptions, messing up architectural and design patterns like MVC, abuse of trust.
- Safety and defensiveness: data validation, threading and concurrency (time-of-check/time-of-use mistakes, deadlocks and race conditions), error handling and exception handling and other corner cases.
- Malicious code: back doors or trap doors, time bombs or logic bombs.
- Security: properly enforcing security and privacy controls (authentication, access control, auditing, encryption).

Maintainability:

- Clarity: class, method, and variable naming, comments, …
- Consistency: using common routines or language/library features instead of rolling your own, following established conventions and patterns.
- Organization: poor structure, duplicate or unused/dead code.
- Approach: areas where the reviewer can see a simpler, cleaner, or more efficient implementation.

Where should reviewers spend most of their time?

Research shows that reviewers find far more maintainability issues than defects (a ratio of 75:25) and spend more time on code clarity and understandability problems than correctness issues. There are a few reasons for this. Finding bugs in code is hard. Finding bugs in someone else’s code is even harder. In many cases, reviewers don’t know enough to find material bugs or offer meaningful insight on how to solve problems. Or they don’t have time to do a good job. So they cherry-pick easy code clarity issues like poor naming or formatting inconsistencies.

But even experienced and serious reviewers can get caught up in what at first seem to be minor issues about naming or formatting, because they need to understand the code before they can find bugs, and code that is unnecessarily hard to read gets in the way and distracts them from more important issues. This is why programmers at Microsoft will sometimes ask for 2 different reviews: a superficial “code cleanup” review from one reviewer that looks at standards and code clarity and editing issues, followed by a more in-depth review to check correctness after the code has been tidied up.

Use static analysis to make reviews more efficient

Take advantage of static analysis tools upfront to make reviews more efficient. There’s no excuse not to at least use free tools like FindBugs and PMD for Java to catch common coding bugs and inconsistencies, and sloppy, messy, or dead code, before submitting the code to someone else for review. This frees the reviewer up from having to look for micro-problems and bad practices, so they can look for higher-level mistakes instead. But remember that static analysis is only a tool to help with code reviews, not a substitute. Static analysis tools can’t find functional correctness problems or design inconsistencies or errors of omission, or help you to find a better or simpler way to solve a problem.

Where’s the risk?

We try to review all code changes.
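As an aside, the low-level coding errors mentioned earlier (such as using <= instead of <) are exactly the kind of thing a careful reviewer or a static analysis tool catches. A minimal invented illustration of a boundary mistake:

```python
# Invented illustration of a classic boundary bug: <= instead of <.
# Counting values in the half-open interval [low, high).

def count_in_range(values, low, high):
    # Buggy variant a reviewer should flag, because it silently
    # includes the upper bound:
    #   return sum(1 for v in values if low <= v <= high)
    # Correct version, excluding the upper bound:
    return sum(1 for v in values if low <= v < high)

print(count_in_range([1, 5, 10], 1, 10))  # 2: the value 10 is excluded
```

One character separates the two versions, which is why such bugs survive testing and need a reviewer who reads boundary conditions carefully.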
But you can get most of the benefits of code reviews by following the 80:20 rule: focus reviews on high-risk code and high-risk changes.

High-risk code:

- Network-facing APIs
- Plumbing (framework code, security libraries, …)
- Critical business logic and workflows
- Command and control and root admin functions
- Safety-critical or performance-critical (especially real-time) sections
- Code that handles private or sensitive data
- Old code, code that is complex, code that has been worked on by a lot of different people, code that has had a lot of bugs in the past – error-prone code

High-risk changes:

- Code written by a developer who has just joined the team
- Big changes
- Large-scale refactoring (redesign disguised as refactoring)

Get the most out of code reviews

Code reviews add to the cost of development, and if you don’t do them right they can destroy productivity and alienate the team. But they are also an important way to find bugs and for developers to help each other write better code. So do them right.

Don’t waste time on meetings and moderators and paperwork. Do reviews early and often. Keep the feedback loops as tight as possible. Ask everyone to take reviews seriously: developers and reviewers. No rubber-stamping, or letting each other off the hook. Make reviews simple, but not sloppy. Ask the reviewers to focus on what really matters: correctness issues, and things that make the code harder to understand and harder to maintain. Don’t waste time arguing about formatting or style. Make sure that you always review high-risk code and high-risk changes. Get the best people available to do the job: when it comes to reviewers, quality is much more important than quantity.

Remember that code reviews are only one part of a quality program. Instead of asking more people to review code, you will get more value by putting time into design reviews or writing better testing tools or better tests.
A code review is a terrible thing to waste.

Reference: Don’t waste time on Code Reviews from our JCG partner Jim Bird at the Building Real Software blog.

Route 53 Benchmark: The New AWS Geolocation’s Surprising Results

Latency vs. Geolocation: Testing DNS configurations across multiple EC2 regions using AWS Route 53

If you’re using the AWS stack, you’ve probably been through this: deciding which EC2 instances to fire up, and which regions to deploy them in, is tricky. Some of you might have started multiple EC2 instances behind a load balancer, but that’s almost never enough. Our Aussie friends shouldn’t have to wait for resources coming from Virginia. What we really need is an easy-to-use global solution. This is where Amazon’s Route 53 DNS routing comes in handy. Adding routing policies to your domain will help guarantee that users get the fastest responses, and as we all know, speed == happiness. At the end of July 2014, Amazon announced a new Route 53 routing policy: Geolocation. We are big advocates of latency-based routing, so we wanted to put the new policy to the test.

Results

Since this benchmark’s goal is to make sense of DNS, we looked at the DNS name lookup time. We used curl to get a breakdown of how long each step lasted. Here are the average lookup times from EC2 regions. Full results can be found here.

Insights

- The tests show that switching to geolocation adds 75 ms on average – a very high figure, especially if you’re trying to optimize your cluster and first user experience.
- If we exclude Sao Paulo from the list altogether, we get an impressive 127 ms average lookup time across the other regions for both policies. I checked the numbers twice to make sure it’s not a mirage. Exactly 1 2 7 ms whether you go with geolocation or latency. On our EC2->S3 benchmark it was Sydney that was kicked out; with Route 53 it’s Sao Paulo.
- The biggest winner is Europe. It had the lowest latency-based lookup, the lowest geolocation-based lookup, and the lowest difference between latency and geolocation – only 3 ms!
- At the bottom of the list, Sao Paulo performed the worst. It came in last in all three criteria: latency, geolocation, and difference.
- Geolocation lookup in South America took 3x more time than the latency lookup.
- Zooming into North America, the fastest name lookup in both latency and geolocation was California. The slowest one was Virginia, which had the second biggest difference between latency and geolocation; geolocation in our tests was around 1.5x slower.
- Geolocation was faster in Oregon, California, and Singapore. Latency was faster in Virginia, Europe, Japan, and Brazil.

Setting up the test

EC2 – We deployed a simple Tomcat/Nginx web application onto EC2 instances (m3.medium) in all available AWS regions (excluding China’s beta region). The webapp contained several Java servlets that returned HTTP_OK 200 upon request.

Route 53 – We purchased two domains beforehand, one for latency and one for geolocation. AWS has great docs on how to set up the record sets for latency-based and geolocation-based routing. For latency-based routing we redirected the domain to all regions where we have EC2 running. For geolocation we redirected each continent to the closest region.

Bash – After setting up all instances, we ran this snippet to test the lookup time for the domains. We decided to look at lookup times alone, since the connect time shown by curl was ~1 ms and didn’t change the results.

```shell
# Restart the local DNS cache on Ubuntu so we won't hit cached results
sudo /etc/init.d/nscd restart
# Measure name lookup time
curl --no-sessionid -s -w '\nLookup time:\t%{time_namelookup}\n' \
  -o /dev/null http://takipi-route53-testing-{latency|geo}.com/webapp/speed/normal
```

Conclusion

There was no knock-out winner here. Although latency-based routing proved to be faster, there were some cases where geolocation-based routing performed better. The fastest average lookup was latency-based, originating from Europe.
In the end, unless you require some country-specific routing, the DNS routing policy’s sweet spot is (still) latency-based routing.

Reference: Route 53 Benchmark: The New AWS Geolocation’s Surprising Results from our JCG partner Chen Harel at the Takipi blog.

Writing Tests for Data Access Code – Data Matters

When we write tests for our data access code, we use datasets for two different purposes:

- We initialize our database into a known state before our data access tests are run.
- We verify that the correct changes are found from the database.

These seem like easy tasks. However, it is very easy to mess things up in a way that makes our life painful and costs us a lot of time. That is why I decided to write this blog post. This blog post describes the three most common mistakes we can make when we use DbUnit datasets, and more importantly, it describes how we can avoid making them.

The Three Deadly Sins of DbUnit Datasets

The most common reason why libraries like DbUnit have such a bad reputation is that developers use them in the wrong way and complain after they have shot themselves in the foot. It is true that when we use DbUnit datasets, we can make mistakes that cause a lot of frustration and cost us a lot of time. That is why we must understand what these mistakes are so that we can avoid making them. There are three common (and costly) mistakes that we can make when we are using DbUnit datasets:

1. Initializing the Database by Using a Single Dataset

The first mistake that we can make is to initialize our database by using a single dataset. Although this is pretty handy if our application has only a handful of functions and a small database with a few tables, this might not be the case if we are working on a real-life software project. The odds are that our application has many functions and a large database with tens (or hundreds) of tables. If we use this approach in a real-life software project, our dataset is going to be HUGE because:

- Every database table increases the size of our dataset.
- The number of tests increases the size of our dataset because different tests require different data.

The size of our dataset is a big problem because:

- The bigger the dataset, the slower it is to initialize the database into a known state before our tests are run. To make matters worse, our tests become slower and slower as we add new database tables or write new tests.
- It is impossible to find out what data is relevant for a specific test case without reading the tested code. If a test case fails, figuring out the reason is a lot harder than it should be.

Example

Let’s assume that we have to write tests for a CRM that is used to manage the information of our customers and offices. Each customer and office is located in a city. The first version of our dataset could look as follows:

```xml
<?xml version='1.0' encoding='UTF-8'?>
<dataset>
    <cities id="1" name="Helsinki"/>
    <customers id="1" city_id="1" name="Company A"/>
    <offices id="1" city_id="1" name="Office A"/>
</dataset>
```

We can see immediately that our test suite has to invoke one unnecessary INSERT statement per test case. This might not seem like a big deal, but let’s see what happens when we have to write tests for functions that list the customers and offices located in a specific city. After we have written these tests, our dataset looks as follows:

```xml
<?xml version='1.0' encoding='UTF-8'?>
<dataset>
    <cities id="1" name="Helsinki"/>
    <cities id="2" name="Tampere"/>
    <cities id="3" name="Turku"/>
    <customers id="1" city_id="1" name="Company A"/>
    <customers id="2" city_id="2" name="Company B"/>
    <offices id="1" city_id="1" name="Office A"/>
    <offices id="2" city_id="3" name="Office B"/>
</dataset>
```

As we can see:

- Our test suite has to invoke three unnecessary INSERT statements per test case.
- It is not clear what data is relevant for a specific test case, because our dataset initializes the whole database before each test is run.

This might not seem like a catastrophic failure (and it isn’t), but this example still demonstrates why we shouldn’t follow this approach when we write tests for real-life applications.

2. Creating One Dataset per Test Case or Group of Test Cases

We can solve the problems created by a single dataset by splitting it into smaller datasets. If we decide to do this, we can create one dataset per test case or per group of test cases. If we follow this approach, each one of our datasets should contain only the data that is relevant to the test case (or test cases). This seems like a good idea because our datasets are smaller and each dataset contains only the relevant data.

However, we must remember that the road to hell is paved with good intentions. Although our tests are faster than the tests that use a single dataset, and it is easy to find the data that is relevant for a specific test case, this approach has one major drawback: maintaining our datasets becomes hell. Because many datasets contain data that is inserted into the same tables, maintaining these datasets takes a lot of work if the structure of those tables is changed (or should we say when?).

Example

If we use this approach when we write tests for the CRM that was introduced earlier, we could split our single dataset into two smaller datasets. The first dataset contains the information that is required when we write tests for the functions that are used to manage the information of our customers.
It looks as follows:

```xml
<?xml version='1.0' encoding='UTF-8'?>
<dataset>
    <cities id="1" name="Helsinki"/>
    <cities id="2" name="Tampere"/>
    <customers id="1" city_id="1" name="Company A"/>
    <customers id="2" city_id="2" name="Company B"/>
</dataset>
```

The second dataset contains the information that we need when we are writing tests for the functions that are used to manage the information of our offices. The second dataset looks as follows:

```xml
<?xml version='1.0' encoding='UTF-8'?>
<dataset>
    <cities id="1" name="Helsinki"/>
    <cities id="3" name="Turku"/>
    <offices id="1" city_id="1" name="Office A"/>
    <offices id="2" city_id="3" name="Office B"/>
</dataset>
```

What happens if we make changes to the structure of the cities table? Exactly! That is why following this approach is not a good idea.

3. Asserting Everything

We can create a dataset that is used to verify that the correct data is found from the database by following these steps:

1. Copy the data from the dataset that is used to initialize the database into a known state before our tests are run.
2. Paste its content into the dataset that is used to verify that the correct data is found from the database.
3. Make the required changes to it.

Following these steps is dangerous because it makes sense. After all, if we have initialized our database by using dataset X, it seems logical to use that dataset as the starting point for the dataset that ensures the correct information is found from the database. However, this approach has three drawbacks:

- It is hard to figure out the expected result, because these datasets often contain information that is not changed by the tested code. This is a problem especially if we have made either mistake one or two.
- Because these datasets contain information that isn’t changed by the tested code (such as common database tables), maintaining these datasets is going to take a lot of unnecessary work.
If we change the structure of those database tables, we have to make the same change to our datasets as well. This is something that we don’t want to do. Because these datasets often contain unnecessary information (information that is not changed by the tested code), verifying that the expected information is found in the database is slower than it could be. Example Let’s assume that we have to write tests for a function that updates the information of a customer (the id of the updated customer is 2). The dataset that initializes the database into a known state before this test is run looks as follows: <?xml version='1.0' encoding='UTF-8'?> <dataset> <cities id="1" name="Helsinki"/> <cities id="2" name="Tampere"/> <customers id="1" city_id="1" name="Company A"/> <customers id="2" city_id="2" name="Company B"/> </dataset> The dataset that ensures that the correct information is saved to the database looks as follows: <?xml version='1.0' encoding='UTF-8'?> <dataset> <cities id="1" name="Helsinki"/> <cities id="2" name="Tampere"/> <customers id="1" city_id="1" name="Company A"/> <customers id="2" city_id="1" name="Company B"/> </dataset> Let’s go through the drawbacks of this solution one by one: It is pretty easy to figure out what information should be updated because the size of our dataset is so small, but it isn’t as easy as it could be. If our dataset were bigger, this would naturally be a lot harder. This dataset contains the information found in the cities table. Because this information isn’t modified by the tested function, our tests have to make irrelevant assertions, and this means that our tests are slower than they could be. If we change the structure of the cities database table, we have to modify the dataset that verifies that the correct information is saved to the database.
This means that maintaining these datasets takes a lot of time and forces us to do unnecessary work. Datasets Done Right We have now identified the three most common mistakes developers make when they are using DbUnit datasets. Now it is time to find out how we can avoid making these mistakes and use datasets effectively in our tests. Let’s start by taking a closer look at the requirements of a good test suite. The requirements of a good test suite are: It must be easy to read. If our test suite is easy to read, it acts as documentation that is always up to date, and it is faster to figure out what is wrong when a test case fails. It must be easy to maintain. A test suite that is easy to maintain will save us a lot of time that we can use more productively. Also, it will probably save us from a lot of frustration. It must be as fast as possible because a fast test suite ensures fast feedback, and fast feedback means that we can use our time more productively. Also, we must understand that although an integration test suite is typically a lot slower than a unit test suite, it makes no sense to abandon this requirement. In fact, I claim that we must pay more attention to it, because if we do so, we can significantly reduce the execution time of our test suite. You might be wondering why I didn’t mention that each test case must be independent. This is indeed an important requirement of a good test suite, but I left it out because if we are already using a tool such as DbUnit, we have probably figured out that our test cases must not depend on other test cases. Now that we know what the requirements of our test suite are, it is a whole lot easier to figure out how we can fulfil them by using DbUnit datasets. If we want to fulfil these requirements, we must follow these rules: 1. Use Small Datasets We must use small datasets because they are easier to read and they ensure that our tests are as fast as possible.
In other words, we must identify the minimum amount of data that is required to write our tests and use only that data. Example The dataset that is used to initialize our database when we test customer-related functions looks as follows: <?xml version='1.0' encoding='UTF-8'?> <dataset> <cities id="1" name="Helsinki"/> <cities id="2" name="Tampere"/> <customers id="1" city_id="1" name="Company A"/> <customers id="2" city_id="2" name="Company B"/> </dataset> On the other hand, the dataset that initializes our database when we run the tests that test office-related functions looks as follows: <?xml version='1.0' encoding='UTF-8'?> <dataset> <cities id="1" name="Helsinki"/> <cities id="3" name="Turku"/> <offices id="1" city_id="1" name="Office A"/> <offices id="2" city_id="3" name="Office B"/> </dataset> If we take a look at the highlighted rows, we notice that our datasets use different cities. We can fix this by modifying the second dataset to use the same cities as the first dataset. After we have done this, the second dataset looks as follows: <?xml version='1.0' encoding='UTF-8'?> <dataset> <cities id="1" name="Helsinki"/> <cities id="2" name="Tampere"/> <offices id="1" city_id="1" name="Office A"/> <offices id="2" city_id="2" name="Office B"/> </dataset> So, what is the point? It might seem that we didn’t achieve much, but we were able to reduce the number of cities used from three to two. The reason why this little improvement is valuable becomes obvious when we take a look at the next rule. Typically DbUnit datasets are big and messy, and they contain a lot of redundant data. If this is the case, following this approach will make our datasets a lot more readable and our tests a lot faster. 2. Divide Large Datasets into Smaller Datasets We have already created two datasets that contain the minimum amount of data that is required to initialize our database before our tests are run.
The problem is that both datasets contain “common” data, and this makes our datasets hard to maintain. We can get rid of this problem by following these steps: identify the data that is used in more than one dataset, and move that data to a separate dataset (or to multiple datasets). Example We have two datasets that look as follows (the common data is highlighted): <?xml version='1.0' encoding='UTF-8'?> <dataset> <cities id="1" name="Helsinki"/> <cities id="2" name="Tampere"/> <customers id="1" city_id="1" name="Company A"/> <customers id="2" city_id="2" name="Company B"/> </dataset> <?xml version='1.0' encoding='UTF-8'?> <dataset> <cities id="1" name="Helsinki"/> <cities id="2" name="Tampere"/> <offices id="1" city_id="1" name="Office A"/> <offices id="2" city_id="2" name="Office B"/> </dataset> We can eliminate our maintenance problem by creating a single dataset that contains the information inserted into the cities table. After we have done this, we have three datasets that look as follows: <?xml version='1.0' encoding='UTF-8'?> <dataset> <cities id="1" name="Helsinki"/> <cities id="2" name="Tampere"/> </dataset> <?xml version='1.0' encoding='UTF-8'?> <dataset> <customers id="1" city_id="1" name="Company A"/> <customers id="2" city_id="2" name="Company B"/> </dataset> <?xml version='1.0' encoding='UTF-8'?> <dataset> <offices id="1" city_id="1" name="Office A"/> <offices id="2" city_id="2" name="Office B"/> </dataset> What did we just do? Well, the most significant improvement is that if we make changes to the cities table, we have to make these changes to only one dataset. In other words, maintaining these datasets is a lot easier than before. 3. Assert Only the Information that Can Be Changed by the Tested Code Earlier we took a look at a dataset that ensured that the correct information is found in the database when we update the information of a customer. The problem is that the dataset contains data that is not changed by the tested code.
This means that: It is hard to figure out the expected result because our dataset contains irrelevant data. Our tests are slower than they could be because they have to make irrelevant assertions. Our tests are hard to maintain because if we make changes to the database, we have to make the same changes to our datasets as well. We can solve every one of these problems by following this simple rule: we must assert only the information that can be changed by the tested code. Let’s find out what this rule means. Example Earlier we created a (problematic) dataset which ensures that the correct information is saved to the database when we update the information of a customer (the id of the updated customer is 2). This dataset looks as follows: <?xml version='1.0' encoding='UTF-8'?> <dataset> <cities id="1" name="Helsinki"/> <cities id="2" name="Tampere"/> <customers id="1" city_id="1" name="Company A"/> <customers id="2" city_id="1" name="Company B"/> </dataset> We can fix its problems by keeping the essential data and removing the rest. If we are writing a test that ensures that the information of the correct customer is updated in the database, it is pretty obvious that we don’t care about the information that is found in the cities table. The only thing that we care about is the data that is found in the customers table. After we have removed the irrelevant information from our dataset, it looks as follows: <?xml version='1.0' encoding='UTF-8'?> <dataset> <customers id="1" city_id="1" name="Company A"/> <customers id="2" city_id="1" name="Company B"/> </dataset> We have now fixed the performance and maintenance problems, but there is still one problem left: our dataset has two rows, and it is not clear which row contains the updated information. This isn’t a huge problem because our dataset is rather small, but it can become a problem when we use bigger datasets. We can fix this issue by adding a comment to our dataset.
After we have done this, our dataset looks as follows: <?xml version='1.0' encoding='UTF-8'?> <dataset> <customers id="1" city_id="1" name="Company A"/> <!-- The information of the updated customer --> <customers id="2" city_id="1" name="Company B"/> </dataset> A lot better. Right? Summary This blog post has taught us that: The road to hell is paved with good intentions. The three most common mistakes that we can make when we are using DbUnit datasets seem like a good idea, but if we make these mistakes in a real-life software project, we shoot ourselves in the foot. We can avoid the problems caused by DbUnit datasets by using small datasets, dividing large datasets into smaller datasets, and asserting only the information that can be changed by the tested code. Reference: Writing Tests for Data Access Code – Data Matters from our JCG partner Petri Kainulainen at the Petri Kainulainen blog....
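The third rule can be illustrated with a small, hypothetical assertion helper (the names here are invented for illustration; DbUnit itself ships filtered comparison methods such as Assertion.assertEqualsIgnoreCols for the same purpose). The helper compares only the columns that the tested code can change and ignores everything else:

```java
import java.util.*;

public class RowAssert {

    // Hypothetical helper: compare only the columns that the tested code
    // can change, and ignore every other column in the result rows.
    static void assertRowsEqual(List<Map<String, Object>> expected,
                                List<Map<String, Object>> actual,
                                Set<String> relevantColumns) {
        if (expected.size() != actual.size()) {
            throw new AssertionError("row count: " + expected.size() + " vs " + actual.size());
        }
        for (int i = 0; i < expected.size(); i++) {
            for (String col : relevantColumns) {
                Object e = expected.get(i).get(col);
                Object a = actual.get(i).get(col);
                if (!Objects.equals(e, a)) {
                    throw new AssertionError("row " + i + ", column " + col + ": " + e + " vs " + a);
                }
            }
        }
    }

    public static void main(String[] args) {
        // The updated customer row from the example above; the cities table
        // is never loaded, so no irrelevant assertions are made.
        var expected = List.of(Map.<String, Object>of("id", 2, "city_id", 1, "name", "Company B"));
        var actual   = List.of(Map.<String, Object>of("id", 2, "city_id", 1, "name", "Company B"));
        assertRowsEqual(expected, actual, Set.of("city_id", "name"));
        System.out.println("assertions passed");
    }
}
```

The point of the sketch is the `relevantColumns` parameter: it is the code-level equivalent of removing the cities rows from the expected dataset.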

The anatomy of Hibernate dirty checking

Introduction The persistence context enqueues entity state transitions that get translated to database statements upon flushing. For managed entities, Hibernate can auto-detect incoming changes and schedule SQL UPDATE statements on our behalf. This mechanism is called automatic dirty checking. The default dirty checking strategy By default Hibernate checks all managed entity properties. Every time an entity is loaded, Hibernate makes an additional copy of all entity property values. At flush time, every managed entity property is matched against the loading-time snapshot value. So the number of individual dirty checks N is given by the following formula: N = p1 + p2 + … + pn, where n = the number of managed entities and pi = the number of properties of the i-th entity. Even if only one property of a single entity has changed, Hibernate will still check all managed entities. For a large number of managed entities, the default dirty checking mechanism may have a significant CPU and memory footprint. Since the initial entity snapshot is held separately, the persistence context requires twice as much memory as all managed entities would normally occupy. Bytecode instrumentation A more efficient approach would be to mark dirty properties upon value change. Analogous to the original deep-comparison strategy, it’s good practice to decouple the domain model structures from the change detection logic. The automatic entity change detection mechanism is a cross-cutting concern that can be woven either at build time or at runtime. The entity class can be appended with bytecode-level instructions implementing the automatic dirty checking mechanism. Weaving types The bytecode enhancement can happen at: Build time. After the Hibernate entities are compiled, the build tool (e.g. Ant, Maven) inserts bytecode-level instructions into each compiled entity class. Because the classes are enhanced at build time, this process exhibits no extra runtime penalty.
Testing can be done against the enhanced class versions, so that the actual production code is validated before the project gets built. Runtime. The runtime weaving can be done using: a Java agent, doing bytecode enhancement upon entity class loading, or a runtime container (e.g. Spring), using JDK instrumentation support. Towards a default bytecode enhancement dirty checking Hibernate 3 has offered bytecode instrumentation through an Ant target, but it never became mainstream, and most Hibernate projects are still using the default deep-comparison approach. While other JPA providers (e.g. OpenJPA, DataNucleus) have favoured the bytecode enhancement approach, Hibernate has only recently started moving in this direction, offering better build-time options and even custom dirty checking callbacks. In my next post I’ll show you how you can customize the dirty checking mechanism with your own application-specific strategy. Reference: The anatomy of Hibernate dirty checking from our JCG partner Vlad Mihalcea at the Vlad Mihalcea’s Blog blog....
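To make the default deep-comparison strategy concrete, here is a minimal plain-Java sketch (invented names, not Hibernate’s actual internals): a snapshot of the property values is taken at load time, and at flush time every property is compared against that snapshot.

```java
import java.util.*;

public class DirtyCheckSketch {

    // Hypothetical load-time snapshot of an entity's property values.
    record Snapshot(Map<String, Object> loaded) {}

    // Flush-time check: every property of every managed entity is compared
    // against its snapshot, so the total work is the sum of p_i over all
    // n managed entities - even if nothing changed.
    static List<String> dirtyProperties(Map<String, Object> current, Snapshot snap) {
        List<String> dirty = new ArrayList<>();
        for (var e : current.entrySet()) {
            if (!Objects.equals(e.getValue(), snap.loaded().get(e.getKey()))) {
                dirty.add(e.getKey());
            }
        }
        return dirty;
    }

    public static void main(String[] args) {
        Snapshot snap = new Snapshot(Map.of("name", "Company A", "city", "Helsinki"));
        Map<String, Object> current = Map.of("name", "Company B", "city", "Helsinki");
        // Only "name" differs from the snapshot, so only an UPDATE of that
        // column would be scheduled.
        System.out.println(dirtyProperties(current, snap));
    }
}
```

Bytecode enhancement avoids exactly this comparison loop: the setter itself records the dirty property, so no snapshot or flush-time scan is needed.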

Debugging OpenJDK

Sometimes debugging Java code is not enough and we need to step through the native part of Java. It took me some time to get my JDK into the proper state for this, so a short description will probably be useful for those starting their trip. I’ll use the brand new OpenJDK 9! First you have to obtain the main repository by typing: hg clone http://hg.openjdk.java.net/jdk9/jdk9 openjdk9 Then in the openjdk9 directory type: bash get_source.sh That will download all sources to your local filesystem. Theoretically compiling OpenJDK is not a big deal, but there are some (hmmm…) strange behaviours if you want to use it for debugging. First, of course, we need to call ./configure to prepare specific makefiles for our system. We can read in the documentation that we have to add the --enable-debug flag to prepare a fastdebug build. If you don’t have the proper libs or tools installed on your system, it’s the right moment to install the dependencies (the configure output will clearly point out anything that is missing). After configuring and invoking the make command, you may face this problem: warning _FORTIFY_SOURCE requires compiling with optimization (-O) Generating buffer classes Generating exceptions classes cc1plus: all warnings being treated as errors Cool! It happens only on some specific Linux installations (unfortunately including Fedora 20!). To solve it we have to remove the _FORTIFY_SOURCE flag. Just comment out (#) the lines containing _FORTIFY_SOURCE in the following files: hotspot/make/linux/makefiles/gcc.make and common/autoconf/flags.m4 Then you can move on with building the JDK, and after a dozen minutes you should see: Finished building OpenJDK for target 'default' Now it’s time to import the project into an IDE. Since we’re still waiting for a good C++ IDE from JetBrains, we have to use NetBeans or even Eclipse. After finishing the few steps needed to set up the debugging commands (for example, even for a simple java -version), start debugging, and… SIGSEGV received.
Let’s solve it by creating a .gdbinit file in the user home directory containing the following lines: handle SIGSEGV pass noprint nostop handle SIGUSR1 pass noprint nostop handle SIGUSR2 pass noprint nostop Start debugging one more time – now it’s better! Let’s continue by adding a line breakpoint. Start debugging, and… not working…! I’ve extended .gdbinit by adding: set logging on One more debugging attempt, and in gdb.txt I saw this line: No source file named hotspot/src/share/vm/memory/genCollectedHeap.cpp I was pretty sure that --enable-debug would add the -g flag to the gcc compiler, but it seems I was wrong. I spent a few hours googling and trying to solve it by changing gdb configurations, NetBeans config, etc. Still no effect. Fortunately, Michal Warecki pointed out to me that OpenJDK probably zips all the debug info during debug builds, so it isn’t visible to the debugger. After grepping the makefiles I found the promising --disable-zip-debug-info flag, so let’s include it in our configure invocation. Also, believe me, it’s hard to debug optimized C++ code (you can try, but you will encounter strange things happening, like the debugger stepping through lines in the wrong order: starting a method from line 4, going back to 2, then to 5 and to 3!). So we’ll choose the slowdebug option to avoid code optimization. The whole proper configure command is: bash ./configure --with-debug-level=slowdebug --with-target-bits=64 --disable-zip-debug-info Now we can invoke make and wait for the compilation to finish. Then you can check if everything works correctly by invoking ./java -version in the build/linux-x86_64-normal-server-slowdebug/jdk/bin directory. You should see: openjdk version "1.9.0-internal-debug" OpenJDK Runtime Environment (build 1.9.0-internal-debug-kuba_2014_08_20_14_02-b00) OpenJDK 64-Bit Server VM (build 1.9.0-internal-debug-kuba_2014_08_20_14_02-b00, mixed mode) Let’s try debugging. Add a line breakpoint, start debugging, and… finally it’s green!
Have fun! Reference: Debugging OpenJDK from our JCG partner Jakub Kubrynski at the Java(B)Log blog....
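When several JDKs are installed side by side, it is easy to launch a breakpointed program with the wrong binary. A tiny sanity-check class (a generic sketch, not part of the OpenJDK build) prints which JDK actually executes your code; run it with the freshly built build/linux-x86_64-normal-server-slowdebug/jdk/bin/java to be sure:

```java
public class JdkInfo {
    // Prints identifying properties of the JVM that runs this class.
    // Run it with the freshly built slowdebug binary to confirm that the
    // debugger and the program use the same JDK.
    public static void main(String[] args) {
        System.out.println("version: " + System.getProperty("java.version"));
        System.out.println("vm:      " + System.getProperty("java.vm.name"));
        System.out.println("home:    " + System.getProperty("java.home"));
    }
}
```

For the build described above, the version line should contain "internal-debug", matching the ./java -version output.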

Akka Cluster with Docker containers

This article will show you how to build docker images that contain a single akka cluster application. You will be able to run multiple seed nodes and multiple cluster nodes. The code can be found on Github and will be available as a Typesafe Activator. If you don’t know docker or akka Docker is the new shiny star in the devops world. It lets you easily deploy images to any OS running docker, while providing an isolated environment for the applications running inside the container image. Akka is a framework to build concurrent, resilient, distributed and scalable software systems. The cluster feature lets you distribute your Actors across multiple machines to achieve load balancing, fail-over and the ability to scale up and out. The big picture This is what the running application will look like, no matter where your docker containers end up running. The numbers at the top left describe the starting order of the containers. First you have to start your seed nodes, which will “glue” the cluster together. After the first node is started, all following seed nodes have to know the IP address of the initial seed node in order to build up a single cluster. The approach described in this article is very simple, but easily configurable, so you can use it with other provisioning technologies like chef, puppet or zookeeper. All following nodes that get started need at least one seed node IP in order to join the cluster. The application configuration We will deploy a small akka application which only logs cluster events. The entrypoint is fairly simple: object Main extends App { val nodeConfig = NodeConfig parse args // If a config could be parsed - start the system nodeConfig map { c => val system = ActorSystem(c.clusterName, c.config) // Register a monitor actor for demo purposes system.actorOf(Props[MonitorActor], "cluster-monitor") system.log info s"ActorSystem ${system.name} started successfully" } } The tricky part is the configuration.
First, the akka.remote.netty.tcp.hostname configuration needs to be set to the docker IP address. The port configuration is unimportant, as we have a unique IP address per container thanks to docker. You can read more about docker networking here. Second, the seed nodes should add themselves to the akka.cluster.seed-nodes list. And last, everything should be configurable through system properties and environment variables. Thanks to the Typesafe Config library this is achievable (even with some sweat and tears). Generate a small command-line parser with scopt and the following two parameters: a --seed flag which determines whether the node being started should act as a seed node, and an unbounded list of [ip]:[port] entries which represent the seed nodes. Split the configuration into three files: application.conf, which contains the common configuration; node.cluster.conf, which contains only the node-specific configuration; and node.seed.conf, which contains only the seed-node-specific configuration. A class NodeConfig orchestrates all settings and cli parameters in the right order and builds a Typesafe Config object. Take a closer look at the NodeConfig class. The core part is this: // seed nodes as generated string from cli (ConfigFactory parseString seedNodesString) // the hostname .withValue("clustering.ip", ipValue) // node.cluster.conf or node.seed.conf .withFallback(ConfigFactory parseResources configPath) // default ConfigFactory.load but unresolved .withFallback(config) // try to resolve all placeholders (clustering.ip and clustering.port) .resolve The part that resolves the IP address is a bit hacky, but should work in default docker environments. First the eth0 interface is searched, and then the first address for which isSiteLocalAddress returns true is used. IP addresses in the following ranges are site-local: 172.16.x.x to 172.31.x.x, 192.168.x.x and 10.x.x.x.
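The site-local lookup can be sketched in plain Java (a simplified re-implementation of the idea described above, not the actual code from the activator; the activator itself is Scala):

```java
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class HostIpResolver {

    // Prefer eth0, then return the first site-local address found on any
    // interface. Mirrors the lookup described in the article.
    static Optional<InetAddress> firstSiteLocal() throws SocketException {
        List<NetworkInterface> ifaces =
                Collections.list(NetworkInterface.getNetworkInterfaces());
        // Sort eth0 to the front so it is inspected first.
        ifaces.sort(Comparator.comparing((NetworkInterface i) -> !"eth0".equals(i.getName())));
        for (NetworkInterface iface : ifaces) {
            for (InetAddress addr : Collections.list(iface.getInetAddresses())) {
                if (addr.isSiteLocalAddress()) {
                    return Optional.of(addr);
                }
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) throws Exception {
        // isSiteLocalAddress covers exactly the ranges listed above
        // (10/8, 172.16/12, 192.168/16); literal IPs are parsed, not resolved.
        System.out.println(InetAddress.getByName("192.168.0.10").isSiteLocalAddress());
        System.out.println(InetAddress.getByName("8.8.8.8").isSiteLocalAddress());
    }
}
```

`java.net.InetAddress.isSiteLocalAddress()` does the range classification for us, so no hand-written prefix matching is needed.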
The main cluster configuration is done inside the clustering section of the application.conf: clustering { # ip = "" # will be set from the outside or automatically port = 2551 cluster.name = "application" } The IP address will be filled in by the algorithm described above if nothing else is set. You can easily override all settings with system properties, e.g. if you want to run a seed node and a cluster node inside your IDE without docker, start both like this: # the seed node -Dclustering.port=2551 -Dclustering.ip= --seed # the cluster node -Dclustering.port=2552 -Dclustering.ip= For sbt this looks like this: # the seed node sbt runSeed # the cluster node sbt runNode The build Next we build our docker image. The sbt-native-packager plugin recently added experimental docker support, so we only need to configure our build to be docker-ready. First add the plugin to your plugins.sbt: addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "0.7.4") Now we add a few required settings to our build.sbt. You should use sbt 0.13.5 or higher. // adds start script and jar mappings packageArchetype.java_application // the docker maintainer. You could scope this to "in Docker" maintainer := "Nepomuk Seiler" // Short package description packageSummary := s"Akka ${version.value} Server" And now we are set. Start sbt and run docker:publishLocal and a docker image will be created for you. The Dockerfile is in target/docker if you want to take a closer look at what’s created. Running the cluster Now it’s time to run our containers. The image name is name:version by default; for our activator it’s akka-docker:2.3.4. The seed IP addresses may vary; you can read them from the console output of your seed nodes. docker run -i -t -p 2551:2551 akka-docker:2.3.4 --seed docker run -i -t -p 2551:2551 akka-docker:2.3.4 --seed docker run -i -t -p 2551:2551 akka-docker:2.3.4 docker run -i -t -p 2551:2551 akka-docker:2.3.4 What about linking?
This blog entry describes a different approach to building an akka cluster with docker. I used some of the ideas, but the basic concept is built on top of linking the docker containers. This allows you to get the IP and port information of the running seed nodes. While this approach is suitable for single-host machines, it seems to get messier when working with multiple docker machines. The setup in this blog requires only one thing: a central way of assigning host IPs. If your seed nodes don’t change their IP addresses, you can basically configure almost everything already in your application.conf. Reference: Akka Cluster with Docker containers from our JCG partner Nepomuk Seiler at the mukis.de blog....

JPA Tutorial – Setting Up JPA in a Java SE Environment

JPA stands for Java Persistence API, which is basically a specification that describes a way to persist data into a persistent store, usually a database. We can think of it as something similar to ORM tools like Hibernate, except that it is an official part of the Java EE specification (and it’s also supported on Java SE). There are many reasons to learn an ORM tool like JPA. I will not go into the details of this because there are already many posts on the web which answer this question perfectly, like this one, or this one. However, we should also keep in mind that it is not a magic bullet which will solve our every problem. When I first started out with JPA, I had real difficulty setting it up because most of the articles on the web are written for a Java EE environment only, whereas I was trying to use it in a Java SE environment. I hope that this article will be helpful to those who wish to do the same in the future. In this example we will use Maven to set up our required dependencies. Since JPA is only a specification, we will also need an implementation. There are many good implementations of JPA available freely (like EclipseLink, Hibernate, etc.). For this article I have chosen to use Hibernate. As for the database, I will use MySQL. Let us first create a simple Maven project. I have created mine using the quickstart archetype from the command line. If you do not know how to do that, you can follow this tutorial. OK, so let us get the dependencies for JPA next.
Include the following lines in your pom.xml: <dependency> <groupId>javax.persistence</groupId> <artifactId>persistence-api</artifactId> <version>1.0.2</version> </dependency> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-entitymanager</artifactId> <version>4.3.6.Final</version> <exclusions> <exclusion> <groupId>org.hibernate.javax.persistence</groupId> <artifactId>hibernate-jpa-2.1-api</artifactId> </exclusion> </exclusions> </dependency> The first dependency specifies the standard JPA interface, and the second one specifies the implementation. Including JPA dependencies this way is desirable because it gives us the freedom to switch the vendor-specific implementation in the future without much trouble (see details here). However, we will not be able to use the latest version of the API this way, because version 1.0.2 is the last version of the API that was released as an independent JAR. At the time of writing this article, the latest version of the JPA specification is 2.1, which is not available independently (there are lots of requests for it though). If we want to use it now, our only options are to use either a vendor-specific JAR or an application server which provides the API along with its implementation. I have decided to use the API specification provided by Hibernate. In that case, including only the following dependency will suffice: <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-entitymanager</artifactId> <version>4.3.6.Final</version> </dependency> The next step is to include the dependency for MySQL. Include the following lines in your pom.xml: <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>5.1.31</version> </dependency> After including the rest of the dependencies (i.e. JUnit, Hamcrest, etc.)
the full pom.xml looks like below: <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"><modelVersion>4.0.0</modelVersion><groupId>com.keertimaan.javasamples</groupId> <artifactId>jpa-example</artifactId> <version>0.0.1-SNAPSHOT</version> <packaging>jar</packaging><name>jpa-example</name> <url>http://sayemdb.wordpress.com</url><properties> <java.version>1.8</java.version> <hibernate.version>4.3.6.Final</hibernate.version> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> </properties><dependencies> <!-- JPA --> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-entitymanager</artifactId> <version>${hibernate.version}</version> </dependency><!-- For connection pooling --> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-c3p0</artifactId> <version>${hibernate.version}</version> </dependency><!-- Database --> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>5.1.31</version> </dependency><!-- Test --> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.11</version> <scope>test</scope> <exclusions> <exclusion> <groupId>org.hamcrest</groupId> <artifactId>hamcrest-core</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.hamcrest</groupId> <artifactId>hamcrest-all</artifactId> <version>1.3</version> <scope>test</scope> </dependency> </dependencies><build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>2.5.1</version> <configuration> <source>${java.version}</source> <target>${java.version}</target> <compilerArgument>-Xlint:all</compilerArgument> <showWarnings>true</showWarnings> <showDeprecation>true</showDeprecation> </configuration> </plugin> </plugins> </build> </project> Now it’s time to configure our 
database. I will use the following schema in all of my future JPA examples, which I found in this excellent online book: Create an equivalent database following the above schema in your local MySQL installation. Our next step is to create the persistence.xml file, which will contain our database-specific information for JPA to use. By default JPA expects this file to be on the classpath under the META-INF folder. For our Maven project, I have created this file under the project_root/src/main/resources/META-INF folder: <persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd" version="2.1"><persistence-unit name="jpa-example" transaction-type="RESOURCE_LOCAL"> <provider>org.hibernate.jpa.HibernatePersistenceProvider</provider><properties> <property name="javax.persistence.jdbc.url" value="jdbc:mysql://localhost/jpa_example" /> <property name="javax.persistence.jdbc.user" value="root" /> <property name="javax.persistence.jdbc.password" value="my_root_password" /> <property name="javax.persistence.jdbc.driver" value="com.mysql.jdbc.Driver" /><property name="hibernate.show_sql" value="true" /> <property name="hibernate.format_sql" value="true" /> <property name="hibernate.dialect" value="org.hibernate.dialect.MySQL5InnoDBDialect" /> <property name="hibernate.hbm2ddl.auto" value="validate" /><!-- Configuring Connection Pool --> <property name="hibernate.c3p0.min_size" value="5" /> <property name="hibernate.c3p0.max_size" value="20" /> <property name="hibernate.c3p0.timeout" value="500" /> <property name="hibernate.c3p0.max_statements" value="50" /> <property name="hibernate.c3p0.idle_test_period" value="2000" /> </properties> </persistence-unit> </persistence> The above file requires some explanation if you are an absolute beginner in JPA.
In my next article I will try to explain it as much as possible, but for running this example you will only need to change the first three property values to match your environment (namely the database name, username and password). Also keep a note of the value of the name attribute of the persistence-unit element. This value will be used to instantiate our EntityManagerFactory instance later in the code. Ok, let us now create an entity to test our configuration. Create a class called Address with the following contents: import javax.persistence.Entity; import javax.persistence.GeneratedValue; import javax.persistence.Id; import javax.persistence.Table;@Entity @Table(name = "address") public class Address { @Id @GeneratedValue private Integer id;private String street; private String city; private String province; private String country; private String postcode;/** * @return the id */ public Integer getId() { return id; }/** * @param id the id to set */ public Address setId(Integer id) { this.id = id; return this; }/** * @return the street */ public String getStreet() { return street; }/** * @param street the street to set */ public Address setStreet(String street) { this.street = street; return this; }/** * @return the city */ public String getCity() { return city; }/** * @param city the city to set */ public Address setCity(String city) { this.city = city; return this; }/** * @return the province */ public String getProvince() { return province; }/** * @param province the province to set */ public Address setProvince(String province) { this.province = province; return this; }/** * @return the country */ public String getCountry() { return country; }/** * @param country the country to set */ public Address setCountry(String country) { this.country = country; return this; }/** * @return the postcode */ public String getPostcode() { return postcode; }/** * @param postcode the postcode to set */ public Address setPostcode(String postcode) { this.postcode = postcode; 
    return this;
  }
}

This class has been properly mapped to the address table, and its instances are fully ready to be persisted in the database. Now let us create a helper class called PersistenceManager with the following contents:

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public enum PersistenceManager {
  INSTANCE;

  private EntityManagerFactory emFactory;

  private PersistenceManager() {
    // "jpa-example" was the value of the name attribute of the
    // persistence-unit element.
    emFactory = Persistence.createEntityManagerFactory("jpa-example");
  }

  public EntityManager getEntityManager() {
    return emFactory.createEntityManager();
  }

  public void close() {
    emFactory.close();
  }
}

Now let us write some sample persistence code in our main method to test everything out:

import javax.persistence.EntityManager;

public class Main {
  public static void main(String[] args) {
    Address address = new Address();
    address.setCity("Dhaka")
           .setCountry("Bangladesh")
           .setPostcode("1000")
           .setStreet("Poribagh");

    EntityManager em = PersistenceManager.INSTANCE.getEntityManager();
    em.getTransaction().begin();
    em.persist(address);
    em.getTransaction().commit();

    em.close();
    PersistenceManager.INSTANCE.close();
  }
}

If you check your database, you will see that a new record has been inserted into your address table. This article explains how to set up JPA without using any other frameworks like Spring. However, it is a very good idea to use Spring to set up JPA, because in that case we do not need to worry about managing entity managers, transactions etc. ourselves. Besides setting up JPA, Spring is also very useful for many other purposes. That's it for today. In the next article I will try to explain the persistence.xml file and the corresponding configuration values as much as possible.
Stay tuned! The full code can be found on GitHub.

Reference: JPA Tutorial – Setting Up JPA in a Java SE Environment from our JCG partner Sayem Ahmed at the Random Thoughts blog.

Node, Grunt, Bower and Yeoman – A Modern web dev’s Toolkit

This article aims at introducing you to some of the currently most popular tools for developing modern web applications with JavaScript. These tools are not new at all and have been around for a couple of years now. Still, many devs don't use or even know about them (as might be the case for you), which is why this article tries to give you a quick, concise intro to get you started.

Node and NPM

Node.js brings JavaScript to the server and the desktop. While initially JavaScript was mainly used as a browser-based language, with Node you can now also create your server-side backend or even a desktop application with node-webkit (for the crazy ones among you).

Node.js® is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.
nodejs.org

Creating a web server is as simple as these couple of lines of code:

var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(1337);

console.log('Server running on port 1337');

To run it, execute:

$ node start
Server running on port 1337

One of the great things about Node is its enormous community, which creates and publishes so-called node modules on NPM, Node's package manager. Currently there are about 90,000 modules, and there have been around 390,000 downloads in the last month.

Besides creating server-side applications with Node, it has also become the VM for JavaScript development tools like minifiers, code linters etc. Both Grunt and Yeoman (described in this article) are built upon Node's infrastructure. More on nodejs.org and npmjs.org.

Installing Node

So, to get started, you have to first install the Node runtime. The best way to do so is to download the desired package from the official site.
This will also automatically install NPM on your machine. Once that's done, typing…

$ node -v

…into your terminal should output the installed version of Node and thus confirm you're ready to go.

Installing node packages

Installing a node package is as simple as executing:

$ npm install grunt

This installs the grunt node package into a folder called node_modules. The best-practice approach, though, is to create a package.json file. Since the suggested approach is not to commit the content of your node_modules folder to your VCS, but rather to reinstall the modules automatically during the build process, you need a place to keep track of the installed packages and their versions: package.json. To create a new package.json file, simply execute npm init inside a clean folder. You'll have to answer a few questions, but ultimately you will get a nice new package config file. Whenever you install new packages, you then use the --save or --save-dev option to persist the package into the package.json file. For instance, executing…

$ npm install --save-dev grunt

…will automatically add grunt to the devDependencies section of the package config file:

{
  ...
  "devDependencies": {
    "grunt": "^0.4.5"
  }
}

Similarly, if you use --save, it'll be added to the dependencies section. The difference is mainly that dependencies are actively used by your application and should be deployed together with it, while devDependencies are tools you use during the development of the application, which normally do not need to be deployed with it. Examples are code minifier scripts, test runners etc. To uninstall a package, use…

$ npm uninstall --save-dev grunt

…which uninstalls grunt and removes it from package.json.

Restoring packages

As I mentioned, you normally don't commit the node_modules folder to your VCS. Thus, when you as a developer, or the build server, retrieve the source code from your VCS, the packages somehow need to be restored.
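The file that makes this restore possible is package.json itself. A complete minimal example, roughly what npm init produces, might look like this (the name and version here are placeholders, not taken from the article):

```json
{
  "name": "my-app",
  "version": "0.1.0",
  "devDependencies": {
    "grunt": "^0.4.5"
  }
}
```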
This is where the package.json file comes into play again. By having it in the root directory, executing…

$ npm install

…instructs NPM to read the dependencies in the config file and to restore them using the specified versions.

Versioning

NPM packages use Semantic Versioning. Given a version number MAJOR.MINOR.PATCH, increment the:

- MAJOR version when you make incompatible API changes,
- MINOR version when you add functionality in a backwards-compatible manner, and
- PATCH version when you make backwards-compatible bug fixes.

http://semver.org/

Each package inside package.json is listed with its version and upgrade behavior. You can have the following schemes:

- 1.3.5: tells npm to use exactly this given version of the package (most restrictive).
- ~1.3.5 or 1.3.x: tells npm to only upgrade the given package for increments of the patch version (normally just bug fixes). NPM defines it as ~1.3.5 := >=1.3.5-0 <1.4.0-0.
- ^1.3.5: tells npm it can upgrade to any version smaller than the next major release: <2.0.0. This is the new default behavior when you install node packages (before, it was ~). NPM defines it as ^1.3.5 := >=1.3.5-0 <2.0.0-0.
- latest or *: tells npm to always update to the latest version (not recommended).

Bower

Bower is to the web browser what NPM is to Node.js. It is a package manager for your front-end development libraries like jQuery, Bootstrap and so on.

You install Bower as a global package through NPM (obviously):

$ npm install -g bower

Then, similarly to what you did with NPM, you execute bower init in your terminal to create a new bower.json configuration file (the equivalent of package.json for NPM).

Installing packages is identical to NPM:

$ bower install --save jquery

You can also download a specific version by appending it, e.g. jquery#1.9.1. Note, the --save (or -S) option adds the dependency to your bower.json config file. Installed packages will be placed in the bower_components directory.
It is suggested not to commit that one to your VCS (just as with the node_modules directory). To uninstall a package, simply use:

$ bower uninstall --save jquery

What's particularly interesting is that Bower allows you to install packages from any git repository or even a plain URL:

$ bower install git://github.com/user/package.git

or

$ bower install http://example.com/script.js

If you require some more advanced configuration, like changing the name of the dependencies directory or its location, you may want to use a .bowerrc configuration file placed at the root of your project directory structure. More about the available configuration options can be found on the official site. There's another nice article I found on Medium that gives a quick introduction to Bower, which you might want to take a look at as well.

Yeoman

Yeoman has become the de-facto standard scaffolding toolkit for creating modern JavaScript applications.

Yeoman is built around generators, which are either developed by the Yeoman team (official generators) or by the open source community. Yeoman itself basically just provides the infrastructure for building and running those generators.

Yeoman helps you kickstart new projects, prescribing best practices and tools to help you stay productive.
From the official site

What's nice about such an approach is:

- that you can quickly get up to speed. Creating a project setup with proper tools and dev support can cost you lots of time and requires expert knowledge.
- that you don't necessarily have to know all the best-practice tools that are currently available on the market. Yeoman assembles them for you, so that you can get started immediately. Then, once you gain more expertise, you can adjust Yeoman's configuration to make it fit your project needs even better.
- that it is a great way for you to learn lots and lots of new tools.

Yeoman as well as its generators are distributed as node modules. Simply install it globally:

$ npm install -g yo

Then find your generator (e.g.
for angular) and install it using the following command:

$ npm install -g generator-angular

Finally, execute the generator within your project directory to create a new app:

$ yo angular [app-name]

This will create the initial scaffold from which you can then start building your application. But Yeoman goes even further: based on the generator you use, you may also generate single components, like Angular controllers, directives etc., while you develop.

$ yo angular:controller user

That's all regarding Yeoman's usage. More advanced topics are about creating your own custom generators. Simply study the docs, as they're quite detailed.

Grunt

Grunt is automation. It is a task-based command-line build tool for JavaScript projects. The official headline: "The JavaScript Task Runner".

To get started, simply follow the online guide on the official site. There's also a great book, Getting Started with Grunt – The JavaScript Task Runner, published by PacktPub, which is ideal for beginners.

Installation

Grunt runs on top of the Node.js platform and is distributed through the NPM repository. It comes as two different tools:

- grunt-cli, the Grunt command-line interface
- the grunt module

The reason for having two components is to make sure we can run different Grunt versions side by side (e.g. legacy versions in older projects). Hence, grunt-cli is installed globally, while grunt is installed on a per-project basis.

$ npm install -g grunt-cli

Then enter the project where you wish to use Grunt and execute:

$ npm install grunt

Gruntfile.js

The Gruntfile.js is the place where you configure the Grunt tasks for your project. It starts as simple as this:

module.exports = function(grunt) {
  // Do grunt-related things in here
};

The grunt object is Grunt's API: http://gruntjs.com/api/grunt. It allows you to interact with Grunt, to register your tasks and adjust its configuration.

Grunt modules

Grunt modules are distributed through Node's NPM directory.
Normally they are prefixed with grunt-, and official Grunt plugins are prefixed with grunt-contrib. Example: grunt-contrib-uglify. Hence, Grunt modules are node modules, and thus you install them just as I've shown before:

$ npm install --save-dev grunt-contrib-uglify

Anatomy of Grunt tasks

You normally start by configuring the build tasks, like this example of a stringCheck task taken from the Grunt book I mentioned before:

module.exports = function(grunt){
  ...
  grunt.initConfig({
    stringCheck: {
      file: './src/somefile.js',
      string: 'console.log('
    }
  });
}

As you can see, a task is simply a function that you register with Grunt:

module.exports = function(grunt){
  grunt.registerTask('stringCheck', function() {
    // fail if configuration is not provided
    grunt.config.requires('stringCheck.file');
    grunt.config.requires('stringCheck.string');

    // retrieve filename and load it
    var file = grunt.config('stringCheck.file');
    var contents = grunt.file.read(file);

    // retrieve string to search for
    var string = grunt.config('stringCheck.string');

    if(contents.indexOf(string) >= 0)
      grunt.fail.warn('"' + string + '" found in "' + file + '"');
  });
}

Note, tasks downloaded from NPM have to be loaded first in order to be used in your Gruntfile.js. This is done by using loadNpmTasks on the grunt object.

module.exports = function(grunt){
  grunt.loadNpmTasks('grunt-contrib-concat');
  ...
}

To avoid having to do this for every single task you use (and that can be quite a lot), you may want to use the load-grunt-tasks plugin and execute require('load-grunt-tasks')(grunt) at the beginning of your Gruntfile.js. This will autoload all Grunt modules, ready to be used.

Multitasks

Grunt also allows you to group a task's execution as follows:

module.exports = function(grunt){
  ...
  grunt.initConfig({
    stringCheck: {
      target1: {
        file: './src/somefile.js',
        string: 'console.log('
      },
      target2: {
        file: './src/somefile.js',
        string: 'eval('
      }
    }
  });
}

You can then execute them with grunt stringCheck:target1 and grunt stringCheck:target2. target1 and target2 can (and should) obviously be named differently.

Globbing

File globbing or wildcard matching is a way to capture a large group of files with a single expression, rather than listing all of them individually, which is often not even possible. From the official docs:

- * matches any number of characters, but not /
- ? matches a single character, but not /
- ** matches any number of characters, including /, as long as it's the only thing in a path part
- {} allows for a comma-separated list of "or" expressions
- ! at the beginning of a pattern will negate the match

All most people need to know is that foo/*.js will match all files ending with .js in the foo/ subdirectory, but foo/**/*.js will match all files ending with .js in the foo/ subdirectory and all of its subdirectories. Since most tasks ultimately interact with the file system, Grunt already predisposes a structure to make task devs' lives easier. If a globbing expression is specified, Grunt tries to match it against the file system and places all matches in the this.files array within your Grunt task function. Hence, you will see a lot of tasks having a syntax like:

target1: {
  src: ['src/a.js', 'src/b.js']
}

or

target1: {
  src: 'src/{a,b}.js',
  dest: 'dest/ab.js'
}

It is also possible to define multiple source sets with corresponding destinations. For this purpose the files array is used.
target1: {
  files: [
    { src: 'src/{a,b,c}.js', dest: 'dest/abc.js' },
    { src: 'src/{x,y,z}.js', dest: 'dest/xyz.js' }
  ]
}

The following, more compact, object notation is equivalent:

target1: {
  files: {
    'dest/abc.js': 'src/{a,b,c}.js',
    'dest/xyz.js': 'src/{x,y,z}.js'
  }
}

Another common task is to copy a set of files to a given directory (for example with preprocessors like SASS or CoffeeScript compilers). Instead of providing the single src and dest instructions, we can use the following syntax:

target2: {
  files: [
    {
      expand: true,
      cwd: 'lib/',
      src: '**/*.js',
      dest: 'build/',
      ext: '.min.js'
    }
  ]
}

The expand property tells Grunt to generate a corresponding destination for each matched file. cwd stands for the current working directory, src and dest are self-explanatory, and ext is the extension to be used for the destination files. More options can be found in the official docs.

Running tasks

Ultimately your goal is to execute the Grunt tasks you defined. If you remember, you previously installed the grunt-cli tool globally, which you can now use to run a task:

$ grunt task1 task2

If you have a multitarget task, then use : to specify the target:

$ grunt task:target1

If you run $ grunt instead, the default task will be executed, which you can configure as follows:

module.exports = function(grunt) {
  grunt.registerTask('build', function() {
    console.log('building...');
  });

  grunt.registerTask('test', function() {
    console.log('testing...');
  });

  grunt.registerTask('default', ['build', 'test']);
};

Having this Gruntfile.js configuration executes build and test when you type grunt into your console.

Gulp

This intro wouldn't be complete if it didn't mention Gulp. Gulp is the JavaScript task runner newcomer, built on top of Node.js streams. It aims at making build scripts easier to use by "preferring code over configuration" (unlike Grunt, which is based on configuration).

gulp's use of streams and code-over-configuration makes for a simpler and more intuitive build.
gulpjs.com

I haven't studied it in detail yet, but you should definitely keep an eye on it, as it is growing fast and gaining in popularity. For now I won't include more details, but I will definitely update this article once I've taken a closer look at it.

Reference: Node, Grunt, Bower and Yeoman – A Modern web dev's Toolkit from our JCG partner Juri Strumpflohner at the Juri Strumpflohner's TechBlog blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.