


Notes to students doing research about Agile

The dissertation/thesis writing season is approaching, so I expect the recent set of questions from a student is the first of many. Actually they occur all year round, but March to September is the busy period. So, to pre-empt any students bearing questions about Agile – and so I can just repeat my stock answers – here are some ideas you might want to consider first.

Let's start with the request itself. I, and many other people who are silly enough to write and blog about Agile software development, get plenty of e-mail from students doing research on Agile. Fair enough, I like to help, but if you don't ask nicely I'm unlikely to respond. I won't be answering stock surveys that are sent from a mail program. I'm much more likely to answer a personal request which indicates you actually know my name and have even read some of what I have written.

I'm skeptical of most quantitative research (e.g. surveys) on Agile, simply because the questions I normally see are rubbish. Qualitative research is probably better, but:

It is not as satisfying as quantitative: you can't conclude "58% of teams use Agile"; you come up with some wishy-washy case studies.
You actually have to go and talk to people; you can't just compile a mailing list and feed it into Survey Monkey. That means getting out and finding people to talk to.

If you must go down the quantitative route please do your homework: do some qualitative work before you begin so you can ask good questions.

Now, the number one mistake made by students who approach me is assuming there is such a thing as Agile. That is to say, assuming: one is either Agile or not Agile; secondly, one knows that one is, or is not, Agile; and thirdly, one is prepared to openly and honestly state it. Agile is not binary. Nor, for that matter, is Waterfall. While we are about it, Agile and Waterfall are not necessarily binary alternatives.
Have a look at my Agile Spectrum article: the thing that is called "Agile" and the thing that is called "Waterfall" are simply points on a spectrum – probably the two extremes – and most teams are somewhere in between. Nor is there any way (at the moment) to objectively determine where a team is on the spectrum. Well, actually there are plenty of people who will do an assessment of one sort or another, but you have to look at what criteria they are using.

My suggestion is to get away from Agile as a whole and look at Agile practices. Likewise, get away from the methods – Scrum, XP, Kanban, etc. If you must look at different methods, look at them in the context of practices. Agile methods are sets of practices, sometimes with values and principles thrown in. Practices are the easiest of these things to observe and measure. They are objective; the other bits are subjective.

If you really want to dig deep – and make work for yourselves – you might want to investigate team values: do teams actually value the things Agile claims to value? This is going to take some doing. You probably want to look at what actually happens: shadow a manager for a week and see if their decisions accord with the values. You might do it quantitatively by composing one of those surveys that asks people to choose between answers, e.g. "Under pressure to ship more often, which are you most likely to do: make your stories smaller, offer financial incentives to the team, reduce unit testing, increase unit testing?"

Now some suggestions for Agile research questions:

Validate my spectrum: it's a hypothesis; come up with some criteria and do a survey to see if you can determine the spread.
Practices: what are the practices that teams actually use? Specifically, what are the practices they say they use, and which ones do they actually use? Which practices are the most popular? What difference do the practices make?
How does Agile differ between product development companies and corporate IT?
Add "solution providers" (e.g. Accenture, EDS, CapGemini and thousands of smaller companies) for extra marks.
Correlation of practices to approach: do teams which call themselves Agile actually use practices associated with Agile? And likewise for Waterfall. A couple of years ago a student at Loughborough University, Adam Williams, rose to my challenge. He tried this; his survey was small, 20-odd companies, and he found little correlation at all. In fact he found some teams who described themselves as Waterfall but used several Agile practices. I would love to see this study expanded.
What do Scrum Masters really do and how are they different from Project Managers?
What are the recruitment practices of Agile teams and companies?
Forget Agile; look at Waterfall. Find teams who actively claim to do Waterfall and find out what happens. How do the unsuccessful ones differ from the successful? Are there any successful Waterfall teams?
Examine some of the more controversial practices: TDD, pair programming, refactoring. Do a meta-analysis of the studies to date or do your own studies.
Extend Keith Braithwaite's work on the correlation between TDD and cyclomatic complexity.
TDD, benefits and the power law – I should blog about this but for now just take my word for it.

Finally, the big one. Real Agility is not about methods, practices or names; it is the state of being Agile. (Something that was meant to be in the title of my first book but was messed up.) What is the state of Agile – in theory and in reality? What advantage does the state of Agile confer?

So if any student out there wants to rise to the challenge on these questions please get in contact; I'd love to help and to see the final research.

Reference: Notes to students doing research about Agile from our JCG partner Allan Kelly at the Agile, Lean, Patterns blog....

Custom Git Commands in 3 Steps

I'm lazy and so I seek ways to reduce repetitious activities. For instance, I've spent a lot of time in a terminal typing Git commands. A few of the more common commands I've aliased. If I want to see a list of branches, I used to type:

Listing Git branches
$> git branch -v -a

But after adding an alias to my bash profile, I simply type gb. I've done this for a few commands like git commit, which is gc, with gca for the -a flag. Occasionally, aliases aren't enough, and when it comes to Git you can create custom commands that can be referenced like so:

Your custom Git command
$> git my-command

To create a custom command, you first need to create a file named git-my-command; second, you must place the resultant file on your path. Finally, you need to make the file executable. You can write this file in Bash, Ruby, or Python – it doesn't matter. For example, I tend to find myself stashing some uncommitted changes and then later popping those stashed changes onto a new branch. I end up executing the following steps:

A simple Git flow
$> git stash
$> git stash branch some_branch

The key step I want to simplify is the last one – I'm lazy and I'd rather not type 4 words. I'd rather type git unstash some_branch because it saves me one word. Following the three simple steps I mentioned above, I'll first create a file in my ~/bin directory called git-unstash. The ~/bin directory is in my path because my .bashrc has this line: PATH=$PATH:$HOME/bin. My git-unstash script will be simple – it takes an argument (the branch name, i.e. $1); therefore, the script does a simple check to ensure the branch name is provided.

Custom Git command: unstash
#!/bin/bash
((!$#)) && echo No branch name, command ignored! && exit 1
git stash branch $1

After I'm done writing it, I'll do a quick chmod +x and all three steps are accomplished.
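Collected together, the three steps can be sketched as a single shell session. This mirrors the setup above; the ~/bin location and PATH line are the ones the post assumes, so adjust if your environment differs:

```shell
# Step 1: create the git-unstash file in a directory that is on the PATH.
mkdir -p ~/bin
cat > ~/bin/git-unstash <<'EOF'
#!/bin/bash
# Fail fast if no branch name was given.
((!$#)) && echo "No branch name, command ignored!" && exit 1
git stash branch "$1"
EOF

# Step 2: make sure ~/bin is on the PATH (normally done once in .bashrc).
export PATH="$PATH:$HOME/bin"

# Step 3: make the file executable so git can dispatch to it.
chmod +x ~/bin/git-unstash
```

After this, `git unstash some_branch` behaves exactly like `git stash branch some_branch`.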
Now my new flow is this:

A simple Git flow
$> git stash
$> git unstash some_branch

Custom Git commands are that simple to invent – first, create a file named git-my-command. Next, place it on your path; and, finally, make it executable. Be lazy and carry on, baby!

Reference: Custom Git Commands in 3 Steps from our JCG partner Andrew Glover at The Disco Blog....

Careful With Native SQL in Hibernate

I really like Hibernate, but I also don't know a tool that would be nearly as powerful and deceptive at the same time. I could write a book on surprises in production and cargo cult programming related to Hibernate alone. It's more of an issue with the users than with the tool, but let's not get too ranty. So, here's a recent example.

Problem

We need a background job that lists all files in a directory and inserts an entry for each of them into a table.

Naive Solution

The job used to be written in Bash and there is some direct SQL reading from the table. So, blinders on and let's write some direct SQL!

for (String fileName : folder.list()) {
    SQLQuery sql = session.getDelegate().createSQLQuery(
            "insert into dir_contents values (?)");
    sql.setString(0, fileName);
    sql.executeUpdate();
}

Does it work? Sure it does. Now, what happens if there are 10,000 files in the folder? What if you also have a not so elegant domain model, with way too many entity classes, thousands of instances and two levels of cache all in one context? All of a sudden this trivial job takes 10 minutes to execute, all that time keeping 2 or 3 CPUs busy at 100%. What, for just a bunch of inserts?

Easy Fix

The problem is that it's Hibernate. It's not just a dumb JDBC wrapper; it has a lot more going on. It's trying to keep caches and session state up to date. If you run a bare SQL update, it has no idea what table(s) you are updating, what it depends on and how it affects everything, so just in case it pretty much flushes everything. If you do this 10,000 times in such a crowded environment, it adds up. Here's one way to fix it – rather than running 10,000 updates with flushes, execute everything in one block and flush once.
session.doWork(new Work() {
    public void execute(Connection connection) throws SQLException {
        PreparedStatement ps = connection
                .prepareStatement("insert into dir_contents values (?)");
        for (String fileName : folder.list()) {
            ps.setString(1, fileName);
            ps.executeUpdate();
        }
    }
});

Other Solutions

Surprise, surprise:

Do use Hibernate. Create a real entity to represent DirContents and just use it like everything else. Then Hibernate knows what caches to flush when, how to batch updates and so on.
Don't use Hibernate. Use plain old JDBC, MyBatis, or whatever else suits your stack or is there already.

Takeaway

Native SQL has its place, even if this example is not the best use case. Anyway, the point is: if you are using native SQL with Hibernate, mind the session state and caches.

Reference: Careful With Native SQL in Hibernate from our JCG partner Konrad Garus at the Squirrel's blog....

Error Tracking Reports – Part 3 – Strategy and Package Private

This is the third blog in a series that's loosely looking at tracking application errors. In this series I'm writing a lightweight, but industrial strength, application that periodically scans application log files, looking for errors and, if any are found, generates and publishes a report. If you've read the first blog in the series you may remember that I initially said that I needed a Report class and that "if you look at the code, you won't find a class named Report, it was renamed Results and refactored to create a Formatter interface, the TextFormatter and HtmlFormatter classes together with the Publisher interface and EmailPublisher class". This blog covers the design process, highlighting the reasoning behind the refactoring and how I arrived at the final implementation.

If you read on, you may think that the design logic given below is somewhat contrived. That's because it is. The actual process of getting from the Report class to the Results class, the Formatter and Publisher interfaces together with their implementations probably only took a few seconds to dream up; however, writing it all down took some time. The design story goes like this…

If you have a class named Report then how do you define its responsibility? You could say something like this: "The Report class is responsible for generating an error report." That seems to fit the Single Responsibility Principle, so we should be okay… or are we? Saying that a Report is responsible for generating a report is rather tautological. It's like saying that a table is responsible for being a table; it tells us nothing. We need to break this down further. What does "generating a report" mean? What are the steps involved? Thinking about it, to generate a report we need to:

marshall the error data.
format the error data into a readable document.
publish the report to a known destination.

If you include all this in the Report class's responsibility definition you get something like this: "The Report class is responsible for marshalling the error data and formatting the data into a readable document and publishing the report to a known destination." Obviously that breaks the Single Responsibility Principle because the Report class has three responsibilities instead of one; you can tell by the use of the word 'and'. This really means we have three classes: one to handle the results, one to format the report and one to publish the report, and these three loosely coupled classes must collaborate to get that report delivered.

If you look back at the original requirements, points 6 and 7 said:

6. When all the files have been checked, format the report ready for publishing.
7. Publish the report using email or some other technique.

Requirement 6 is pretty straightforward and concrete: we know that we've got to format the report. In a real project, you'd either have to come up with the format yourself or ask the customer what it was they wanted to see in their report. Requirement 7 is somewhat more problematic. The first part is okay: it says "publish the report using email" and that's no problem with Spring. The second is very badly written: which other technique? Is this required for this first release? If this was a real-world project, one that you're doing for a living, then this is where you need to ask a few questions – very loudly if necessary. That's because an unquantifiable requirement will have an impact on timescales, which could also make you look bad. Questioning badly defined requirements or stories is a key skill when it comes to being a good developer. If a requirement is wrong or vague, no one's going to thank you if you just make things up and interpret it your own way.
How you phrase your question is another matter… it's usually a good idea to be 'professional' about it and say something like: "excuse me, have you got five minutes to explain this story to me, I don't understand it". There are only a handful of answers you will get, and they are usually:

"Don't bother me now, come back later…"
"Oh yes, that's a mistake in the requirements – thanks for spotting it, I'll sort it out."
"The end user was really vague here, I'll get in touch with them and clarify what they meant."
"I've no idea – take a guess…"
"This requirement means that you need to do X, Y, Z…"

…and remember to make a note of your outstanding requirements questions and chase them up: someone else's inactivity could threaten your deadlines. In this particular case, the clarification would be that I'm going to add additional publishing methods in later blogs and that I want the code designed to be extensible, which in plain English means using interfaces…

The diagram above shows that the initial idea of a Report class has been split into three parts: Results, Formatter and Publisher. Anyone familiar with Design Patterns will notice that I've used the Strategy Pattern to inject Formatter and Publisher implementations into the Results class. This allows me to tell the Results class to generate() a report without the Results class knowing anything about the report, its construction, or where it's going to.

@Service
public class Results {

  private static final Logger logger = LoggerFactory.getLogger(Results.class);

  private final Map<String, List<ErrorResult>> results = new HashMap<String, List<ErrorResult>>();

  /**
   * Add the next file found in the folder.
   *
   * @param filePath
   *            the path + name of the file
   */
  public void addFile(String filePath) {
    Validate.notNull(filePath);
    Validate.notBlank(filePath, "Invalid file/path");
    logger.debug("Adding file {}", filePath);
    List<ErrorResult> list = new ArrayList<ErrorResult>();
    results.put(filePath, list);
  }

  /**
   * Add some error details to the report.
   *
   * @param path
   *            the file that contains the error
   * @param lineNumber
   *            The line number of the error in the file
   * @param lines
   *            The group of lines that contain the error
   */
  public void addResult(String path, int lineNumber, List<String> lines) {
    Validate.notBlank(path, "Invalid file/path");
    Validate.notEmpty(lines);
    Validate.isTrue(lineNumber > 0, "line numbers must be positive");
    List<ErrorResult> list = results.get(path);
    if (isNull(list)) {
      addFile(path);
      list = results.get(path);
    }
    ErrorResult errorResult = new ErrorResult(lineNumber, lines);
    list.add(errorResult);
    logger.debug("Adding Result: {}", errorResult);
  }

  private boolean isNull(Object obj) {
    return obj == null;
  }

  public void clear() {
    results.clear();
  }

  Map<String, List<ErrorResult>> getRawResults() {
    return Collections.unmodifiableMap(results);
  }

  /**
   * Generate a report
   */
  public <T> void generate(Formatter formatter, Publisher publisher) {
    T report = formatter.format(this);
    if (!publisher.publish(report)) {
      logger.error("Failed to publish report");
    }
  }

  public class ErrorResult {

    private final int lineNumber;
    private final List<String> lines;

    ErrorResult(int lineNumber, List<String> lines) {
      this.lineNumber = lineNumber;
      this.lines = lines;
    }

    public int getLineNumber() {
      return lineNumber;
    }

    public List<String> getLines() {
      return lines;
    }

    @Override
    public String toString() {
      return "LineNumber: " + lineNumber + "\nLines:\n" + lines;
    }
  }
}

Taking the Results code first, you can see that there are four public methods; three that are responsible for marshalling the result data and one that generates the report:

addFile(…)
addResult(…)
clear(…)
generate(…)

The first three methods above manage the Results class's internal Map<String, List<ErrorResult>> results hash map. The keys in this map are the names of any log files that the FileLocator class finds, whilst the values are Lists of ErrorResult beans. The ErrorResult bean is a simple inner bean class that's used to group together the details of any errors found.

addFile() is a simple method that's used to register a file with the Results class. It generates an entry in the results map and creates an empty list. If this remains empty, then we can say that this file is error free. Calling this method is optional.

addResult() is the method that adds a new error result to the map. After validating the input arguments using org.apache.commons.lang3.Validate it tests whether this file is already in the results map. If it isn't, it creates a new entry before finally creating a new ErrorResult bean and adding it to the appropriate List in the Map.

The clear() method is very straightforward: it will clear down the current contents of the results map.

The remaining public method, generate(…), is responsible for generating the final error report. It's our strategy pattern implementation, taking two arguments: a Formatter implementation and a Publisher implementation. The code is very straightforward as there are only three lines to consider. The first line calls the Formatter implementation to format the report, the second publishes the report and the third line logs an error if the report generation fails. Note that this is a Generic Method (as shown by the <T> attached to the method signature).
In this case, the only "Gotcha" to watch out for is that this 'T' has to be the same type for both the Formatter implementation and the Publisher implementation. If it isn't, the whole thing will crash.

public interface Formatter {

  public <T> T format(Results report);
}

Formatter is an interface with a single method: public <T> T format(Results report). This method takes the Results class as an argument and returns the formatted report as any type you like.

@Service
public class TextFormatter implements Formatter {

  private static final String RULE = "\n==================================================================================================================\n";

  @SuppressWarnings("unchecked")
  @Override
  public <T> T format(Results results) {
    StringBuilder sb = new StringBuilder(dateFormat());
    sb.append(RULE);
    Set<Entry<String, List<ErrorResult>>> entries = results.getRawResults().entrySet();
    for (Entry<String, List<ErrorResult>> entry : entries) {
      appendFileName(sb, entry.getKey());
      appendErrors(sb, entry.getValue());
    }
    return (T) sb.toString();
  }

  private String dateFormat() {
    SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss");
    return df.format(Calendar.getInstance().getTime());
  }

  private void appendFileName(StringBuilder sb, String fileName) {
    sb.append("File:  ");
    sb.append(fileName);
    sb.append("\n");
  }

  private void appendErrors(StringBuilder sb, List<ErrorResult> errorResults) {
    for (ErrorResult errorResult : errorResults) {
      appendErrorResult(sb, errorResult);
    }
  }

  private void appendErrorResult(StringBuilder sb, ErrorResult errorResult) {
    addLineNumber(sb, errorResult.getLineNumber());
    addDetails(sb, errorResult.getLines());
    sb.append(RULE);
  }

  private void addLineNumber(StringBuilder sb, int lineNumber) {
    sb.append("Error found at line: ");
    sb.append(lineNumber);
    sb.append("\n");
  }

  private void addDetails(StringBuilder sb, List<String> lines) {
    for (String line : lines) {
      sb.append(line);
      // sb.append("\n");
    }
  }
}

This is really boring code. All it does is create a report using a StringBuilder, carefully adding text until the report is complete. There's only one point of interest and that's in the third line of code in the format(…) method:

Set<Entry<String, List<ErrorResult>>> entries = results.getRawResults().entrySet();

This is a textbook case of what Java's rarely used package visibility is all about. The Results class and the TextFormatter class have to collaborate to generate the report. To do that, the TextFormatter code needs access to the Results class's data; however, that data is part of the Results class's internal workings and should not be publicly available. Therefore, it makes sense to make that data accessible via a package private method, which means that only those classes that need the data to undertake their allotted responsibility can get hold of it.

The final part of generating a report is the publication of the formatted results. This is again done using the strategy pattern; the second argument of the Results class's generate(…) method is an implementation of the Publisher interface:

public interface Publisher {

  public <T> boolean publish(T report);
}

This also contains a single method: public <T> boolean publish(T report). This Generic method takes a report argument of type 'T', returning true if the report is published successfully. What about the implementations of this interface?
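Before moving on to the real implementations, a hypothetical sketch may help show how the strategy slots together. The Publisher interface below is reproduced from the post; ConsolePublisher and the demo wiring are invented purely for illustration:

```java
// Hypothetical sketch only: a trivial Publisher that writes the report to stdout.
public class PublisherSketch {

    interface Publisher {
        <T> boolean publish(T report);
    }

    static class ConsolePublisher implements Publisher {
        @Override
        public <T> boolean publish(T report) {
            // 'T' here matches whatever type the paired Formatter produced.
            System.out.println(report);
            return true;
        }
    }

    public static void main(String[] args) {
        Publisher publisher = new ConsolePublisher();
        // With the real classes this would be:
        //   results.generate(new TextFormatter(), publisher);
        boolean ok = publisher.publish("formatted report text");
        System.out.println(ok ? "published" : "failed");
    }
}
```

Because publish(…) is a generic method, the same ConsolePublisher works whether the Formatter produces a String, an HTML document object, or anything else.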
The first implementation uses Spring's email classes and will be the subject of my next blog, which will be published shortly…

The code for this blog is available on Github at: https://github.com/roghughe/captaindebug/tree/master/error-track

If you want to look at other blogs in this series take a look here:

Tracking Application Exceptions With Spring
Tracking Exceptions With Spring – Part 2 – Delegate Pattern

Reference: Error Tracking Reports – Part 3 – Strategy and Package Private from our JCG partner Roger Hughes at the Captain Debug's Blog....

Java 8's Functional Fomentation

Java 8 has revolutionized Java. It's easily the most significant release of Java in the last 10 years. There are a ton of new features including default methods, method and constructor references, and lambdas, just to name a few. One of the more interesting features is the new java.util.stream API, which, as the Javadoc states, enables "functional-style operations on streams of elements, such as map-reduce transformations on collections". Combine this new API with lambda expressions and you end up with a terse yet powerful syntax that significantly simplifies code through the application of projections. Take, for example, the ostensibly simple task of filtering a collection. In this case, a simple Collection of Message types, created like so:

Creating a Collection of Messages
List<Message> messages = new ArrayList<>();
messages.add(new Message("aglover", "foo", 56854));
messages.add(new Message("aglover", "foo", 85));
messages.add(new Message("aglover", "bar", 9999));
messages.add(new Message("rsmith", "foo", 4564));

With this collection, I'd like to filter out Messages with a delay (3rd constructor parameter) greater than 3,000 seconds. Previous to Java 8, you could hand jam this sort of logic like so:

Filtering old school style
for (Message message : messages) {
    if (message.delay > 3000) {
        System.out.println(message);
    }
}

In Java 8, however, this job becomes a lot more concise. Collections now support the stream method, which converts the underlying data structure into an iterable stream of objects and thereby permits a new breed of functional operations that leverage lambda expressions. Most of these operations can be chained as well. These chainable methods are dubbed intermediate; methods that cannot be chained are denoted as terminal. Briefly, lambda expressions are a lot like anonymous classes except with a lot less syntax. For example, if you look at the Javadocs for the parameter to a Stream's filter method, you'll see that it takes a Predicate type.
Yet, you don't have to implement this interface as you would, say, before Java 8 with an anonymous class. Consequently, the Predicate lambda expression for filtering all values of delay greater than 3000 would be:

Lambda expression
x -> x.delay > 3000

Where x is the parameter passed in for each value in the stream and everything to the right of the -> is the expression evaluated. Putting this all together in Java 8 yields:

Streaming lambdas!
messages.stream().filter(m -> m.delay > 3000).forEach(item -> System.out.println(item));

Interestingly, due to some other new features of Java 8, the forEach's lambda can be simplified further to:

Streaming lambdas are even shorter!
messages.stream().filter(m -> m.delay > 3000).forEach(System.out::println);

Because the parameter of the forEach lambda is simply consumed by println, Java 8 now permits you to drop the parameter entirely. Earlier, I mentioned that streams permit you to chain lambdas – in the case above, the filter method is an intermediate method, while forEach is a terminal method. Other intermediate methods that are immediately recognizable to functional programmers are map, flatMap, and reduce, to name a few. To elaborate, I'd like to find all Messages that are delayed more than 3,000 seconds and sum up the total delay time. Without functional magic, I could write:

Prosaic Java
long totalWaitTime = 0;
for (Message message : messages) {
    if (message.delay > 3000) {
        totalWaitTime += message.delay;
    }
}

Nevertheless, with Java 8 and a bit of functional foo, you can achieve a more elegant code construct like so:

Java 8 elegance
long totWaitTime = messages.stream().filter(m -> m.delay > 3000).mapToLong(m -> m.delay).sum();

Note how I am able to chain the filter and mapToLong methods, along with a terminal sum. Incidentally, the sum method requires a specific map-style method that yields a stream of primitive types, such as mapToLong, mapToInt, etc.
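For reference, the whole example can be run end to end. The Message class below is a minimal stand-in inferred from the constructor calls above (the field names are assumptions, not the author's actual class):

```java
import java.util.ArrayList;
import java.util.List;

public class StreamDemo {

    // Minimal Message stand-in matching the post's constructor: (user, type, delay).
    static class Message {
        final String user;
        final String type;
        final long delay;

        Message(String user, String type, long delay) {
            this.user = user;
            this.type = type;
            this.delay = delay;
        }
    }

    public static void main(String[] args) {
        List<Message> messages = new ArrayList<>();
        messages.add(new Message("aglover", "foo", 56854));
        messages.add(new Message("aglover", "foo", 85));
        messages.add(new Message("aglover", "bar", 9999));
        messages.add(new Message("rsmith", "foo", 4564));

        // Intermediate filter + mapToLong chained with the terminal sum.
        long totWaitTime = messages.stream()
                                   .filter(m -> m.delay > 3000)
                                   .mapToLong(m -> m.delay)
                                   .sum();

        System.out.println(totWaitTime); // prints 71417 (56854 + 9999 + 4564)
    }
}
```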
Functional-style programming as a core language feature is an astoundingly powerful construct. And while a lot of these techniques have been available in various 3rd party libraries like Guava and in JVM languages like Scala and Groovy, having these features core to the language will surely reach a wider audience of developers and have the biggest impact on the development landscape. Java 8, without a doubt, drastically changes the Java language for the better.

Reference: Java 8's Functional Fomentation from our JCG partner Andrew Glover at The Disco Blog....

Automated bug finding with git bisect and mvn test

Do you know the feeling when you discover a bug in a functionality that was working a couple of weeks (or versions) ago? Too bad we didn't have any automated tests, and what used to be fine is now broken. Let's take this simple repository as an example:

Write test first

We noticed that some particular functionality was OK in version 1.0 but is broken in 1.1. What is the first thing we do? Of course, write a test case to make sure this bug, once fixed, never comes back! Writing a (failing) test for every bug you find has many advantages:

It documents bugs and proves they were fixed.
Non-obvious workarounds and solutions will not be removed by accident ("why was he checking for null here?! It's impossible, let's simplify it").
You gradually improve overall code coverage, even in a legacy codebase.

So you have a failing test case. But even with an isolated test you can't reliably figure out what is wrong. If only we could find the commit that broke that test – assuming commits are small and focused. But if we commit our test right now and check out one of the older versions to run it – it's not yet there. After all, the test made its way into the codebase just now; if it had been there from the first revision, we wouldn't have the problem altogether.

Interactive rebasing

Maybe instead of committing the test after version 1.1 (where we know it's broken) we should make a patch or stash this test? This way we could go through all revisions between 1.0 and 1.1, unstashing or applying the patch with the test and running it. Hope you agree this is far from perfect. The first trick is to use interactive rebase in order to shift the commit with the failing test case back in time.
However, we don't want to rebase the master branch, so we make a temporary copy and rebase that instead:

$ git checkout -b tmp
Switched to a new branch 'tmp'
$ git rebase -i 1.0 tmp
Successfully rebased and updated refs/heads/tmp.

Interactive rebase will ask us to rearrange commits before proceeding; just move the commit with the test case from the last to the first position:

pick 10dbcc9 Feature 2
pick f4cf58a Feature 3
pick 8287434 Feature 4
pick e79d56f Feature 5
pick 50614b6 Feature 6
pick 21ae08f Feature 7
pick 1e5b5a5 Feature 8
pick f703abf Feature 9
pick 686d7a9 Feature 10
pick b5b5cf1 Feature 11
pick 8e58593 Feature 12
pick 3ab419a Feature 13
pick 0e769a0 Feature 14
pick 8bfdbea Feature 15
pick 0a95b7f Feature 16
pick 4622cbc Feature 17
pick 757c4eb Feature 18
pick 3d94d7e Feature 19
pick da69f6a Feature 20
pick 733bd17 Test for bug #123

Now our repository should look somewhat like this:

git bisect

Most importantly, our test case is now injected right after version 1.0 (known to be good). All we have to do is check out all revisions one after another and run this test. STOP! If you are smart (or lazy) you will start from the commit right in the middle, and if this one is broken you proceed with the first half the same way – or take the second half otherwise. It's sort of like binary search. However, keeping track of which commit was last seen good and which bad, and manually checking out the revision in the middle, is quite cumbersome. Luckily git can do this for us with the git bisect command. In principle, after starting bisecting we specify the last known good and first known bad commit. Git will check out a revision in between and ask us whether it's good or bad, continuing until we find exactly which commit broke the code. In our case we simply run mvn test and proceed depending on its outcome:

$ git bisect start
$ git bisect good 1.0
$ git bisect bad tmp
Bisecting: 9 revisions left to test after this (roughly 3 steps)
[13ed8405beb387ec86874d951cf630de2c4fd927] Feature 10
$ mvn test -Dcom.nurkiewicz.BugTest
...
[INFO] BUILD SUCCESS
$ git bisect good
Bisecting: 4 revisions left to test after this (roughly 2 steps)
[b9e610428b61ba1436219edbaa1c5c435a1907ae] Feature 15
$ mvn test -Dcom.nurkiewicz.BugTest
...
[INFO] BUILD SUCCESS
$ git bisect good
Bisecting: 2 revisions left to test after this (roughly 1 step)
[e8a5ddd4dea219d826a15f7a085e412c29333b10] Feature 17
$ mvn test -Dcom.nurkiewicz.BugTest
...
[INFO] BUILD FAILURE
$ git bisect bad
Bisecting: 0 revisions left to test after this (roughly 0 steps)
[6d974faffa042781a098914a80d962953a492cb5] Feature 16
$ mvn test -Dcom.nurkiewicz.BugTest
...
[INFO] BUILD SUCCESS
$ git bisect good
e8a5ddd4dea219d826a15f7a085e412c29333b10 is the first bad commit
commit e8a5ddd4dea219d826a15f7a085e412c29333b10
Author: Tomasz Nurkiewicz
Date: Wed Mar 19 19:43:40 2014 +0100

Feature 17

:100644 100644 469c856b4ede8 90d6b2233832 M SomeFile.java

See how we iteratively call git bisect good/bad, executing our test case in between? Also notice how quickly the number of commits to test shrinks. You might think this is neat and fast (logarithmic time!), but we can actually go much faster. git bisect has a hidden gem called run mode. Instead of relying on a manual answer from the user after each iteration, we can provide a script that tells whether a given revision is good or bad. By convention, if this script exits with code 0 it means success, while any other exit code signals an error. Luckily the mvn script follows this convention, so we can simply execute git bisect run mvn test -Dcom.nurkiewicz.BugTest, sit back and relax:

$ git bisect start
$ git bisect good 1.0
$ git bisect bad tmp
Bisecting: 9 revisions left to test after this (roughly 3 steps)
[13ed8405beb387ec86874d951cf630de2c4fd927] Feature 10
$ git bisect run mvn test -Dcom.nurkiewicz.BugTest
running mvn test -Dcom.nurkiewicz.BugTest
...
[INFO] BUILD SUCCESS
...
Bisecting: 4 revisions left to test after this (roughly 2 steps)
[b9e610428b61ba1436219edbaa1c5c435a1907ae] Feature 15
running mvn test -Dcom.nurkiewicz.BugTest
...
[INFO] BUILD SUCCESS
...
Bisecting: 2 revisions left to test after this (roughly 1 step)
[e8a5ddd4dea219d826a15f7a085e412c29333b10] Feature 17
running mvn test -Dcom.nurkiewicz.BugTest
...
[INFO] BUILD FAILURE
...
Bisecting: 0 revisions left to test after this (roughly 0 steps)
[6d974faffa042781a098914a80d962953a492cb5] Feature 16
running mvn test -Dcom.nurkiewicz.BugTest
...
[INFO] BUILD SUCCESS
...
e8a5ddd4dea219d826a15f7a085e412c29333b10 is the first bad commit
commit e8a5ddd4dea219d826a15f7a085e412c29333b10
Author: Tomasz Nurkiewicz
Date: Wed Mar 19 19:43:40 2014 +0100

Feature 17

:100644 100644 469c856b4ede8 90d6b2233832 M SomeFile.java

bisect run success

The program above is non-interactive and fully automated. After a few iterations, git points precisely to the first commit that broke the test. We could run all tests, but there is no point since we already know which single test fails. Of course you can use any other command instead of mvn. You can even write a simple script in any JVM language of your choice (and use System.exit()). git bisect, combined with interactive rebasing, is a wonderful tool for hunting down regressions and bugs. It also promotes automated testing and automation in general.

Reference: Automated bug finding with git bisect and mvn test from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog.
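As a sketch of the exit-code convention that git bisect run relies on, here is what a hand-rolled check could look like in plain Java. The checked condition is hypothetical; a real script would run your actual test suite:

```java
// Hypothetical stand-in for "mvn test" in git bisect run: exit code 0 marks
// the revision as good, 1 marks it as bad. (git treats exit codes 1..127,
// except 125, as "bad"; 125 means "skip this revision".)
public class BisectCheck {

    // Simplified stand-in for the functionality under test.
    static String computeLabel() {
        return "expected";
    }

    // True when this revision behaves correctly.
    static boolean revisionIsGood() {
        return "expected".equals(computeLabel());
    }

    public static void main(String[] args) {
        System.exit(revisionIsGood() ? 0 : 1);
    }
}
```

One would then run something along the lines of git bisect run java BisectCheck, assuming the class is compiled at each revision.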

Integration Testing for Spring Applications with JNDI Connection Pools

We all know we need to use connection pools wherever we connect to a database. All modern JDBC type 4 drivers support this. In this post we will take an overview of connection pooling in Spring applications and how to deal with the same context in non-JEE environments (like tests). Most examples of connecting to a database in Spring use DriverManagerDataSource. If you don't read the documentation properly, you are going to miss a very important point:

NOTE: This class is not an actual connection pool; it does not actually pool Connections. It just serves as simple replacement for a full-blown connection pool, implementing the same standard interface, but creating new Connections on every call. Useful for test or standalone environments outside of a J2EE container, either as a DataSource bean in a corresponding ApplicationContext or in conjunction with a simple JNDI environment. Pool-assuming Connection.close() calls will simply close the Connection, so any DataSource-aware persistence code should work.

Yes, by default Spring applications do not use pooled connections. There are two ways to implement connection pooling, depending on who manages the pool. If you are running in a JEE environment, it is preferred to use the container for it. In a non-JEE setup there are libraries which will help the application manage the connection pools. Let's discuss them in a bit more detail below.

1. Server (Container) managed connection pool (Using JNDI)

When the application connects to the database server, establishing the actual physical connection takes much more time than the execution of the scripts. Connection pooling is a technique that was pioneered by database vendors to allow multiple clients to share a cached set of connection objects that provide access to a database resource. The JavaWorld article gives a good overview of this. In a J2EE container, it is recommended to use a JNDI DataSource provided by the container.
Such a DataSource can be exposed as a DataSource bean in a Spring ApplicationContext via JndiObjectFactoryBean, for seamless switching to and from a local DataSource bean like this class.

The articles below helped me in setting up the data source in JBoss AS:

DebaJava Post
JBoss Installation Guide
JBoss Wiki

The next step is to use these connections, created by the server, from the application. As mentioned in the documentation, you can use JndiObjectFactoryBean for this. It is as simple as below:

<bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiName" value="java:/my-ds"/>
</bean>

If you want to write any tests using Spring's SpringJUnit4ClassRunner, it can't load the context because the JNDI resource will not be available. For tests, you can then either set up a mock JNDI environment through Spring's SimpleNamingContextBuilder, or switch the bean definition to a local DataSource (which is simpler and thus recommended). As I was looking for a good solution to this problem (I did not want a separate context for tests), this SO answer helped me. It sort of uses the various tips given in the Javadoc to good effect. The issue with the above solution is the repetition of code to create the JNDI connections. I have solved it using a customized runner, SpringWithJNDIRunner. This class adds the JNDI capabilities to the SpringJUnit4ClassRunner. It reads the data source from the "test-datasource.xml" file on the class path and binds it to the JNDI resource with the name "java:/my-ds".
After the execution of this code the JNDI resource is available for the Spring container to consume.

import javax.naming.NamingException;

import org.junit.runners.model.InitializationError;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.mock.jndi.SimpleNamingContextBuilder;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

/**
 * This class adds the JNDI capabilities to the SpringJUnit4ClassRunner.
 * @author mkadicha
 */
public class SpringWithJNDIRunner extends SpringJUnit4ClassRunner {

    public static boolean isJNDIactive;

    /**
     * JNDI is activated with this constructor.
     *
     * @param klass
     * @throws InitializationError
     * @throws NamingException
     * @throws IllegalStateException
     */
    public SpringWithJNDIRunner(Class<?> klass) throws InitializationError,
            IllegalStateException, NamingException {
        super(klass);
        synchronized (SpringWithJNDIRunner.class) {
            if (!isJNDIactive) {
                ApplicationContext applicationContext = new ClassPathXmlApplicationContext(
                        "test-datasource.xml");
                SimpleNamingContextBuilder builder = new SimpleNamingContextBuilder();
                builder.bind("java:/my-ds", applicationContext.getBean("dataSource"));
                builder.activate();
                isJNDIactive = true;
            }
        }
    }
}

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

    <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
        <property name="driverClassName" value="" />
        <property name="url" value="" />
        <property name="username" value="" />
        <property name="password" value="" />
    </bean>
</beans>

To use this runner you just need to use the annotation @RunWith(SpringWithJNDIRunner.class) in your test.
This class extends SpringJUnit4ClassRunner because there can only be one class in the @RunWith annotation. The JNDI context is created only once in a test cycle. This class provides a clean solution to the problem.

2. Application managed connection pool

If you need a "real" connection pool outside of a J2EE container, consider Apache's Jakarta Commons DBCP or C3P0. Commons DBCP's BasicDataSource and C3P0's ComboPooledDataSource are full connection pool beans, supporting the same basic properties as this class plus specific settings (such as minimal/maximal pool size etc). The user guides below can help you configure this:

Spring Docs
C3P0 Userguide
DBCP Userguide

The articles below speak about the general guidelines and best practices in configuring connection pools:

SO question on Spring JDBC Connection pools
Connection pool max size in MS SQL Server 2008
How to decide the max number of connections
Monitoring the number of active connections in SQL Server 2008

Reference: Integration Testing for Spring Applications with JNDI Connection Pools from our JCG partner Manu PK at The Object Oriented Life blog.
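For the application-managed option, a Commons DBCP pool can be dropped in as a plain bean definition along these lines (a sketch only; the driver, URL and pool sizes are placeholder values, not taken from the article):

```xml
<!-- Sketch: DBCP 1.x BasicDataSource as an application-managed pool;
     all property values here are placeholders. -->
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
      destroy-method="close">
    <property name="driverClassName" value="org.h2.Driver" />
    <property name="url" value="jdbc:h2:mem:test" />
    <property name="username" value="sa" />
    <property name="password" value="" />
    <property name="initialSize" value="5" />
    <property name="maxActive" value="20" />
</bean>
```

Unlike DriverManagerDataSource, calling close() on a connection obtained from this bean returns it to the pool instead of physically closing it.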

Spring Framework 4.0.3 and Spring Data Redis 1.2.1 with Java 8 support

Spring Framework 4.0.3

Spring Framework 4.0.3 is now available, as announced by the Spring community. It is the first release of the framework after Java 8's launch last week, and so it is built with OpenJDK 8 GA and includes the latest ASM 5.0.1. Spring Framework 4.0.3 brings significant enhancements for WebSockets. It comes along with a lot of real-life feedback incorporated back into the framework and its configuration options. On a forward-looking note, the Spring Framework team is moving on to 4.1 development now, with a first set of new features to show up in 4.1 snapshots soon. The 4.0.4 release is planned for May; however, it will only be a maintenance release, since the 4.0.x feature set is considered complete at this point. Our Spring tutorials can guide you through the Spring framework.

Spring Data Redis 1.2.1

Spring Data Redis 1.2.1 has also been released. It is a maintenance release that contains some bugfixes in RedisTemplate as well as in RedisCacheManager. This version is tested against Java 6, 7 and 8, for compatibility with Redis 2.6 and 2.8 as well as Spring Framework 4.0.3. You can take a quick tour of Spring Data Redis and Spring Data in general.

The Dark Side Of Lambda Expressions in Java 8

This post may not make me any new friends. Oh well, I was never really popular at school anyway. But let’s get to the point. Java 8’s biggest feature in terms of the language is undoubtedly Lambda expressions. It’s been a flagship feature for functional languages such as Scala and Clojure for a few years, and now Java has finally joined in. The second biggest feature (depending of course on who you ask) is Nashorn – the new JVM JavaScript engine that’s supposed to bring Java up to par with other JS engines such as V8 and its node.js container. But these new features have a dark side to them. I’ll explain. The Java platform is built out of two main components. The JRE, which JIT compiles and executes bytecode, and the JDK which contains dev tools and the javac source compiler. These two components are fairly (but not fully) decoupled, which is what enables folks to write their own JVM languages, with Scala rising to prominence in the last few years. And therein lies some of the problem. The JVM was built to be language agnostic in the sense that it can execute code written in any language, as long as it can be translated into bytecode. The bytecode specification itself is fully OO, and was designed to closely match the Java language. That means that bytecode compiled from Java source will pretty much resemble it structurally. But the farther away you get from Java – the more that distance grows. When you look at Scala which is a functional language, the distance between the source code and the executed bytecode is pretty big. Large amounts of synthetic classes, methods and variables are added by the compiler to get the JVM to execute the semantics and flow controls required by the language. When you look at fully dynamic languages such as JavaScript, that distance becomes huge. And now with Java 8, this is beginning to creep into Java as well. So why should I care? 
I wish this could be a theoretical discussion that, while interesting, has no practical implication on our everyday work. Unfortunately it does, and in a very big way. With the push to add new elements into Java, the distance between your code and the runtime grows, which means that what you're writing and what you're debugging will be two different things. To see how, let's (re)visit the example below.

Java 6 & 7

This is the traditional method by which we would iterate over a list of strings to map their lengths:

// simple check against empty strings
public static int check(String s) {
    if (s.equals("")) {
        throw new IllegalArgumentException();
    }
    return s.length();
}

// map names to lengths
List<Integer> lengths = new ArrayList<Integer>();
for (String name : Arrays.asList(args)) {
    lengths.add(check(name));
}

This will throw an exception if an empty string is passed. The stack trace will look like:

at LmbdaMain.check(LmbdaMain.java:19)
at LmbdaMain.main(LmbdaMain.java:34)

Here we see a 1:1 correlation between the stack trace we see and the code we wrote, which makes debugging this call stack pretty straightforward. This is what most Java devs are used to. Now let's look at Scala and Java 8.

Scala

Let's look at the same code in Scala. Here we've got two big changes. The first is the use of a Lambda expression to map the lengths, and the second is that the iteration is carried out by the framework (i.e. internal iteration).

val lengths = names.map(name => check(name))

Here we really start to notice the difference between how the code you wrote looks and how the JVM (and you) will see it at run time. If an exception is thrown, the call stack is an order of magnitude longer, and much harder to understand.
at Main$.check(Main.scala:6)
at Main$$anonfun$1.apply(Main.scala:12)
at Main$$anonfun$1.apply(Main.scala:12)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at Main$delayedInit$body.apply(Main.scala:12)
at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:71)
at scala.App$$anonfun$main$1.apply(App.scala:71)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)
at scala.App$class.main(App.scala:71)
at Main$.main(Main.scala:1)
at Main.main(Main.scala)

* Remember, this example is very simple. With real-world nested Lambdas and complex structures you'll be looking at much longer synthetic call stacks, from which you'll need to understand what happened. This has long been an issue with Scala, and one of the reasons we built the Scala Stackifier.

And now in Java 8

Up until now Java developers were pretty immune to this. This will change as Lambda expressions become an integral part of Java. Let's look at the corresponding Java 8 code, and the resulting call stack.
Stream lengths = names.stream().map(name -> check(name));

at LmbdaMain.check(LmbdaMain.java:19)
at LmbdaMain.lambda$0(LmbdaMain.java:37)
at LmbdaMain$$Lambda$1/821270929.apply(Unknown Source)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:512)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:502)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.LongPipeline.reduce(LongPipeline.java:438)
at java.util.stream.LongPipeline.sum(LongPipeline.java:396)
at java.util.stream.ReferencePipeline.count(ReferencePipeline.java:526)
at LmbdaMain.main(LmbdaMain.java:39)

This is becoming pretty similar to Scala. We're paying the price for shorter, more concise code with more complex debugging and longer synthetic call stacks. The reason is that while javac has been extended to support Lambda functions, the JVM still remains oblivious to them. This has been a design decision by the Java folks in order to keep the JVM operating at a lower level and without introducing new elements into its specification. And while you can debate the merits of this decision, it means that as Java developers the cost of figuring out these call stacks when we get a ticket now sadly lies on our shoulders, whether we want it to or not.

JavaScript in Java 8

Java 8 introduces a brand new JavaScript compiler. Now we can finally integrate Java + JS in an efficient and straightforward manner. However, nowhere is the dissonance between the code we write and the code we debug bigger than here.
Here’s the same function in Nashorn - ScriptEngineManager manager = new ScriptEngineManager(); ScriptEngine engine = manager.getEngineByName("nashorn");String js = "var map = Array.prototype.map \n"; js += "var a = map.call(names, function(name) { return Java.type(\"LmbdaMain\").check(name) }) \n"; js += "print(a)"; engine.eval(js); In this case the bytecode code is dynamically generated at runtime using a nested tree of Lambda expressions. There is very little correlation between our source code, and the resulting bytecode executed by the JVM. The call stack is now two orders of magnitude longer. In the poignant words of Mr.T – I pity the fools who will need to debug the call stack you’ll be getting here. Questions, comments? (assuming you can scroll all the way below this call stack). Let me know in the comments section. LmbdaMain [Java Application] LmbdaMain at localhost:51287 Thread [main] (Suspended (breakpoint at line 16 in LmbdaMain)) LmbdaMain.wrap(String) line: 16 1525037790.invokeStatic_L_I(Object, Object) line: not available 1150538133.invokeSpecial_LL_I(Object, Object, Object) line: not available 538592647.invoke_LL_I(MethodHandle, Object[]) line: not available 1076496284.invokeStatic_LL_L(Object, Object, Object) line: not available LambdaForm$NamedFunction.invokeWithArguments(Object...) line: 1147 LambdaForm.interpretName(LambdaForm$Name, Object[]) line: 625 LambdaForm.interpretWithArguments(Object...) line: 604 2150540.interpret_I(MethodHandle, Object, Object) line: not available 538592647.invoke_LL_I(MethodHandle, Object[]) line: not available 1076496284.invokeStatic_LL_L(Object, Object, Object) line: not available LambdaForm$NamedFunction.invokeWithArguments(Object...) line: 1147 LambdaForm.interpretName(LambdaForm$Name, Object[]) line: 625 LambdaForm.interpretWithArguments(Object...) 
line: 604 92150540.interpret_I(MethodHandle, Object, Object) line: not available 38592647.invoke_LL_I(MethodHandle, Object[]) line: not available 1076496284.invokeStatic_LL_L(Object, Object, Object) line: not available LambdaForm$NamedFunction.invokeWithArguments(Object...) line: 1147 LambdaForm.interpretName(LambdaForm$Name, Object[]) line: 625 LambdaForm.interpretWithArguments(Object...) line: 604 731260860.interpret_L(MethodHandle, Object, Object) line: not available LambdaForm$NamedFunction.invoke_LL_L(MethodHandle, Object[]) line: 1108 1076496284.invokeStatic_LL_L(Object, Object, Object) line: not available LambdaForm$NamedFunction.invokeWithArguments(Object...) line: 1147 LambdaForm.interpretName(LambdaForm$Name, Object[]) line: 625 LambdaForm.interpretWithArguments(Object...) line: 604 2619171.interpret_L(MethodHandle, Object, Object, Object) line: not available 1597655940.invokeSpecial_LLLL_L(Object, Object, Object, Object, Object) line: not available LambdaForm$NamedFunction.invoke_LLLL_L(MethodHandle, Object[]) line: 1118 1076496284.invokeStatic_LL_L(Object, Object, Object) line: not available LambdaForm$NamedFunction.invokeWithArguments(Object...) line: 1147 LambdaForm.interpretName(LambdaForm$Name, Object[]) line: 625 LambdaForm.interpretWithArguments(Object...) line: 604 2619171.interpret_L(MethodHandle, Object, Object, Object) line: not available 1353530305.linkToCallSite(Object, Object, Object, Object) line: not available Script$\^eval\_._L3(ScriptFunction, Object, Object) line: 3 1596000437.invokeStatic_LLL_L(Object, Object, Object, Object) line: not available 1597655940.invokeSpecial_LLLL_L(Object, Object, Object, Object, Object) line: not available LambdaForm$NamedFunction.invoke_LLLL_L(MethodHandle, Object[]) line: 1118 1076496284.invokeStatic_LL_L(Object, Object, Object) line: not available LambdaForm$NamedFunction.invokeWithArguments(Object...) 
line: 1147 LambdaForm.interpretName(LambdaForm$Name, Object[]) line: 625 LambdaForm.interpretWithArguments(Object...) line: 604 484673893.interpret_L(MethodHandle, Object, Object, Object, Object, Object) line: not available LambdaForm$NamedFunction.invoke_LLLLL_L(MethodHandle, Object[]) line: 1123 1076496284.invokeStatic_LL_L(Object, Object, Object) line: not available LambdaForm$NamedFunction.invokeWithArguments(Object...) line: 1147 LambdaForm.interpretName(LambdaForm$Name, Object[]) line: 625 LambdaForm.interpretWithArguments(Object...) line: 604 282496973.interpret_L(MethodHandle, Object, Object, Object, long, Object) line: not available 93508253.invokeSpecial_LLLLJL_L(Object, Object, Object, Object, Object, long, Object) line: not available 1850777594.invoke_LLLLJL_L(MethodHandle, Object[]) line: not available 1076496284.invokeStatic_LL_L(Object, Object, Object) line: not available LambdaForm$NamedFunction.invokeWithArguments(Object...) line: 1147 LambdaForm.interpretName(LambdaForm$Name, Object[]) line: 625 LambdaForm.interpretWithArguments(Object...) line: 604 282496973.interpret_L(MethodHandle, Object, Object, Object, long, Object) line: not available 293508253.invokeSpecial_LLLLJL_L(Object, Object, Object, Object, Object, long, Object) line: not available 1850777594.invoke_LLLLJL_L(MethodHandle, Object[]) line: not available 1076496284.invokeStatic_LL_L(Object, Object, Object) line: not available LambdaForm$NamedFunction.invokeWithArguments(Object...) line: 1147 LambdaForm.interpretName(LambdaForm$Name, Object[]) line: 625 LambdaForm.interpretWithArguments(Object...) 
line: 604 1840903588.interpret_L(MethodHandle, Object, Object, Object, Object, long, Object) line: not available 2063763486.reinvoke(Object, Object, Object, Object, Object, long, Object) line: not available 850777594.invoke_LLLLJL_L(MethodHandle, Object[]) line: not available 1076496284.invokeStatic_LL_L(Object, Object, Object) line: not available LambdaForm$NamedFunction.invokeWithArguments(Object...) line: 1147 LambdaForm.interpretName(LambdaForm$Name, Object[]) line: 625 LambdaForm.interpretWithArguments(Object...) line: 604 82496973.interpret_L(MethodHandle, Object, Object, Object, long, Object) line: not available 220309324.invokeExact_MT(Object, Object, Object, Object, long, Object, Object) line: not available NativeArray$10.forEach(Object, long) line: 1304 NativeArray$10(IteratorAction).apply() line: 124 NativeArray.map(Object, Object, Object) line: 1315 1596000437.invokeStatic_LLL_L(Object, Object, Object, Object) line: not available 504858437.invokeExact_MT(Object, Object, Object, Object, Object) line: not available FinalScriptFunctionData(ScriptFunctionData).invoke(ScriptFunction, Object, Object...) line: 522 ScriptFunctionImpl(ScriptFunction).invoke(Object, Object...) line: 207 ScriptRuntime.apply(ScriptFunction, Object, Object...) line: 378 NativeFunction.call(Object, Object...) line: 161 1076496284.invokeStatic_LL_L(Object, Object, Object) line: not available 1740189450.invokeSpecial_LLL_L(Object, Object, Object, Object) line: not available LambdaForm$NamedFunction.invoke_LLL_L(MethodHandle, Object[]) line: 1113 1076496284.invokeStatic_LL_L(Object, Object, Object) line: not available LambdaForm$NamedFunction.invokeWithArguments(Object...) line: 1147 LambdaForm.interpretName(LambdaForm$Name, Object[]) line: 625 LambdaForm.interpretWithArguments(Object...) 
line: 604 2619171.interpret_L(MethodHandle, Object, Object, Object) line: not available LambdaForm$NamedFunction.invoke_LLL_L(MethodHandle, Object[]) line: 1113 1076496284.invokeStatic_LL_L(Object, Object, Object) line: not available LambdaForm$NamedFunction.invokeWithArguments(Object...) line: 1147 LambdaForm.interpretName(LambdaForm$Name, Object[]) line: 625 LambdaForm.interpretWithArguments(Object...) line: 604 323326911.interpret_L(MethodHandle, Object, Object, Object, Object) line: not available LambdaForm$NamedFunction.invoke_LLLL_L(MethodHandle, Object[]) line: 1118 1076496284.invokeStatic_LL_L(Object, Object, Object) line: not available LambdaForm$NamedFunction.invokeWithArguments(Object...) line: 1147 LambdaForm.interpretName(LambdaForm$Name, Object[]) line: 625 LambdaForm.interpretWithArguments(Object...) line: 604 323326911.interpret_L(MethodHandle, Object, Object, Object, Object) line: not available 263793464.invokeSpecial_LLLLL_L(Object, Object, Object, Object, Object, Object) line: not available LambdaForm$NamedFunction.invoke_LLLLL_L(MethodHandle, Object[]) line: 1123 1076496284.invokeStatic_LL_L(Object, Object, Object) line: not available LambdaForm$NamedFunction.invokeWithArguments(Object...) line: 1147 LambdaForm.interpretName(LambdaForm$Name, Object[]) line: 625 LambdaForm.interpretWithArguments(Object...) line: 604 1484673893.interpret_L(MethodHandle, Object, Object, Object, Object, Object) line: not available 587003819.invokeSpecial_LLLLLL_L(Object, Object, Object, Object, Object, Object, Object) line: not available 811301908.invoke_LLLLLL_L(MethodHandle, Object[]) line: not available 1076496284.invokeStatic_LL_L(Object, Object, Object) line: not available LambdaForm$NamedFunction.invokeWithArguments(Object...) line: 1147 LambdaForm.interpretName(LambdaForm$Name, Object[]) line: 625 LambdaForm.interpretWithArguments(Object...) 
line: 604 484673893.interpret_L(MethodHandle, Object, Object, Object, Object, Object) line: not available LambdaForm$NamedFunction.invoke_LLLLL_L(MethodHandle, Object[]) line: 1123 1076496284.invokeStatic_LL_L(Object, Object, Object) line: not available LambdaForm$NamedFunction.invokeWithArguments(Object...) line: 1147 LambdaForm.interpretName(LambdaForm$Name, Object[]) line: 625 LambdaForm.interpretWithArguments(Object...) line: 604 323326911.interpret_L(MethodHandle, Object, Object, Object, Object) line: not available 2129144075.linkToCallSite(Object, Object, Object, Object, Object) line: not available Script$\^eval\_.runScript(ScriptFunction, Object) line: 3 1076496284.invokeStatic_LL_L(Object, Object, Object) line: not available 1709804316.invokeExact_MT(Object, Object, Object, Object) line: not available FinalScriptFunctionData(ScriptFunctionData).invoke(ScriptFunction, Object, Object...) line: 498 ScriptFunctionImpl(ScriptFunction).invoke(Object, Object...) line: 207 ScriptRuntime.apply(ScriptFunction, Object, Object...) line: 378 NashornScriptEngine.evalImpl(ScriptFunction, ScriptContext, ScriptObject) line: 544 NashornScriptEngine.evalImpl(ScriptFunction, ScriptContext) line: 526 NashornScriptEngine.evalImpl(Source, ScriptContext) line: 522 NashornScriptEngine.eval(String, ScriptContext) line: 193 NashornScriptEngine(AbstractScriptEngine).eval(String) line: 264 LmbdaMain.main(String[]) line: 44Reference: The Dark Side Of Lambda Expressions in Java 8 from our JCG partner Tal Weiss at the Takipi blog....

The Builder pattern and the Spring framework

Introduction I like to make use of the builder pattern whenever an object has both mandatory and optional properties. But building objects is usually the Spring framework responsibility, so let’s see how you can employ it using both Java and XML-based Spring configurations. A Builder example Let’s start from the following Builder class.   public final class Configuration<T extends DataSource> extends ConfigurationProperties<T, Metrics, PoolAdapter<T>> {public static final long DEFAULT_METRIC_LOG_REPORTER_PERIOD = 5;public static class Builder<T extends DataSource> { private final String uniqueName; private final T targetDataSource; private final PoolAdapterBuilder<T> poolAdapterBuilder; private final MetricsBuilder metricsBuilder; private boolean jmxEnabled = true; private long metricLogReporterPeriod = DEFAULT_METRIC_LOG_REPORTER_PERIOD;public Builder(String uniqueName, T targetDataSource, MetricsBuilder metricsBuilder, PoolAdapterBuilder<T> poolAdapterBuilder) { this.uniqueName = uniqueName; this.targetDataSource = targetDataSource; this.metricsBuilder = metricsBuilder; this.poolAdapterBuilder = poolAdapterBuilder; }public Builder setJmxEnabled(boolean enableJmx) { this.jmxEnabled = enableJmx; return this; }public Builder setMetricLogReporterPeriod(long metricLogReporterPeriod) { this.metricLogReporterPeriod = metricLogReporterPeriod; return this; }public Configuration<T> build() { Configuration<T> configuration = new Configuration<T>(uniqueName, targetDataSource); configuration.setJmxEnabled(jmxEnabled); configuration.setMetricLogReporterPeriod(metricLogReporterPeriod); configuration.metrics = metricsBuilder.build(configuration); configuration.poolAdapter = poolAdapterBuilder.build(configuration); return configuration; } }private final T targetDataSource; private Metrics metrics; private PoolAdapter poolAdapter;private Configuration(String uniqueName, T targetDataSource) { super(uniqueName); this.targetDataSource = targetDataSource; }public T 
getTargetDataSource() { return targetDataSource; }public Metrics getMetrics() { return metrics; }public PoolAdapter<T> getPoolAdapter() { return poolAdapter; } } Java-based configuration If you’re using Spring Java-based configuration then this is how you’d do it: @org.springframework.context.annotation.Configuration public class FlexyDataSourceConfiguration {@Autowired private PoolingDataSource poolingDataSource;@Bean public Configuration configuration() { return new Configuration.Builder( UUID.randomUUID().toString(), poolingDataSource, CodahaleMetrics.BUILDER, BitronixPoolAdapter.BUILDER ).build(); }@Bean(initMethod = "start", destroyMethod = "stop") public FlexyPoolDataSource dataSource() { Configuration configuration = configuration(); return new FlexyPoolDataSource(configuration, new IncrementPoolOnTimeoutConnectionAcquiringStrategy.Builder(5), new RetryConnectionAcquiringStrategy.Builder(2) ); } } XML-based configuration The XML-based configuration is more verbose and not as intuitive as the Java-based configuration: <bean id="configurationBuilder" class="com.vladmihalcea.flexypool.config.Configuration$Builder"> <constructor-arg value="uniqueId"/> <constructor-arg ref="poolingDataSource"/> <constructor-arg value="#{ T(com.vladmihalcea.flexypool.metric.codahale.CodahaleMetrics).BUILDER }"/> <constructor-arg value="#{ T(com.vladmihalcea.flexypool.adaptor.BitronixPoolAdapter).BUILDER }"/> </bean><bean id="configuration" factory-bean="configurationBuilder" factory-method="build"/><bean id="dataSource" class="com.vladmihalcea.flexypool.FlexyPoolDataSource" init-method="start" destroy-method="stop"> <constructor-arg ref="configuration"/> <constructor-arg> <array> <bean class="com.vladmihalcea.flexypool.strategy.IncrementPoolOnTimeoutConnectionAcquiringStrategy$Builder"> <constructor-arg value="5"/> </bean> <bean class="com.vladmihalcea.flexypool.strategy.RetryConnectionAcquiringStrategy$Builder"> <constructor-arg value="2"/> </bean> </array> </constructor-arg> 
</bean>

Conclusion

You can make use of the Builder pattern no matter which Spring configuration mode you've chosen. If you have doubts about its usefulness, here are three compelling reasons you should be aware of.

Reference: The Builder pattern and the Spring framework from our JCG partner Vlad Mihalcea at the Vlad Mihalcea's Blog blog.
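Stripped of the Spring wiring above, the core of the pattern is small. Here is a minimal, self-contained sketch with hypothetical names (not the article's Configuration class), showing the usual split between mandatory constructor arguments and optional fluent setters with defaults:

```java
// Sketch of a Builder with mandatory and optional properties.
// ServerConfig and all its fields are hypothetical illustration names.
final class ServerConfig {
    private final String host;     // mandatory
    private final int port;        // mandatory
    private final boolean useTls;  // optional, defaulted in the Builder
    private final int timeoutMs;   // optional, defaulted in the Builder

    private ServerConfig(Builder b) {
        this.host = b.host;
        this.port = b.port;
        this.useTls = b.useTls;
        this.timeoutMs = b.timeoutMs;
    }

    static final class Builder {
        private final String host;
        private final int port;
        private boolean useTls = true;
        private int timeoutMs = 5000;

        Builder(String host, int port) { // mandatory properties only
            this.host = host;
            this.port = port;
        }
        Builder useTls(boolean v) { this.useTls = v; return this; }
        Builder timeoutMs(int v) { this.timeoutMs = v; return this; }
        ServerConfig build() { return new ServerConfig(this); }
    }

    @Override
    public String toString() {
        return host + ":" + port + " tls=" + useTls + " timeout=" + timeoutMs;
    }
}
```

This mirrors the article's Configuration.Builder: required values go through the constructor, everything else gets a sensible default that the caller may override fluently before build().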
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
