


Scala and Java 8 type inference in higher order functions sample

One of the concepts mentioned in Functional Programming in Scala is type inference in higher order functions in Scala: how it fails in certain situations, and a workaround for it. Consider a sample higher order function, purely for demonstration:

    def filter[A](list: List[A], p: A => Boolean): List[A] = {
      list.filter(p)
    }

Ideally, passing in a list of, say, integers, you would expect the predicate function not to require an explicit type:

    val l = List(1, 5, 9, 20, 30)
    filter(l, i => i < 10)

Type inference does not work in this specific instance, however. The fix is to specify the type explicitly:

    filter(l, (i: Int) => i < 10)

A better fix is to use currying; then the type inference works:

    def filter[A](list: List[A])(p: A => Boolean): List[A] = {
      list.filter(p)
    }
    filter(l)(i => i < 10)
    // OR
    filter(l)(_ < 10)

I was curious whether Java 8 type inference has this issue and tried a similar sample with a Java 8 lambda expression. The following is an equivalent filter function:

    public <A> List<A> filter(List<A> list, Predicate<A> condition) {
        return list.stream().filter(condition).collect(toList());
    }

and type inference for the predicate works cleanly:

    List<Integer> ints = Arrays.asList(1, 5, 9, 20, 30);
    List<Integer> lessThan10 = filter(ints, i -> i < 10);

Another blog entry on a related topic by the author of the Functional Programming in Scala book is available here: http://pchiusano.blogspot.com/2011/05/making-most-of-scalas-extremely-limited.html

Reference: Scala and Java 8 type inference in higher order functions sample from our JCG partner Biju Kunjummen at the all and sundry blog.

Autoboxing

Autoboxing has been clear to all Java developers since Java 1.5. Well, I may be too optimistic: at least all developers are supposed to be OK with autoboxing. After all, there is a good tutorial about it on Oracle's site.

Autoboxing is the phenomenon whereby the Java compiler automatically generates code that creates an object from a primitive type when one is needed. For example, you can write:

    Integer a = 42;

and it will automatically generate JVM code that puts the value int 42 into an Integer object. This is so nice of the compiler that after a while we programmers tend to forget about the complexity behind it, and from time to time we run against the wall.

For example, we have double.class and Double.class. Both of them are objects (being classes, and each class itself is an object in permgen, or just on the heap in post-permgen versions of the JVM). Both of these objects are of type Class. What is more, since Java 1.5 both of them are of type Class<Double>.

If two objects have the same type, they also have to be assignment compatible, don't they? That seems to be an obvious statement: if you have object O a and object O b, then you can assign a = b. Looking at the code, however, we may realize we were being oblivious rather than obvious:

    public class TypeFun {
        public static void main(String[] args) {
            // public static final Class<Double> TYPE = (Class<Double>) Class.getPrimitiveClass("double");
            System.out.println("Double.TYPE == double.class: " + (Double.TYPE == double.class));
            System.out.println("Double.TYPE == Double.class: " + (Double.TYPE == Double.class));
            System.out.println("double.class.isAssignableFrom(Double.class): " + (double.class.isAssignableFrom(Double.class)));
            System.out.println("Double.class.isAssignableFrom(double.class): " + (Double.class.isAssignableFrom(double.class)));
        }
    }

resulting in:

    Double.TYPE == double.class: true
    Double.TYPE == Double.class: false
    double.class.isAssignableFrom(Double.class): false
    Double.class.isAssignableFrom(double.class): false

This means that the primitive pair of Double is double.class (not surprising), even though one cannot be assigned from the other. We can look at the source of at least one of them. The source of the class Double is in rt.jar and it is open source. There you can see:

    public static final Class<Double> TYPE = (Class<Double>) Class.getPrimitiveClass("double");

Why does it use that weird Class.getPrimitiveClass("double") instead of double.class? That is the primitive pair of the type Double, after all. The answer is not trivial, and you can dig deep into the details of Java and the JVM. Since double is not a class, there is nothing like double.class in reality. You can still use this literal in Java source code, though, and this is where the Java language, the compiler and the runtime have a strong bond. The compiler knows that the class Double defines a field named TYPE denoting its primitive type. Whenever the compiler sees double.class in the source code, it generates JVM code referencing Double.TYPE (give it a try and then use javap to decode the generated code!). For this very reason the developer of the runtime could not write:

    public static final Class<Double> TYPE = double.class;

into the source of the class Double. It would compile to the equivalent of:

    public static final Class<Double> TYPE = TYPE;

How does autoboxing work, then?
The source:

    Double b = (double) 1.0;

results in:

    0: dconst_1
    1: invokestatic  #2  // Method java/lang/Double.valueOf:(D)Ljava/lang/Double;
    4: astore_1

however, if we swap the two 'd' letters:

    double b = (Double) 1.0;

then we get:

    0: dconst_1
    1: invokestatic  #2  // Method java/lang/Double.valueOf:(D)Ljava/lang/Double;
    4: invokevirtual #3  // Method java/lang/Double.doubleValue:()D
    7: dstore_1

which indeed explains a lot of things. The types double.class and Double.class are not assignment compatible; autoboxing solves this. Java 1.4 was a long time ago and, luckily, we have forgotten it.

Your homework: reread what happens with autoboxing when you have overloaded methods whose arguments are of the "class" type and the corresponding primitive type.

Reference: Autoboxing from our JCG partner Peter Verhas at the Java Deep blog.
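As a postscript to the homework above, here is a minimal sketch (my own illustration, not from the article) of what makes that case interesting: during overload resolution the compiler prefers primitive widening over autoboxing, so a boxed overload may not be the one that gets called.

    public class OverloadDemo {

        static void print(long x)    { System.out.println("long overload"); }
        static void print(Integer x) { System.out.println("Integer overload"); }

        public static void main(String[] args) {
            int i = 42;
            print(i);                  // "long overload": widening int -> long beats boxing
            print(Integer.valueOf(i)); // "Integer overload": exact match, no conversion needed
        }
    }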

A beginner’s guide to database locking and the lost update phenomena

Introduction

A database is a highly concurrent system. There is always a chance of update conflicts, such as when two concurrent transactions try to update the same record. If there were only one database transaction at any time, all operations would be executed sequentially. The challenge comes when multiple transactions try to update the same database rows, as we still have to ensure consistent data state transitions. The SQL standard defines three consistency anomalies (phenomena):

- Dirty reads, prevented by the Read Committed, Repeatable Read and Serializable isolation levels
- Non-repeatable reads, prevented by the Repeatable Read and Serializable isolation levels
- Phantom reads, prevented by the Serializable isolation level

A lesser-known phenomenon is the lost update anomaly, and that is what we are going to discuss in this article.

Isolation levels

Most database systems use Read Committed as the default isolation level (MySQL uses Repeatable Read instead). Choosing the isolation level is about finding the right balance of consistency and scalability for our current application requirements.

All the following examples are going to be run on PostgreSQL 9.3. Other database systems may behave differently according to their specific ACID implementation. PostgreSQL uses both locks and MVCC (Multiversion Concurrency Control). In MVCC, read and write locks do not conflict, so reading doesn't block writing and writing doesn't block reading either.

Because most applications use the default isolation level, it's very important to understand the Read Committed characteristics:

- Queries only see data committed before the query began, as well as the current transaction's uncommitted changes
- Concurrent changes committed during a query's execution won't be visible to the current query
- UPDATE/DELETE statements use locks to prevent concurrent modifications

If two transactions try to update the same row, the second transaction must wait for the first one to either commit or roll back, and if the first transaction has committed, then the second transaction's DML WHERE clause must be reevaluated to see if the match is still relevant. In this scenario, Bob's UPDATE must wait for Alice's transaction to end (commit/rollback) in order to proceed further. Read Committed accommodates more concurrent transactions than stricter isolation levels, but less locking leads to better chances of losing updates.

Lost updates

If two transactions are updating different columns of the same row, then there is no conflict. The second update blocks until the first transaction is committed, and the final result reflects both changes. If the two transactions want to change the same columns, the second transaction will overwrite the first one, therefore losing the first transaction's update. So an update is lost when a user overrides the current database state without realizing that someone else changed it between the moment the data was loaded and the moment the update occurs. In this scenario, Bob is not aware that Alice has just changed the quantity from 7 to 6, so her UPDATE is overwritten by Bob's change.

The typical find-modify-flush ORM strategy

Hibernate (like any other ORM tool) automatically translates entity state transitions to SQL queries. You first load an entity, change it, and let the Hibernate flush mechanism synchronize all changes with the database.
    public Product incrementLikes(Long id) {
        Product product = entityManager.find(Product.class, id);
        product.incrementLikes();
        return product;
    }

    public Product setProductQuantity(Long id, Long quantity) {
        Product product = entityManager.find(Product.class, id);
        product.setQuantity(quantity);
        return product;
    }

As I've already pointed out, all UPDATE statements acquire write locks, even in Read Committed isolation. The persistence context write-behind policy aims to reduce the lock holding interval, but the longer the period between the read and the write operations, the greater the chance of running into a lost update situation. Hibernate includes all row columns in an UPDATE statement. This strategy can be changed to include only the dirty properties (through the @DynamicUpdate annotation), but the reference documentation warns us about its effectiveness: "Although these settings can increase performance in some cases, they can actually decrease performance in others."

So let's see how Alice and Bob concurrently update the same Product using an ORM framework:

    Alice: BEGIN;
    Alice: SELECT * FROM PRODUCT WHERE ID = 1;  -- (likes = 5, quantity = 7)
    Bob:   BEGIN;
    Bob:   SELECT * FROM PRODUCT WHERE ID = 1;  -- (likes = 5, quantity = 7)
    Alice: UPDATE PRODUCT SET (LIKES, QUANTITY) = (6, 7) WHERE ID = 1;
    Bob:   UPDATE PRODUCT SET (LIKES, QUANTITY) = (5, 10) WHERE ID = 1;  -- blocks on Alice's lock
    Alice: COMMIT;
    Alice: SELECT * FROM PRODUCT WHERE ID = 1;  -- (likes = 6, quantity = 7)
    Bob:   COMMIT;
    Bob:   SELECT * FROM PRODUCT WHERE ID = 1;  -- (likes = 5, quantity = 10)
    Alice: SELECT * FROM PRODUCT WHERE ID = 1;  -- (likes = 5, quantity = 10)

Again, Alice's update is lost without Bob ever knowing he overwrote her changes. We should always prevent data integrity anomalies, so let's see how we can overcome this phenomenon.

Repeatable Read

Using Repeatable Read (as well as Serializable, which offers an even stricter isolation level) can prevent lost updates across concurrent database transactions:

    Alice: BEGIN;
    Alice: SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    Alice: SELECT * FROM PRODUCT WHERE ID = 1;  -- (likes = 5, quantity = 7)
    Bob:   BEGIN;
    Bob:   SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    Bob:   SELECT * FROM PRODUCT WHERE ID = 1;  -- (likes = 5, quantity = 7)
    Alice: UPDATE PRODUCT SET (LIKES, QUANTITY) = (6, 7) WHERE ID = 1;
    Bob:   UPDATE PRODUCT SET (LIKES, QUANTITY) = (5, 10) WHERE ID = 1;  -- blocks on Alice's lock
    Alice: COMMIT;
    Alice: SELECT * FROM PRODUCT WHERE ID = 1;  -- (likes = 6, quantity = 7)
    Bob:   ERROR: could not serialize access due to concurrent update
    Bob:   SELECT * FROM PRODUCT WHERE ID = 1;
    Bob:   ERROR: current transaction is aborted, commands ignored until end of transaction block

This time, Bob couldn't overwrite Alice's changes and his transaction was aborted. In Repeatable Read, a query sees the data snapshot as of the start of the current transaction. Changes committed by other concurrent transactions are not visible to the current transaction. If two transactions attempt to modify the same record, the second transaction will wait for the first one to either commit or roll back. If the first transaction commits, then the second one must be aborted to prevent lost updates.

SELECT FOR UPDATE

Another solution is to use FOR UPDATE with the default Read Committed isolation level.
This locking clause acquires the same write locks as UPDATE and DELETE statements:

    Alice: BEGIN;
    Alice: SELECT * FROM PRODUCT WHERE ID = 1 FOR UPDATE;  -- (likes = 5, quantity = 7)
    Bob:   BEGIN;
    Bob:   SELECT * FROM PRODUCT WHERE ID = 1 FOR UPDATE;  -- blocks on Alice's lock
    Alice: UPDATE PRODUCT SET (LIKES, QUANTITY) = (6, 7) WHERE ID = 1;
    Alice: COMMIT;
    Alice: SELECT * FROM PRODUCT WHERE ID = 1;  -- (likes = 6, quantity = 7)
    Bob:   -- SELECT unblocks and returns (likes = 6, quantity = 7)
    Bob:   UPDATE PRODUCT SET (LIKES, QUANTITY) = (6, 10) WHERE ID = 1;  -- UPDATE 1
    Bob:   COMMIT;
    Bob:   SELECT * FROM PRODUCT WHERE ID = 1;  -- (likes = 6, quantity = 10)

Bob couldn't proceed with the SELECT statement because Alice had already acquired the write locks on the same row. Bob has to wait for Alice to end her transaction, and when Bob's SELECT is unblocked he automatically sees her changes, so Alice's UPDATE isn't lost.

Both transactions should use the FOR UPDATE locking clause. If the first transaction doesn't acquire the write locks, the lost update can still happen:

    Alice: BEGIN;
    Alice: SELECT * FROM PRODUCT WHERE ID = 1;  -- no FOR UPDATE: (likes = 5, quantity = 7)
    Bob:   BEGIN;
    Bob:   SELECT * FROM PRODUCT WHERE ID = 1 FOR UPDATE;  -- (likes = 5, quantity = 7)
    Alice: UPDATE PRODUCT SET (LIKES, QUANTITY) = (6, 7) WHERE ID = 1;  -- blocks on Bob's lock
    Bob:   UPDATE PRODUCT SET (LIKES, QUANTITY) = (6, 10) WHERE ID = 1;
    Bob:   SELECT * FROM PRODUCT WHERE ID = 1;  -- (likes = 6, quantity = 10)
    Bob:   COMMIT;
    Alice: -- UPDATE unblocks and overwrites Bob's changes
    Alice: SELECT * FROM PRODUCT WHERE ID = 1;  -- (likes = 6, quantity = 7)
    Alice: COMMIT;
    Bob:   SELECT * FROM PRODUCT WHERE ID = 1;  -- (likes = 6, quantity = 7)

Alice's UPDATE is blocked until Bob releases the write locks at the end of his transaction. But Alice's persistence context is using a stale entity snapshot, so she overwrites Bob's changes, leading to another lost update situation.

Optimistic Locking

My favorite approach is to replace pessimistic locking with an optimistic locking mechanism. Like MVCC, optimistic locking defines a versioning concurrency control model that works without acquiring additional database write locks. The product table will also include a version column that prevents stale data snapshots from overwriting the latest data:

    Alice: BEGIN;
    Alice: SELECT * FROM PRODUCT WHERE ID = 1;  -- (likes = 5, quantity = 7, version = 2)
    Bob:   BEGIN;
    Bob:   SELECT * FROM PRODUCT WHERE ID = 1;  -- (likes = 5, quantity = 7, version = 2)
    Alice: UPDATE PRODUCT SET (LIKES, QUANTITY, VERSION) = (6, 7, 3) WHERE (ID, VERSION) = (1, 2);   -- UPDATE 1
    Bob:   UPDATE PRODUCT SET (LIKES, QUANTITY, VERSION) = (5, 10, 3) WHERE (ID, VERSION) = (1, 2);  -- blocks on Alice's lock
    Alice: COMMIT;
    Alice: SELECT * FROM PRODUCT WHERE ID = 1;  -- (likes = 6, quantity = 7, version = 3)
    Bob:   -- UPDATE unblocks and reports UPDATE 0: no row matches version 2 anymore
    Bob:   COMMIT;
    Bob:   SELECT * FROM PRODUCT WHERE ID = 1;  -- (likes = 6, quantity = 7, version = 3)

Every UPDATE takes the load-time version into its WHERE clause, assuming no one has changed the row since it was read from the database. If some other transaction manages to commit a newer entity version, the UPDATE WHERE clause no longer matches any row, and so the lost update is prevented.
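In JPA/Hibernate terms, the version column maps to a @Version attribute. As a minimal sketch (the mapping below is illustrative, assuming a Product entity like the one used in this article):

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Version;

    @Entity
    public class Product {

        @Id
        private Long id;

        private int likes;

        private int quantity;

        // Incremented on every flush and appended to the UPDATE's WHERE
        // clause; a stale value makes the UPDATE match zero rows.
        @Version
        private int version;

        // getters and setters omitted for brevity
    }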
Hibernate uses the PreparedStatement#executeUpdate result to check the number of updated rows. If no row was matched, it throws a StaleObjectStateException (when using the Hibernate API) or an OptimisticLockException (when using JPA). Like with Repeatable Read, the current transaction and the persistence context are aborted, with respect to atomicity guarantees.

Conclusion

Lost updates can happen unless you plan for preventing such situations. Other than optimistic locking, all pessimistic locking approaches are effective only in the scope of the same database transaction, when both the SELECT and the UPDATE statements are executed in the same physical transaction. In my next post I will explain why optimistic locking is the only viable solution when using application-level transactions, as is the case for most web applications.

Reference: A beginner's guide to database locking and the lost update phenomena from our JCG partner Vlad Mihalcea at the Vlad Mihalcea's Blog blog.

Can Static Analysis replace Code Reviews?

In my last post, I explained how to do code reviews properly. I recommended taking advantage of static analysis tools like Findbugs, PMD, Klocwork or Fortify to check for common mistakes and bad code before passing the code on to a reviewer, to make the reviewer's job easier and reviews more effective.

Some readers asked whether static analysis tools can be used instead of manual code reviews. Manual code reviews add delays and costs to development, while static analysis tools keep getting better, faster, and more accurate. So can you automate code reviews, in the same way that many teams automate functional testing? Do you need to do manual reviews too, or can you rely on technology to do the job for you? Let's start by understanding what static analysis bug checking tools are good at, and what they aren't.

What static analysis tools can do – and what they can't do

In this article, Paul Anderson at GrammaTech does a good job of explaining how static analysis bug finding works, the trade-offs between recall (finding all of the real problems), precision (minimizing false positives) and speed, and the practical limitations of using static analysis tools for finding bugs.

Static analysis tools are very good at catching certain kinds of mistakes, including memory corruption and buffer overflows (for C/C++), memory leaks, illegal and unsafe operations, null pointers, infinite loops, incomplete code, redundant code and dead code. A static analysis tool knows if you are calling a library incorrectly (as long as it recognizes the function), if you are using the language incorrectly (things that a compiler could find but doesn't) or inconsistently (indicating that the programmer may have misunderstood something). And static analysis tools can identify code with maintainability problems: code that doesn't follow good practice or standards, is complex or badly structured, and is a good candidate for refactoring.

But these tools can't tell you when you have got the requirements wrong, or when you have forgotten something or missed something important – because the tool doesn't know what the code is supposed to do. A tool can find common off-by-one mistakes and some endless loops, but it won't catch application logic mistakes like sorting in descending order instead of ascending order, or dividing when you meant to multiply, referring to buyer when it should have been seller, or lessee instead of lessor. These are mistakes that aren't going to be caught in unit testing either, since the same person who wrote the code wrote the tests, and will make the same mistakes.

Tools can't find missing functions or unimplemented features or checks that should have been made but weren't. They can't find mistakes or holes in workflows, or oversights in auditing or logging, or debugging code left in by accident.

Static analysis tools may be able to find some backdoors or trapdoors – simple ones at least. And they might find some concurrency problems – deadlocks, races and mistakes or inconsistencies in locking. But they will miss a lot of them too.

Static analysis tools like Findbugs can do security checks for you: unsafe calls and operations, use of weak encryption algorithms and weak random numbers, use of hard-coded passwords, and at least some cases of XSS, CSRF, and simple SQL injection.
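To make the can/can't distinction concrete, here is a hypothetical pair of methods (my own illustration, not from the original post). The first contains a mechanical bug of the kind Findbugs reliably flags; the second is valid code with a requirements-level bug that no tool can see:

    import java.util.Collections;
    import java.util.List;

    public class AnalysisExamples {

        // A Findbugs-style finding (pattern ES_COMPARING_STRINGS_WITH_EQ):
        // reference equality on strings instead of equals()
        static boolean isAdmin(String role) {
            return role == "admin"; // should be "admin".equals(role)
        }

        // No tool finding: correct Java, but the spec asked for the top
        // scores, i.e. a descending sort
        static List<Integer> topScores(List<Integer> scores) {
            Collections.sort(scores); // ascending: a logic bug only a human reviewer catches
            return scores.subList(0, Math.min(3, scores.size()));
        }
    }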
More advanced commercial tools that do inter-procedural and data flow analysis (looking at the sources, sinks and paths between them) can find other bugs, including injection problems that are difficult and time-consuming to trace by hand. But a tool can't tell you that you forgot to encrypt an important piece of data, or that you shouldn't be storing some data in the first place. It can't find logic bugs in critical security features, tell you if sensitive information could be leaked, spot when you got an access control check wrong, or notice that the code could fail open instead of closed.

And using one static analysis tool on its own to check code may not be enough. Evaluations of static analysis tools, such as NIST's SAMATE project (a series of comparative studies, where many tools are run against the same code), show almost no overlap between the problems found by different tools (outside of a few common areas like buffer errors), even when the tools are supposed to be doing the same kinds of checks. This means that to get the most out of static analysis, you will need to run two or more tools against the same code (which is what SonarQube, for example, does for you: it integrates its own static analysis results with those of other tools, including popular free tools). If you're paying for commercial tools, this could get very expensive fast.

Tools vs. Manual Reviews

Tools can find cases of bad coding or bad typing – but not bad thinking. These are problems that you will have to find through manual reviews. A 2005 study, Comparing Bug Finding Tools with Reviews and Tests, used open source bug finding tools (including Findbugs and PMD) on 5 different code bases, comparing what the tools found to what was found through code reviews and functional testing. Static analysis tools found only a small subset of the bugs found in manual reviews, although the tools were more consistent – manual reviewers missed a few cases that the tools picked up. Just like manual reviews, the tools found more problems with maintainability than real defects (this is partly because one of the tools evaluated – PMD – focuses on code structure and best practices). Testing (black box – including equivalence and boundary testing – and white box functional testing and unit testing) found fewer bugs than reviews, but different bugs. There was no overlap at all between the bugs found in testing and the bugs found by the static analysis tools.

Finding problems that could happen – or do happen

Static analysis tools are good at finding problems that "could happen", but not necessarily problems that "do happen". Researchers at Colorado State University ran static analysis tools against several releases of different open source projects, and compared what the tools found against the changes and fixes that developers actually made over a period of a few years – to see whether the tools could correctly predict the fixes that needed to be made and what code needed to be refactored. The tools reported hundreds of problems in the code, but found very few of the serious problems that developers ended up fixing. One simple tool (Jlint) did not find anything that was actually fixed or cleaned up by developers. Of 112 serious bugs that were fixed in one project, only 3 were also found by static analysis tools. In another project, only 4 of 136 bugs that were actually reported and fixed were found by the tools.
Many of the bugs that developers did fix were problems like null pointers and incorrect string operations – problems that static analysis tools should be good at catching, but didn't. The tools did a much better job of predicting what code should be refactored: developers ended up refactoring and cleaning up more than 70% of the code structure and code clarity issues that the tools reported (PMD, a free code checking tool, was especially good for this).

Ericsson evaluated different commercial static analysis tools against large, well-tested, mature applications. On one C application, a commercial tool found 40 defects – nothing that could cause a crash, but still problems that needed to be fixed. On another large C code base, 1% of the tool's findings turned out to be bugs serious enough to fix. On the third project, they ran 2 commercial tools against an old version of a C system with known memory leaks. One tool found 32 bugs, another 16; only 3 of the bugs were found by both tools. Surprisingly, neither tool found the already known memory leaks – all of the bugs found were new ones. And on a Java system with known bugs they tried 3 different tools. None of the tools found any of the known bugs, but one of the tools found 19 new bugs that the team agreed to fix.

Ericsson's experience is that static analysis tools find bugs that are extremely difficult to find otherwise. But it's rare to find stop-the-world bugs – especially in production code – using static analysis.

This is backed up by another study on the use of static analysis (Findbugs) at Google and on the Sun JDK 1.6.0. Using the tool, engineers found a lot of bugs that were real, but not worth the cost of fixing: deliberate errors, masked errors, infeasible situations, code that was already doomed, errors in test code or logging code, errors in old code that was "going away soon", or other relatively unimportant cases. Only around 10% of medium and high priority correctness errors found by the tool were real bugs that absolutely needed to be fixed.

The Case for Security

So far we've mostly looked at static analysis checking for run-time correctness and general code quality, not security. Although security builds on code quality – vulnerabilities are just bugs that hackers look for and exploit – checking code for correctness and clarity isn't enough for a secure app. A lot of investment in static analysis technology over the past 5-10 years has gone into finding security problems in code, such as the common problems listed in OWASP's Top 10 or the SANS/CWE Top 25 Most Dangerous Software Errors.

A couple of studies have looked at the effectiveness of static analysis tools compared to manual reviews in finding security vulnerabilities. The first study was on a large application that had 15 known security vulnerabilities found through a structured manual assessment done by security experts. Two different commercial static analysis tools were run across the code. Together, the tools found less than half of the known security bugs – only the simplest ones, the bugs that didn't require a deep understanding of the code or the design. And of course the tools reported thousands of other issues that needed to be reviewed and qualified or thrown away as false positives. These other issues included some run-time correctness problems, null pointers and resource leaks, and code quality findings (dead code, unused variables), but no other real security vulnerabilities beyond those already found by the manual security review.
But this assumes that you have a security expert around to review the code. To find security vulnerabilities, a reviewer needs to understand the code (the language and the frameworks), and they also need to understand what kind of security problems to look for.

Another study shows how difficult this is. Thirty developers were hired to do independent security code reviews of a small web app (some were security experts, others web developers). They were not allowed to use static analysis tools. The app had 6 known vulnerabilities. 20% of the reviewers did not find any of the known bugs. None of the reviewers found all of the known bugs, although several found a new XSS vulnerability that the researchers hadn't known about. On average, 10 reviewers would have had only an 80% chance of finding all of the security bugs.

And, not Or

Static analysis tools are especially useful for developers working in unsafe languages like C/C++ (where there is a wide choice of tools to find common mistakes) or dynamically typed scripting languages like Javascript or PHP (where unfortunately the tools aren't that good), and for teams starting off learning a new language and framework. Using static analysis is (or should be) a requirement in highly regulated, safety-critical environments like medical devices and avionics. And until more developers get more training and understand more about how to write secure software, we will all need to lean on static analysis (and dynamic analysis) security testing tools to catch vulnerabilities.

But static analysis isn't a substitute for code reviews. Yes, code reviews take extra time and add costs to development, even if you are smart about how you do them – and being smart includes running static analysis checks before you do reviews. If you want to move fast and write good, high-quality and secure code, you still have to do reviews. You can't rely on static analysis alone.

Reference: Can Static Analysis replace Code Reviews? from our JCG partner Jim Bird at the Building Real Software blog.

Intel RealSense: What you need to know about the human-computer interaction technology!

You may have known it as 'Perceptual Computing', but you now have to call it RealSense! The name is not the only thing that has changed in Intel's human-computer interaction technology. 2014 marks a new turning point for Intel with the introduction of its RealSense 3D camera, which allows developers to interpret depth and maximize the points of recognition to finally obtain quality, accurate human-computer interactions. For example, and to begin this article with something fun, have a glimpse at this cool game using Intel® RealSense™ technology.

Feeling excited? Intel has just announced a $1 Million RealSense App Challenge, where participants will first submit their ideas and, if they are selected, will be able to develop their apps with 3D cameras and an SDK provided by Intel. But we'll get to that later...

Since the 3D camera is one of the key elements of RealSense, let's focus on its features. Equipped with a 1080p RGB sensor, plus an infrared laser and infrared sensor, the camera, Intel tells us on their website, is optimized for close-range interactivity, and its hand and finger tracking capability works from 0.2 to 1.2 meters. These 3D cameras will now be integrated into most Ultrabooks, notebooks, 2-in-1s, and all-in-ones: proof that human-computer interaction is now part of the game. The camera also comes with a dedicated SDK, and the good news is that, according to Intel, you don't need to be an expert to implement gestures in your application. So let's see what the kit offers and the capabilities it supports. We can segment it into six categories covering most of the ways a human can interact – hands, face, speech and the environment:

- Hand and Finger Tracking: The RealSense SDK can track up to 22 joints in each hand, enabling accurate hand and finger tracking.
- Face Analysis: The new SDK can identify up to 78 points on the face (the previous SDK supported only 7). Not only can it track your face and head, but it can also infer emotions and sentiments. Check this video to see for yourself.
- Gesture: RealSense offers complete gesture recognition with 8 static poses like thumbs up and 6 dynamic gestures like a wave. You'll have to think in 3D from now on. Check it out in this demo.
- Speech Recognition: The kit provides powerful speech recognition, enabling the user to command, control and dictate. Enough to imagine some really cool apps using hands-free interfaces!
- 3D Augmented Reality: The SDK offers some interesting capabilities, such as precise augmented reality or the possibility for the user to remove the background.
- 3D Scanning and Printing: It will bring object scanning, editing, and printing to software developers. Let's see if it inspires them for the Challenge!

You just have to look at some of the experiments already available online to realize that this 3D camera also fosters a massive opportunity when associated with other devices. For example, Thomas Endres and Martin Förtsch from the ParrotsOnJava team recently published on YouTube an example of Intel RealSense controlling a Thymio II robot. In this video (see below), we can observe the great responsiveness of the robot and how natural it is for the user to control it.

According to Intel, it's the combination of this 3D camera and the dedicated development kit that makes RealSense so interesting for developers. If you want to see the wonders of this SDK for yourself, you'll have to wait a couple of weeks, as it won't be available until after October 1st.
Until then, free your imagination and submit your own idea of perceptual computing on Intel's challenge platform. "Let's dream first, we'll see about the technical aspects later" is the motto. Plus, if your idea is selected, you'll receive the development kit for free on the official release date!

Feeling uninspired? The recent news gives us some clues. For example, the celebrity nude pictures shared all over the Internet these last few weeks were quite a scandal. The face identification system included in the SDK might be a solution to secure someone's account. Developers, these careless celebrities need you! For those who can't wait any longer and want to have a glimpse at the SDK, check out this webinar available on YouTube on how to create natural user interfaces with the RealSense beta SDK 2014.

It's now time to bring human-computer interaction to a whole new level. In the past, developers were often a bit frustrated by the limited capabilities of gesture recognition technologies, which were unfortunately below our expectations. What Intel proposes with RealSense is just a working base. Let's assume that is why they provided a feature-rich SDK for developers: they count on developers' inventiveness to design apps that will revolutionize human-computer interaction.

The Challenge they have launched seems to give a lot of space to creativity. Indeed, it starts with an Ideation phase in which participants first have to submit their ideas (they can do it here). There are so many possibilities with such a technology that Intel has decided to divide the Challenge into different innovation categories: Gaming + Play, Learning + Edutainment, Interact Naturally, Collaboration/Creation and Open Innovation. They symbolize all the things that we can do with a computer. Top-scoring ideas will be invited into the track development phase (a total of 1300 ideas will be chosen during the 1st round). As said earlier in this article, every contestant will be loaned the Intel 3D camera and its SDK as part of the development process. The Pioneer track deadline for the 1st round is October 1st, 2014 (and September 19th for the Ambassador track). Check out the official Challenge page to get more information on the two tracks proposed. October 1st is coming around really quickly, so hurry up and don't miss out on this awesome opportunity!

Schedule Java EE 7 Batch Jobs

Java EE 7 added the capability to perform batch jobs in a standard way using JSR 352.

    <job id="myJob" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">
        <step id="myStep">
            <chunk item-count="3">
                <reader ref="myItemReader"/>
                <processor ref="myItemProcessor"/>
                <writer ref="myItemWriter"/>
            </chunk>
        </step>
    </job>

This code fragment is the Job Specification Language defined as XML, a.k.a. Job XML. It defines a canonical job with a single step, using item-oriented (chunk-oriented) processing. A chunk can have a reader, an optional processor, and a writer. Each of these elements is identified using the corresponding elements in the Job XML, and they are CDI beans packaged in the archive. This job can be easily started using:

    BatchRuntime.getJobOperator().start("myJob", new Properties());

A typical question asked in different forums and conferences is how to schedule these jobs in a Java EE runtime. The Batch 1.0 API itself does not offer anything to schedule these jobs. However, the Java EE platform offers three different ways to schedule them:

1. Use the @javax.ejb.Schedule annotation in an EJB. Here is sample code that will trigger the execution of the batch job at 11:59:59 PM every day:

    @Singleton
    public class MyEJB {

        @Schedule(hour = "23", minute = "59", second = "59")
        public void myJob() {
            BatchRuntime.getJobOperator().start("myJob", new Properties());
        }
    }

Of course, you can change the parameters of @Schedule to start the batch job at the desired time.

2. Use a ManagedScheduledExecutorService with a javax.enterprise.concurrent.Trigger, as shown:

    @Stateless
    public class MyStatelessEJB {

        @Resource
        ManagedScheduledExecutorService executor;

        public void runJob() {
            executor.schedule(new MyJob(), new Trigger() {

                public Date getNextRunTime(LastExecution lastExecutionInfo, Date taskScheduledTime) {
                    Calendar cal = Calendar.getInstance();
                    cal.setTime(taskScheduledTime);
                    cal.add(Calendar.DATE, 1);
                    return cal.getTime();
                }

                public boolean skipRun(LastExecution lastExecutionInfo, Date scheduledRunTime) {
                    return null == lastExecutionInfo;
                }
            });
        }

        public void cancelJob() {
            executor.shutdown();
        }
    }

Call runJob to initiate job execution and cancelJob to terminate it. In this case, a new job is started a day later than the previous task, and it is not started until the previous one has terminated. You will need more error checks for proper execution. MyJob is very trivial:

    public class MyJob implements Runnable {

        public void run() {
            BatchRuntime.getJobOperator().start("myJob", new Properties());
        }
    }

Of course, you can schedule it automatically by calling this code in a @PostConstruct method.

3. A slight variation of the second technique allows running the job after a fixed delay, as shown:

    public void runJob2() {
        executor.scheduleWithFixedDelay(new MyJob(), 2, 3, TimeUnit.HOURS);
    }

The first task is executed 2 hours after the runJob2 method is called, and then with a 3-hour delay between subsequent executions.

This support is available to you within the Java EE platform. In addition, you can also invoke BatchRuntime.getJobOperator().start("myJob", new Properties()); from any of your Quartz-scheduled methods as well.

You can try all of this on WildFly. And there are a ton of Java EE 7 samples at github.com/javaee-samples/javaee7-samples. This particular sample is available at github.com/javaee-samples/javaee7-samples/tree/master/batch/scheduling.

How are you scheduling your batch jobs?

Reference: Schedule Java EE 7 Batch Jobs from our JCG partner Arun Gupta at the Miles to go 2.0 blog.
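Returning to the @PostConstruct variant mentioned in option 2 above, here is a minimal sketch of how the scheduling could be kicked off automatically at deployment (the class name and timings are illustrative assumptions, not from the original post):

    import java.util.concurrent.TimeUnit;
    import javax.annotation.PostConstruct;
    import javax.annotation.Resource;
    import javax.ejb.Singleton;
    import javax.ejb.Startup;
    import javax.enterprise.concurrent.ManagedScheduledExecutorService;

    @Singleton
    @Startup
    public class JobScheduler {

        @Resource
        ManagedScheduledExecutorService executor;

        @PostConstruct
        public void init() {
            // first run two hours after deployment, then a three-hour delay
            // between the end of one execution and the start of the next
            executor.scheduleWithFixedDelay(new MyJob(), 2, 3, TimeUnit.HOURS);
        }
    }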

Legendary Product Development

"Brand will not save you, marketing will not save you, and account control will not save you. It's the products." – Marc Andreessen

I believe there is a recipe for winning in product development. It requires a delicate balance between pragmatism in planning, efficient execution, and the ability to see around corners (into the future). I've written this post to share some ideas on how to become legendary in product development.

Idea #1: Usage First

Products must be built with a 'usage first' mindset. Clients need to be attracted to products by their experience with the product. Clients should be begging for more, because they are so delighted with the experience and outcomes. If you build great products, clients will tell each other. The best way to make a product available for usage is through demos, freemium versions, downloads, and easy access via the cloud.

"The people with really great products never say they're great. They don't have to. Show, don't tell." – Unknown

"In the old world, you devoted 30% of your time to building a great service and 70% of your time to shouting about it. In the new world, that inverts. If I build a great product or service, my customers will tell each other." – Jeff Bezos

Idea #2: Simplicity and Design

Although this is related to Idea #1, and is in fact a prerequisite to it, it has some subtle differences. This is about tapping into how a client feels when they use your product. Do they find it shockingly simple, yet highly functional, leading to an 'ah-hah' moment? They should. I read once that people don't buy products; they buy better versions of themselves. When you're trying to win customers, are you listing the attributes of a product, or can you vividly describe how it will improve their lives? Clients will be attracted to the latter.

"Taking a design-centric approach to product development is becoming the default. Designers, at last, have their seat at the table." – Unknown

Idea #3: Speed, Accountability, and Relentless Execution

Speed drives exponential improvements and outcomes in any organization. If you complete a task in 1 hour instead of 1 day, your mean time to a positive outcome shrinks by more than an order of magnitude. In product development, accelerating cycle times is an under-estimated force in determining winners and losers. Pixar has a company principle that states, "We start from the presumption that our people are talented and want to contribute. We accept that, without meaning to, our company is stifling that talent in myriad unseen ways. Finally, we try to identify those impediments and fix them." That principle really resonates with me. A product organization has to break down its own barriers to achieve its potential.

"Life is like a ten speed bicycle. Most of us have gears we never use." – Charles Schulz

Idea #4: Open Source

Open source is one of the most important phenomena in enterprise software. Legendary product teams will shift their approach to an overt embrace and incorporation of open source into their product development processes. The best business model in software today is utilizing open source and effectively surrounding it with proprietary solutions and features. This drives the cycle time improvements alluded to above.

"There are no silver bullets for this, only lead bullets." – Ben Horowitz

Idea #5: Product Management

Product management, and its interplay with development, is a critical function in a product organization.
Development must work with product management to develop forward-looking, client-based insights, and use those insights to push clients faster than they may normally want to move. If you want to learn about product management and how product development should play a role, I recommend two things: 1) read every Amazon.com annual report, and 2) read "Good Product Manager, Bad Product Manager" (you can find it on the web). Great product organizations obsess over feedback and ideas from all constituents. They prefer feedback that challenges their views, instead of reinforcing them. That enables you to reach the best answer as an organization.

"If you're doing things right, something will always be a little bit broken." – Unknown

Idea #6: D-Teams

I believe legendary product development teams need D-teams in the organization. The D stands for Disruption. The role of the D-teams is to disrupt from within. D-teams assess what the organization is working on, identify opportunities, rapidly assemble a team, and disrupt. This type of competitive fire makes the whole team better.

Idea #7: Resources

One of the most common refrains in every organization today is, "We don't have enough resources." Or, "We know what to do, but don't have the time or money." This is a choice, not an issue. If something does not have the right resourcing, it is because the organization is choosing that. If you are asking for resources and not getting them, it's because you have not prepared a convincing argument. Sometimes, this means you have to "Take the Horse off the Chart".

"Deciding what not to do is as important as deciding what to do." – Steve Jobs

Idea #8: Client Satisfaction

Quality is the taste you leave in a client's mouth. Most organizations underestimate the negative impact of poor quality on their business. It's underestimated because it's hard to quantify. Clients no longer have to buy inferior goods and services, since information and alternatives are so easy to obtain. It's that simple.

"What can a sales person say to somebody to get them to buy a product that they already use every day if they don't like it? Nothing." – Larry Ellison

Idea #9: Clients, Developers, and Users

Some product development organizations spend most of their time focused internally. Some take a reprieve from that and think about clients (which is great). But clients are only one of the three constituents that should drive thinking and behavior. Product development organizations will live and die by how they treat, communicate with, and interact with their constituents. They are:

- Clients
- Developers
- Users

They are all equally important. How do you make it easy for each of them to work with your products and with you? The organization should obsess over answering that question. With each new product idea, you must be able to articulate the "must have" experience and the target of that experience (clients, users, or developers) before debating how and why a product or feature would be useful. This requires a rigorous process for identifying the most passionate stakeholders and getting their unstructured feedback.

Idea #10: At the Service of the Sales Team

If a product development team spends all their time in the field, they lose focus on developing outstanding products. On the other hand, a product development team cannot build outstanding products without an intimate understanding of clients, developers, and users. This is the paradox that every product development team faces.
It is incumbent upon each team to figure out how to balance this, with a priority placed on being at the service of sales and constituents.

"The key is not spending time, but in investing it." – Stephen Covey

Idea #11: Innovation on the Edge

You cannot be a leader in innovation without dedicating resources to explore and try things that, by definition, are likely to fail. In strategy speak, this would be a Horizon 3 project. There are many other areas to explore. Identifying the important waves to ride is important. It's equally important to actually ride the wave (i.e. execute on it).

"If you only do things where you know the answer in advance, your company goes away." – Jeff Bezos

Idea #12: Product Releases

Per Benedict Evans, there is a distinct pattern in Apple's product releases and announcements. In almost every case, they are sure to have:

- Cool, incremental improvements, which cater to existing users
- 'Tent-pole' features, which become focus points for marketing campaigns
- Fundamental strategic moves that widen the moat around their competitive advantage

This is a very thoughtful approach to product releases. Every organization can learn something from it.

Leading in product development is much more about culture than it is about management and hierarchy. At times, management and hierarchy encumber product development teams. Sometimes the best way to understand how you need to change is by looking at companies or organizations on the other end of the spectrum. GitHub is one of those companies: GitHub has no managers, and the sole focus of the organizational design is on developer productivity. Steve Jobs once said, "you have to be run by ideas, not hierarchy." There is latent talent and creativity in every development organization. Being legendary is about finding a way to unleash that talent.

Reference: Legendary Product Development from our JCG partner Rob Thomas at the Rob's Blog blog.

Defend your Application with Hystrix

In a previous post, http://www.javacodegeeks.com/2014/07/rxjava-java8-java-ee-7-arquillian-bliss.html, we talked about microservices and how to orchestrate them using Reactive Extensions (RxJava). But what happens when one or many services fail because they have been halted or they throw an exception? In a distributed system like a microservices architecture, it is normal that a remote service may fail, so communication between services should be fault tolerant and manage the latency in network calls properly.

And this is exactly what Hystrix does. Hystrix is a latency and fault tolerance library designed to isolate points of access to remote systems, services and 3rd party libraries, stop cascading failure and enable resilience in complex distributed systems where failure is inevitable.

In a distributed architecture like microservices, one service may require other services as dependencies to accomplish its work. Every point in an application that reaches out over the network, or into a client library that can potentially result in network requests, is a source of failure. Worse than failures, these calls can also result in increased latency between services. And this leads us to another big problem: suppose you are developing a service on a Tomcat that will open two connections to two services. If one of these services takes more time than expected to send back a response, you will be spending one thread of the Tomcat pool (the one serving the current request) doing nothing but waiting for an answer. If you don't have a high-traffic site this may be acceptable, but with a considerable amount of traffic all resources may become saturated and block the whole server. A schema of this scenario is provided on the Hystrix wiki.

The way to avoid the previous problem is to add a thread layer which isolates each dependency from the others, so each dependency (service) may have a thread pool to execute that service. In Hystrix this layer is implemented by the HystrixCommand object, so each call to an external service is wrapped to be executed within a different thread. A schema of this scenario is also provided on the Hystrix wiki.

But Hystrix provides other features as well:

- Each thread has a timeout, so a call does not wait indefinitely for a response.
- Fallbacks can be performed wherever feasible to protect users from failure.
- Successes, failures (exceptions thrown by the client), timeouts and thread rejections are measured, which allows monitoring.
- A circuit-breaker pattern is implemented which, automatically or manually, stops all requests to an external service for a period of time if the error percentage passes a threshold.

So let's start with a very simple example:

    public class HelloWorldCommand extends HystrixCommand<String> {

        public HelloWorldCommand() {
            super(HystrixCommandGroupKey.Factory.asKey("HelloWorld"));
        }

        @Override
        protected String run() throws Exception {
            return "Hello World";
        }
    }

And then we can execute that command in a synchronous way by using the execute method:

    new HelloWorldCommand().execute();

Although this command is synchronous, it is executed in a different thread. By default, Hystrix creates one thread pool per HystrixCommandGroupKey, shared by all commands defined under the same group key. In our example, Hystrix creates a thread pool linked to all commands grouped under the HelloWorld key. Then, for every execution, one thread is taken from the pool to execute the command. But of course we can also execute a command asynchronously (which perfectly fits the asynchronous JAX-RS 2.0 or Servlet 3.0 specifications).
To do it, simply run:

    Future<String> helloWorldResult = new HelloWorldCommand().queue();
    // some more work
    String message = helloWorldResult.get();

In fact, synchronous calls are implemented internally by Hystrix as new HelloWorldCommand().queue().get();.

We have seen that we can execute a command synchronously and asynchronously, but there is a third mode: reactive execution using RxJava (you can read more about RxJava in my previous post http://www.javacodegeeks.com/2014/07/rxjava-java8-java-ee-7-arquillian-bliss.html). To do it, you simply need to call the observe method:

    Observable<String> obs = new HelloWorldCommand().observe();
    obs.subscribe((v) -> {
        System.out.println("onNext: " + v);
    });

But sometimes things can go wrong and the execution of a command may throw an exception. All exceptions thrown from the run() method, except for HystrixBadRequestException, count as failures and trigger getFallback() and the circuit-breaker logic (more on the circuit breaker below). Any business exception that you don't want to count as a service failure (for example, illegal arguments) must be wrapped in a HystrixBadRequestException.

But what happens with service failures? What can Hystrix do for us? In summary, Hystrix offers two things:

- A method to do something in case of a service failure. This method may return an empty, default or stubbed value, or it can, for example, invoke another service that accomplishes the same logic as the failing one.
- Some kind of logic to open and close the circuit automatically.

Fallback

The method that is called when an exception occurs (except for HystrixBadRequestException) is getFallback(). You can override this method and provide your own implementation:

    public class HelloWorldCommand extends HystrixCommand<String> {

        public HelloWorldCommand() {
            super(HystrixCommandGroupKey.Factory.asKey("HelloWorld"));
        }

        @Override
        protected String getFallback() {
            return "Good Bye";
        }

        @Override
        protected String run() throws Exception {
            //return "Hello World";
            throw new IllegalArgumentException();
        }
    }

Circuit breaker

The circuit breaker is a software pattern to detect failures and avoid receiving the same error constantly. And if the service is remote, an error can be thrown without waiting for the TCP connection timeout. Consider this typical example: a system needs to access a database around 100 times per second, and the database fails. The same error will be thrown 100 times per second, and because the connection to the remote database implies a TCP connection, each client will wait until the TCP timeout expires. So it would be much more useful if the system could detect that a service is failing and avoid further client requests for some period of time. And this is what the circuit breaker does. For each execution it checks whether the circuit is open (tripped), which means that an error has occurred; in that case the request is not sent to the service and the fallback logic is executed instead. If the circuit is closed, the request is processed and may succeed.

Hystrix maintains statistics on the number of successful versus failed requests. When Hystrix detects that, within a defined span of time, a threshold of failed commands has been reached, it opens the circuit, so future requests return the error as soon as possible without consuming resources on a service which is probably offline. But the good news is that Hystrix is also responsible for closing the circuit.
After an elapsed time, Hystrix lets a single incoming request through again; if this request is successful it closes the circuit, and if not it keeps the circuit open. In the diagram on the Hystrix website you can see the interaction between Hystrix and the circuit.

Now that we have seen the basics of Hystrix, let's see how to write tests that check whether Hystrix works as expected.

One last thing before the test: in Hystrix there is a special class called HystrixRequestContext. This class contains the state and manages the lifecycle of a request. You need to initialize this class if, for example, you want Hystrix to manage caching of results, or for logging purposes. Typically this class is initialized just before the business logic starts (for example, in a servlet filter) and finished after the request is processed.

Let's use the previous HelloWorldCommand to validate that the fallback method is called when the circuit is open:

    public class HelloWorldCommand extends HystrixCommand<String> {

        public HelloWorldCommand() {
            super(HystrixCommandGroupKey.Factory.asKey("HelloWorld"));
        }

        @Override
        protected String getFallback() {
            return "Good Bye";
        }

        @Override
        protected String run() throws Exception {
            return "Hello World";
        }
    }

And the test. Keep in mind that I have added a lot of asserts in the test for academic purposes:

    @Test
    public void should_execute_fallback_method_when_circuit_is_open() {

        // Initialize HystrixRequestContext to be able to gather some metrics
        HystrixRequestContext context = HystrixRequestContext.initializeContext();

        HystrixCommandMetrics metrics = HystrixCommandMetrics.getInstance(
                HystrixCommandKey.Factory.asKey(HelloWorldCommand.class.getSimpleName()));

        // We use Archaius to force the circuit closed
        ConfigurationManager.getConfigInstance()
                .setProperty("hystrix.command.default.circuitBreaker.forceOpen", false);

        String successMessage = new HelloWorldCommand().execute();
        assertThat(successMessage, is("Hello World"));

        // We use Archaius to force the circuit open
        ConfigurationManager.getConfigInstance()
                .setProperty("hystrix.command.default.circuitBreaker.forceOpen", true);

        String failMessage = new HelloWorldCommand().execute();
        assertThat(failMessage, is("Good Bye"));

        // Prints: Request => HelloWorldCommand[SUCCESS][19ms], HelloWorldCommand[SHORT_CIRCUITED, FALLBACK_SUCCESS][0ms]
        System.out.println("Request => " + HystrixRequestLog.getCurrentRequest().getExecutedCommandsAsString());

        assertThat(metrics.getHealthCounts().getTotalRequests(), is(2));
        assertThat(metrics.getHealthCounts().getErrorCount(), is(1));
    }

This is a very simple example, because the execute method and the fallback method are pretty simple. But if you consider that the execute method may contain complex logic, and that the fallback method can be complex as well (for example, retrieving data from another server, generating some kind of stubbed data, ...), then writing integration or functional tests that validate this whole flow starts to make sense. Keep in mind that sometimes your fallback logic may depend on previous calls from the current user or other users.

Hystrix also offers other features, like caching results, so that any command already executed within the same HystrixRequestContext may return a cached result (https://github.com/Netflix/Hystrix/wiki/How-To-Use#Caching). Another feature it offers is collapsing: it enables automated batching of requests into a single HystrixCommand instance execution, using batch size and time as the triggers for executing a batch.
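As a minimal sketch of the request caching feature mentioned above (my own illustration, not code from the article), a command opts into caching by overriding getCacheKey():

    import com.netflix.hystrix.HystrixCommand;
    import com.netflix.hystrix.HystrixCommandGroupKey;

    public class CachedHelloCommand extends HystrixCommand<String> {

        private final String name;

        public CachedHelloCommand(String name) {
            super(HystrixCommandGroupKey.Factory.asKey("HelloWorld"));
            this.name = name;
        }

        @Override
        protected String run() throws Exception {
            return "Hello " + name;
        }

        // Executions with the same cache key, inside the same
        // HystrixRequestContext, return the cached result instead of
        // running run() again.
        @Override
        protected String getCacheKey() {
            return name;
        }
    }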
As you can see, Hystrix is a really simple yet powerful library that you should take into consideration if your applications call external services.

We keep learning,
Alex.

Sing us a song, you're the piano man, Sing us a song tonight, Well, we're all in the mood for a melody, And you've got us feelin' alright (Piano Man – Billy Joel)

Music: https://www.youtube.com/watch?v=gxEPV4kolz0

Reference: Defend your Application with Hystrix from our JCG partner Alex Soto at the One Jar To Rule Them All blog....

Developing a top-down Web Service project

This is a sample chapter taken from the Advanced JAX-WS Web Services book edited by Alessio Soldano.

The bottom-up approach for creating a Web Service endpoint has been introduced in the first chapter. It allows exposing existing beans as Web Service endpoints very quickly: in most cases, turning the classes into endpoints is a matter of simply adding a few annotations to the code. However, when developing a service with an already defined contract, it is far simpler (and more effective) to use the top-down approach, since a wsdl-to-java tool can generate the annotated code matching the WSDL. This is the preferred solution in multiple scenarios, such as the following ones:

Creating a service that adheres to the XML Schema and WSDL that have been developed by hand up front;
Exposing a service that conforms to a contract specified by a third party (e.g. a vendor that calls the service using an already defined set of messages);
Replacing the implementation of an existing Web Service while keeping compatibility with older clients (the contract must not change).

In the next sections, an example of top-down Web Service endpoint development is provided, as well as some details on constraints the developer has to be aware of when coding, regardless of the chosen approach.

Creating a Web Service using the top-down approach

In order to set up a full project which includes a Web Service endpoint and a JAX-WS client we will use two Maven projects. The first one will be a standard webapp-javaee7 project, which will contain the Web Service endpoint. The second one will be just a quickstart Maven project that will execute a test case against the Web Service. Let's start by creating the server project as usual with:

mvn -DarchetypeGroupId=org.codehaus.mojo.archetypes -DarchetypeArtifactId=webapp-javaee7 -DarchetypeVersion=0.4-SNAPSHOT -DarchetypeRepository=https://nexus.codehaus.org/content/repositories/snapshots -DgroupId=com.itbuzzpress.chapter2.wsdemo -DartifactId=ws-demo2 -Dversion=1.0 -Dpackage=com.itbuzzpress.chapter2.wsdemo -Darchetype.interactive=false --batch-mode --update-snapshots archetype:generate

The next step is creating the Web Service interface and stubs from a WSDL contract. The steps are similar to those for building up a client for the same contract. The only difference is that the wsconsume script will output the generated source files into our Maven project:

$ wsconsume.bat -k CustomerService.wsdl -o ws-demo-wsdl\src\main\java

In addition to the generated classes, which we have discussed at the beginning of the chapter, we need to provide a Service Endpoint Implementation that contains the Web Service functionality:

@WebService(endpointInterface="org.jboss.test.ws.jaxws.samples.webresult.Customer")
public class CustomerImpl implements Customer {

    public CustomerRecord locateCustomer(String firstName, String lastName, USAddress address) {
        CustomerRecord cr = new CustomerRecord();
        cr.setFirstName(firstName);
        cr.setLastName(lastName);
        return cr;
    }
}

The endpoint implementation class implements the endpoint interface and references it through the @WebService annotation. Our Web Service class does nothing fancy, it just creates a CustomerRecord object using the parameters received as input. In a real-world example you would retrieve the CustomerRecord through the persistence layer, for instance.
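For orientation, the endpoint interface generated by wsconsume would look roughly like the following. This is an assumption of the generated output, not a verbatim listing: the operation, parameter and result names mirror the WSDL contract, and the targetNamespace shown here is a placeholder.

import javax.jws.*;

@WebService(name = "Customer", targetNamespace = "http://org.jboss.ws/samples")
public interface Customer {

    //One method per WSDL operation; the annotations preserve the wire names.
    @WebMethod
    @WebResult(name = "CustomerRecord")
    CustomerRecord locateCustomer(
            @WebParam(name = "FirstName") String firstName,
            @WebParam(name = "LastName") String lastName,
            @WebParam(name = "Address") USAddress address);
}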
Once the implementation class has been included in the project, the project needs to be packaged and deployed to the target container, which will expose the service endpoint with the same contract that was consumed by the tool. It is also possible to reference a local WSDL file in the wsdlLocation attribute of the @WebService annotation in the Service Interface and include the file in the deployment. That would make the exact provided document be published.

If you are deploying the Web Service to the WildFly application server, you can check from a management instrument like the Admin Console that the endpoint is now available: select the Runtime tab and click on the Web Services link in the Subsystems panel on the left.

Requirements of a JAX-WS endpoint

Regardless of the approach chosen for developing a JAX-WS endpoint, the actual implementation needs to satisfy some requirements:

The implementing class must be annotated with either the javax.jws.WebService or the javax.jws.WebServiceProvider annotation.
The implementing class may explicitly reference a service endpoint interface through the endpointInterface element of the @WebService annotation, but is not required to do so. If no endpointInterface is specified in @WebService, the service endpoint interface is implicitly defined for the implementing class.
The business methods of the implementing class must be public and must not be declared static or final.
The javax.jws.WebMethod annotation is to be used on business methods to be exposed to web service clients; if no method is annotated with @WebMethod, all business methods are exposed.
Business methods that are exposed to web service clients must have JAXB-compatible parameters and return types.
The implementing class must not be declared final and must not be abstract.
The implementing class must have a default public constructor and must not define the finalize method.
The implementing class may use the javax.annotation.PostConstruct or the javax.annotation.PreDestroy annotations on its methods for lifecycle event callbacks.

Requirements for building and running a JAX-WS client

A JAX-WS client can be part of any Java project and is not explicitly required to be part of a JAR/WAR archive deployed on a Java EE container. For instance, the client might simply be contained in a quickstart Maven project as follows:

mvn archetype:generate -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-quickstart -DgroupId=com.itbuzzpress.chapter2.wsdemo -DartifactId=client-demo-wsdl -Dversion=1.0 -Dpackage=com.itbuzzpress.chapter2.wsdemo -Darchetype.interactive=false --batch-mode

As your client needs to reference the endpoint interface and stubs, you need to provide them, either copying them from the server project or generating them again using wsconsume:

$ wsconsume.bat -k CustomerService.wsdl -o client-demo-wsdl\src\main\java

Now include a minimal client test application, which is part of a JUnit test case:

public class AppTest extends TestCase {

    public void testApp() {
        CustomerService service = new CustomerService();
        Customer port = service.getCustomerPort();

        CustomerRecord record = port.locateCustomer("John", "Li", new USAddress());
        System.out.println("Customer record is " + record);
        assertNotNull(record);
    }
}

Compiling and running the test

In order to successfully run a WS client application, the classloader needs to be properly set up to include the desired JAX-WS implementation libraries (and the required transitive dependencies, if any).
Depending on the environment the client is meant to run in, this might imply adding some jars to the classpath, adding some artifact dependencies to the Maven dependency tree, setting up the IDE properly, etc. Since Maven is used to build the application containing the client, you can configure your pom.xml as follows so that it includes a dependency on JBossWS:

<dependency>
  <groupId>org.jboss.ws.cxf</groupId>
  <artifactId>jbossws-cxf-client</artifactId>
  <version>4.2.3.Final</version>
  <scope>provided</scope>
</dependency>

Now you can execute the test case, which will call the JAX-WS API to serve the client invocation using JBossWS:

mvn clean package test

Focus on the JAX-WS implementation used by the client

The JAX-WS implementation to be used for running a JAX-WS client is selected at runtime by looking for META-INF/services/javax.xml.ws.spi.Provider resources through the application classloader. Each JAX-WS implementation has a library (jar) including that resource file, which internally references the proper class implementing the JAX-WS SPI Provider. On the WildFly 8.0.0.Final application server, the JAX-WS implementation is declared in the META-INF/services/javax.xml.ws.spi.Provider resource of the jbossws-cxf-factories-4.2.3.Final jar:

org.jboss.wsf.stack.cxf.client.ProviderImpl

Therefore, it is extremely important to control which artifacts or jar libraries are included in the classpath the application classloader is constructed from. If multiple implementations are found, order matters: the first implementation in the classpath will be used. The safest way to avoid any classpath issue (and thus loading another JAX-WS implementation) is to set the java.endorsed.dirs system property to include the jbossws-cxf-factories.jar; if you don't do that, make sure you don't include, ahead in your classpath, other META-INF/services/javax.xml.ws.spi.Provider resources which would trigger another JAX-WS implementation. Finally, if the JAX-WS client is meant to run on WildFly as part of a Java EE application, the JBossWS JAX-WS implementation will be automatically selected for serving the client.

This excerpt has been taken from the "Advanced JAX-WS Web Services" book, in which you'll learn the concepts of SOAP based Web services architecture and get practical advice on building and deploying Web services in the enterprise. Starting from the basics and the best practices for setting up a development environment, this book enters into the inner details of JAX-WS in a clear and concise way. You will also learn about the major toolkits available for creating, compiling and testing SOAP Web services and how to address common issues such as debugging data and securing its content.

What you will learn from this book:

Take your first steps with SOAP Web services.
Installing the tools required for developing and testing applications.
Developing Web services using the top-down and bottom-up approaches.
Using Maven archetypes to speed up Web services creation.
Getting into the details of JAX-WS types: Java to XML mapping and XML to Java.
Developing SOAP Web services on WildFly 8 and Tomcat.
Running native Apache CXF on WildFly.
Securing Web services.
Applying authentication policies to your services.
Encrypting the communication....

Some more unit test tips

In my previous post I showed some tips on unit testing JavaBeans. In this blog entry I will give two more tips on unit testing some fairly common Java code, namely utility classes and Log4J logging statements.

Testing utility classes

If your utility classes follow the same basic design as the ones I tend to write, they consist of a final class with a private constructor and all static methods.

Utility class tester

package it.jdev.example;

import static org.junit.Assert.*;

import java.lang.reflect.*;

import org.junit.Test;

/**
 * Tests that a utility class is final, contains one private constructor, and
 * all methods are static.
 */
public final class UtilityClassTester {

    private UtilityClassTester() {
        super();
    }

    /**
     * Verifies that a utility class is well defined. Note that this method is
     * meant to be invoked directly from other test classes, not picked up by
     * the JUnit runner.
     *
     * @param clazz
     * @throws Exception
     */
    @Test
    public static void test(final Class<?> clazz) throws Exception {
        // Utility classes must be final.
        assertTrue("Class must be final.", Modifier.isFinal(clazz.getModifiers()));

        // Only one constructor is allowed and it has to be private.
        assertTrue("Only one constructor is allowed.", clazz.getDeclaredConstructors().length == 1);
        final Constructor<?> constructor = clazz.getDeclaredConstructor();
        assertFalse("Constructor must be private.", constructor.isAccessible());
        assertTrue("Constructor must be private.", Modifier.isPrivate(constructor.getModifiers()));

        // All methods must be static.
        for (final Method method : clazz.getMethods()) {
            if (!Modifier.isStatic(method.getModifiers()) && method.getDeclaringClass().equals(clazz)) {
                fail("Non-static method found: " + method + ".");
            }
        }
    }
}

This UtilityClassTester itself also follows the utility class constraints noted above, so what better way to demonstrate its use than by using it to test itself:

Test case for the UtilityClassTester

package it.jdev.example;

import org.junit.Test;

public class UtilityClassTesterTest {

    @Test
    public void test() throws Exception {
        UtilityClassTester.test(UtilityClassTester.class);
    }
}

Testing Log4J logging events

When calling a method that declares an exception, you'll either re-declare that same exception or deal with it within a try-catch block. In the latter case, the very least you will do is log the caught exception. A very simplistic example is the following:

MyService example

package it.jdev.example;

import java.lang.invoke.MethodHandles;

import org.apache.log4j.Logger;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class MyService {

    // Names the logger after the enclosing class.
    private static final Logger LOGGER = Logger.getLogger(MethodHandles.lookup().lookupClass());

    @Autowired
    private MyRepository myRepository;

    public void doSomethingUseful() {
        try {
            myRepository.doSomethingVeryUseful();
        } catch (SomeException e) {
            LOGGER.error("Some very informative error logging.", e);
        }
    }
}

Of course, you will want to test that the exception is logged appropriately.
Something along the lines of the following:

Test case for MyService logging event

package it.jdev.example;

import static org.junit.Assert.*;

import org.apache.log4j.spi.LoggingEvent;
import org.junit.*;
import org.mockito.*;

public class MyServiceTest {

    @Mock
    private MyRepository myRepository;

    @InjectMocks
    private MyService myService = new MyService();

    @Before
    public void setup() {
        MockitoAnnotations.initMocks(this);
    }

    @Test
    public void thatSomeExceptionIsLogged() throws Exception {
        TestAppender testAppender = new TestAppender();

        Mockito.doThrow(SomeException.class).when(myRepository).doSomethingVeryUseful();
        myService.doSomethingUseful();

        assertTrue(testAppender.getEvents().size() == 1);
        final LoggingEvent loggingEvent = testAppender.getEvents().get(0);
        assertEquals("Some very informative error logging.", loggingEvent.getMessage().toString());
    }
}

But how can you go about achieving this? As it turns out, it is very easy to add a new appender to the Log4J root logger.

TestAppender for Log4J

package it.jdev.example;

import java.util.*;

import org.apache.log4j.*;
import org.apache.log4j.spi.*;

/**
 * Utility for testing Log4j logging events.
 * <p>
 * Usage:<br />
 * <code>
 * TestAppender testAppender = new TestAppender();<br />
 * classUnderTest.methodThatWillLog();<br /><br />
 * LoggingEvent loggingEvent = testAppender.getEvents().get(0);<br /><br />
 * assertEquals()...<br /><br />
 * </code>
 */
public class TestAppender extends AppenderSkeleton {

    private final List<LoggingEvent> events = new ArrayList<LoggingEvent>();

    public TestAppender() {
        this(Level.ERROR);
    }

    public TestAppender(final Level level) {
        super();
        // The appender registers itself with the root logger, so it receives
        // every event that passes the level filter added below.
        Logger.getRootLogger().addAppender(this);
        this.addFilter(new LogLevelFilter(level));
    }

    @Override
    protected void append(final LoggingEvent event) {
        events.add(event);
    }

    @Override
    public void close() {
    }

    @Override
    public boolean requiresLayout() {
        return false;
    }

    public List<LoggingEvent> getEvents() {
        return events;
    }

    /**
     * Filter that decides whether to accept or deny a logging event based on
     * the logging level.
     */
    protected class LogLevelFilter extends Filter {

        private final Level level;

        public LogLevelFilter(final Level level) {
            super();
            this.level = level;
        }

        @Override
        public int decide(final LoggingEvent event) {
            if (event.getLevel().isGreaterOrEqual(level)) {
                return ACCEPT;
            } else {
                return DENY;
            }
        }
    }
}

Reference: Some more unit test tips from our JCG partner Wim van Haaren at the JDev blog....