What's New Here?


“Java Sucks” revisited

Overview

An interesting document on Java's shortcomings (from a C developer's perspective) was written some time ago (around 2000?), but many of the issues it raises are as true (or not) today as they were ten years ago. The original "Java Sucks" posting. A review of the shortcomings follows.

"Java doesn't have free()." The author lists this as a benefit, and 99% of the time it is a win. There are times when not having it is a downside: when you wish escape analysis would eliminate, recycle or immediately free an object you know isn't needed any more (IMHO the JIT / javac should be able to work this out in theory).

"lexically scoped local functions" The closest Java has is anonymous inner classes. These are a poor cousin to closures (coming in Java 8), but they can be made to do the same thing.

"No macro system" Many of the useful tricks you can do with macros, Java can do for you dynamically. Not needing a macro system is an asset because you don't need to know when Java will give you the same optimisations. There is an application start-up cost that macros don't have, and you can't do the really obfuscated stuff, but this is probably a good thing.

"Explicitly inlined functions" The JIT can inline methods for you. Java can inline methods from shared libraries, even if they are updated dynamically. This does come at a run-time cost, but it's nicer not to need to worry about this, IMHO.

"I find lack of function pointers a huge pain" Function pointers make inlining methods more difficult for the compiler. If you are using object-oriented programming, I don't believe you need them. For other situations, I believe closures in Java 8 are likely to be nicer.

"The fact that static methods aren't really class methods is pretty dumb" I imagine most Java developers have come across this problem at some stage. IMHO, the nicest solution is to move the "static" functionality to its own class and not use static methods if you want polymorphism.
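To make the "local functions" point above concrete, here is a small sketch of my own (the Comparator example is mine, not the original author's): the pre-Java-8 anonymous inner class workaround next to the Java 8 lambda that replaces it.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Comparator;

public class ClosureSketch {
    // Pre-Java-8: an anonymous inner class standing in for a local function.
    static Comparator<String> byLengthAnonymous() {
        return new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return Integer.compare(a.length(), b.length());
            }
        };
    }

    // Java 8: the same logic as a lambda, with far less ceremony.
    static Comparator<String> byLengthLambda() {
        return (a, b) -> Integer.compare(a.length(), b.length());
    }

    public static void main(String[] args) {
        List<String> words = Arrays.asList("ccc", "a", "bb");
        words.sort(byLengthAnonymous());
        System.out.println(words); // [a, bb, ccc]
        words.sort(byLengthLambda());
        System.out.println(words); // [a, bb, ccc]
    }
}
```

Both versions do the same job; the anonymous class simply carries more boilerplate, which is exactly the "poor cousin" complaint.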
"It's far from obvious how one hints that a method should be inlined, or otherwise go real fast" Make it small and call it lots of times. ;)

"Two identical byte[] arrays aren't equal and don't hash the same" I agree that it was a pretty ugly design choice not to make arrays proper objects. They inherit from Object, but don't have useful implementations of toString, equals, hashCode or compareTo; clone() and getClass() are the most useful methods. You can use helper methods instead, but with many different helper classes called Array, Arrays, ArrayUtil and ArrayUtils in different packages, it is all a mess for a new developer to deal with.

"Hashtable/HashMap doesn't allow you to provide a hashing function" This is also a pain if you want to change the behaviour. IMHO, the best solution is to write a wrapper class which implements equals/hashCode, but this adds overhead.

"iterate the characters in a String without implicitly involving half a dozen method calls per character" There is now String.toCharArray(), but this creates a copy you don't need, and that copy is not eliminated by escape analysis. When it is, this will be the obvious solution. The same applies to "The other alternative is to convert the String to a byte[] first, and iterate the bytes, at the cost of creating lots of random garbage".

"overhead added by Unicode support in those cases where I'm sure that there are no non-ASCII characters" Java 6 has a solution to this: -XX:+UseCompressedStrings. Unfortunately Java 7 has dropped support for this feature. I have no idea why, as this option improves performance (as well as reducing memory usage) in tests I have done.

"Interfaces seem a huge, cheesy copout for avoiding multiple inheritance; they really seem like they were grafted on as an afterthought." I prefer a contract which only lists the functionality offered, without adding implementation. The newer virtual extension methods in Java 8 will provide default implementations without state. In some cases this will be very useful.
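A quick demonstration of the byte[] equality point above (my own sketch, not from the original article): the inherited Object methods compare identity only, and the Arrays helper class is the required workaround.

```java
import java.util.Arrays;

public class ArrayEquality {
    public static void main(String[] args) {
        byte[] a = {1, 2, 3};
        byte[] b = {1, 2, 3};

        // Identical contents, yet Object.equals compares references only:
        System.out.println(a.equals(b)); // false
        // Identity hash codes are almost certainly different too (not guaranteed).

        // The helper class is the only way to get value semantics:
        System.out.println(Arrays.equals(a, b));                          // true
        System.out.println(Arrays.hashCode(a) == Arrays.hashCode(b));    // true
    }
}
```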
"There's something kind of screwy going on with type promotion" The problem here is solved by covariant return types, which Java 5.0+ now supports.

"You can't write a function which expects an Object and give it a short" Today you have auto-boxing. The author complains that Short and short are not the same thing. For efficiency purposes this can make surprisingly little difference in some cases with auto-boxing. In other cases it does make a big difference, and I don't foresee Java optimising this transparently in the near future. :|

"it's a total pain that one can't iterate over the contents of an array without knowing intimate details about its contents" It's rare that you really need to do this, IMHO. You can use Array.getLength(array) and Array.get(array, n) to handle a generic array. It's ugly, but you can do it. It's one of those helper classes whose methods should really be on the array itself, IMHO.

"The only way to handle overflow is to use BigInteger (and rewrite your code)" Languages like Scala support operators for BigInteger, and it has been suggested that Java should too. I believe overflow detection is also being considered for Java 8/9.

"I miss typedef" This allows you to use primitives and still get type safety. IMHO, the real issue is that the JIT cannot detect that a type is just a wrapper for a primitive (or two) and eliminate the need for the wrapper class. This would provide the benefits of typedef without changing the syntax, and make the code more object-oriented.

"I think the available idioms for simulating enum and :keywords are fairly lame" Java 5.0+ has enums, which are first-class objects and are surprisingly powerful.

"there's no efficient way to implement `assert'" assert is now built in. Implementing it yourself is made efficient by the JIT (though probably not ten years ago).

"By having `new' be the only possible interface to allocation, ... there are a whole class of ancient, well-known optimizations that one just cannot perform." This should be performed by the JIT, IMHO.
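The generic-array workaround mentioned above can be sketched like this (my own example, using java.lang.reflect.Array): one method that walks any numeric array without knowing its component type at compile time.

```java
import java.lang.reflect.Array;

public class GenericArrayWalk {
    // Sums any numeric array: Array.get boxes each element, so a Number cast works
    // for int[], long[], double[], etc.
    static double sum(Object array) {
        int length = Array.getLength(array);
        double total = 0;
        for (int i = 0; i < length; i++) {
            total += ((Number) Array.get(array, i)).doubleValue();
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[] {1, 2, 3}));     // 6.0
        System.out.println(sum(new double[] {0.5, 1.5})); // 2.0
    }
}
```

Ugly, as the author says, but it does work, and it is the only way to treat arrays of different primitive types uniformly.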
Unfortunately, it rarely does, but this is improving.

"The finalization system is lame." Most people agree it is best avoided. Perhaps it could be more powerful and reliable. ARM (Automatic Resource Management) may be the answer.

"Relatedly, there are no 'weak pointers.'" Java has always had weak, soft and phantom references, but I suspect this is not what is meant here. ??

"You can't close over anything but final variables in an inner class!" This is true of anonymous inner classes, but not of nested inner classes referring to fields. Closures might not have this restriction, but that is likely to be just as confusing. Being used to the requirement for final variables, I don't find this a problem, especially as my IDE will correct the code as required for me.

"The access model with respect to the mutability (or read-only-ness) of objects blows" The main complaint appears to be that there are ways of treating final fields as mutable. This is required for de-serialization and dependency injectors. As long as you realise that there are two possible behaviours, one lower-level than the other, it is far more useful than it is a problem.

"The language also should impose the contract that literal constants are immutable." Literal constants are immutable. It appears the author would like to expand what is considered a literal constant. It would be useful, IMHO, to support const in the way C++ does. const is a reserved keyword in Java, and the ability to define immutable versions of classes without creating multiple implementations or read-only wrappers would be more productive.

"The locking model is broken." The memory overhead of locking is really an implementation detail. It is up to the JVM to decide how large the object header is and whether the object can be locked. The other concern is that there is no control over who can obtain a lock. The common workaround for this is to encapsulate your lock, which is what you would have to do in any case. In theory the lock can be optimised away.
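For completeness, a tiny sketch (mine, not the original author's) of the weak references Java has always had: a WeakReference lets the collector reclaim the referent once no strong references remain.

```java
import java.lang.ref.WeakReference;

public class WeakRefSketch {
    public static void main(String[] args) {
        Object strong = new Object();
        WeakReference<Object> weak = new WeakReference<>(strong);

        // While strongly reachable, the referent is still there:
        System.out.println(weak.get() != null); // true

        strong = null; // drop the only strong reference
        System.gc();   // only a hint; collection is not guaranteed

        // After a collection the referent may have been cleared,
        // so weak.get() is likely (but not guaranteed) to be null here.
        System.out.println(weak.get());
    }
}
```

SoftReference and PhantomReference follow the same pattern with different reclamation policies, which is why the original complaint about missing "weak pointers" is puzzling.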
Currently this only happens when the whole object is optimised away.

"There is no way to signal without throwing" For this, I use a listener pattern with an onError method. There is no support in the language for this, but I don't see the need for it.

"Doing foo.x should be defined to be equivalent to foo.x()" Perhaps foo.x => foo.getX() would be a better choice, rather like C# does.

"Compilers should be trivially able to inline zero-argument accessor methods to be inline object+offset loads." The JIT does this, rather than the compiler. This allows the calling code to be changed after the callee has been compiled.

"The notion of methods 'belonging' to classes is lame." This is a "cool" feature which some languages support. In a more dynamic environment, this can look nicer. The downside is that you can have pieces of code for a class scattered all over the place, and you would have to have some way of managing duplicates in different libraries. E.g. library A defines a new printString() method and library B also defines a printString() method for the same class. You would need to make each library see its own copy, and have some way of determining which version library C would want when it calls this method.

Libraries

"It comes with hash tables, but not qsort" It comes with an "optimised merge sort" which is designed to be faster.

"String has length+24 bytes of overhead over byte[]" That is without considering that each of the two objects is aligned to an 8-byte boundary (making it higher). If that sounds bad, consider that malloc can be 16-byte aligned with a minimum size of 32 bytes. If you use a shared_ptr to a byte[] (to give you similar resource management), it can be much larger in C++ than in Java.

"The only reason for this overhead is so that String.substring() can return strings which share the same value array." This is not correct. The problem is that Java doesn't support variable-sized objects (apart from arrays).
This means that a String object is a fixed size, and to have a variable-sized field, you have to have another object. It's not great either way. ;)

"String.substring can be a source of 'memory leaks'" You have to know to take an explicit copy if you are going to retain a substring of a larger string. This is ugly; however, the benefits usually outweigh the downside. A better solution would be to optimise the code so that a defensive copy is taken by default, except when the defensive copy is not needed (in which case it is optimised away).

"The file manipulation primitives are inadequate" The file system information has been improved in Java 7. I don't think all of these options are available, but they can be easily inferred if you need to know them.

"There is no robust way to ask 'am I running on Windows' or 'am I running on Unix.'" There are the System properties os.name, os.arch and os.version, which have always been there.

"There is no way to access link() on Unix, which is the only reliable way to implement file locking." This was added in Java 7: Creating a Hard Link.

"There is no way to do ftruncate(), except by copying and renaming the whole file." You can use RandomAccessFile.setLength(), or FileChannel.truncate(), added in Java 1.4.

"Is '%10s %03d' really too much to ask?" It was added in Java 5.0.

"A RandomAccessFile cannot be used as a FileInputStream or FileOutputStream" RandomAccessFile supports DataInput and DataOutput; FileInputStream and FileOutputStream can be wrapped in DataInputStream and DataOutputStream, so they can be made to support the same interfaces. I have never come across a situation where I would want to use both classes in a single method.

"markSupported is stupid" True. There are a number of stupid methods which are only there for historical purposes. Another is Object.wait(millis, nanos) on every object (even arrays), where the nanos is never really used.

"What in the world is the difference between System and Runtime?" I agree it appears arbitrary and in some cases doubled up.
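A small sketch (my own example) of the explicit-copy idiom for the substring point above: before Java 7u6, substring() shared the parent's backing char[], so a retained small substring could pin a huge array in memory.

```java
public class SubstringCopy {
    // Pre-7u6, the result of substring() shared the parent's backing char[],
    // keeping the whole array alive for the life of the small string.
    static String keyFrom(String hugeRecord) {
        String key = hugeRecord.substring(0, 8);
        // The explicit copy detaches the result from the large backing array:
        return new String(key);
    }

    public static void main(String[] args) {
        // A short key followed by a large payload we don't want to retain:
        String record = "ORDER-42" + new String(new char[10000]);
        System.out.println(keyFrom(record)); // ORDER-42
    }
}
```

The new String(...) looks redundant, which is exactly why this idiom is easy to forget, and why the author calls the behaviour a source of "memory leaks".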
System.gc() actually calls Runtime.getRuntime().gc(), and yet is called "System GC" even in internal code. In hindsight they should really be one class, with the monitoring functionality moved to JMX.

"What in the world is application-level crap like checkPrintJobAccess() doing in the base language class library?" So your SecurityManager can control whether you can perform printing (without having to have an application-level security manager as well). I'm not sure this really prevents the need to have application-level security. ;)

Reference: "Java Sucks" revisited from our JCG partner Peter Lawrey at the Vanilla Java blog....

Best Of The Week – 2012 – W04

Hello guys, Time for the “Best Of The Week” links for the week that just passed. Here are some links that drew Java Code Geeks attention: * Java Anti-Patterns: A comprehensive list of Java programming anti-patterns. Also check out our Java Best Practices series while you are at it. * Time Management: 6 Ways to Improve Your Productivity: A nice article providing suggestions on how to improve one’s productivity, including elimination of distractions, being prepared for bonus time, knowing when you are done with a task, etc. * Solving OutOfMemoryError (part 5) – JDK Tools: This article discusses the tools bundled with the JDK that can help us troubleshoot OutOfMemoryError problems in production machines. It provides examples for the jps, jmap and jhat command line tools. Also check out Monitoring OpenJDK from the CLI and Profile your applications with Java VisualVM. * JSON Parsing in android: Short tutorial on how to perform JSON parsing in Android using the native SDK. Also check out Android JSON Parsing with Gson Tutorial for a more efficient and robust way. * 25 Best Free Eclipse Plug-ins for Java Developer to be Productive: A comprehensive list of the best free Eclipse plug-ins that can boost your productivity, including hits like FindBugs, Checkstyle, PMD, M2eclipse, Subclipse, EGit, Spring Tool Suite, JbossTools and others. Also check out Eclipse Shortcuts for Increased Productivity. * Submitting Your Application to the Android Market: A full-blown, step by step guide on how to submit an Android application to the Android market. Check out our “Android Full Application Tutorial” series in order to find out how to build one. * Coding for success: Amazing article discussing technical education and explaining why tomorrow’s children should be educated in writing code. Some lines from the article: “Learning to code is learning to use logic and reason”, “Code is simply the tool for automating the boring stuff”. 
Awesome… * Automated Acceptance-Testing using Concordion: This article discusses Concordion and the interesting approach it takes on automated acceptance testing. A simple example of how to use it is also provided. Check out 7 mistakes of software testing on the overall subject of testing. * Load Balancing With Apache Tomcat: Simple and straightforward tutorial on how to implement load balancing with Tomcat using Apache web server and mod_jk. Also see Multiple Tomcat Instances on Single Machine. * Java development 2.0: Securing Java application data for cloud computing: This tutorial shows how to use private-key encryption and the Advanced Encryption Standard to secure sensitive application data for the cloud. Additionally, an encryption strategy is provided, which is important for maximizing the efficiency of conditional searches on distributed cloud datastores. Also check out Developing and Testing in the Cloud. That’s all for this week. Stay tuned for more, here at Java Code Geeks. Cheers, Ilias Tsagklis...

Blind spot of software development methodologies

There is a trend of rise and fall of different software development methodologies. There is also a lot of discussion and excitement about which is better, Agile or Waterfall or whatever, and what Scrum really is. My impression is that there is a trend of adopting processes and practices with the expectation that they will always bring better results and fewer problems, which is not necessarily true, nor feasible. Although I can see that some methodologies can have a certain advantage over others when applied to a concrete software project + team + company, there is something missing. There are parts of software development which can also affect the success of a project, a team or a company, but are not a methodology matter! I would like to think aloud about these simple things which are somehow underestimated, and yet are still very important: Plain competence You cannot have enough of this! Is it possible that you are oversteering your projects because your team is not competent enough? Just think about this: when was the last time anybody from your team picked up a technical book related to your project? Having a competent team will result in team members going for it, instead of looking for excuses. Common sense team workflow Does it make sense that the whole team attends a meeting where most of the time a couple of people have a discussion about how to implement something? Saying it is a Scrum thing will not make it better; it is still a waste of time. I’m not saying that meetings are always bad; my point is that you should think about whether it works for your team. My suggestion is to let the team decide on the workflow as much as possible, and have them included. Also, having a process of “their own” can have benefits for team morale. Every team is unique My experience is that putting a group of people together as a team will always produce results and processes which are unique to this team. 
If you force some sort of process onto them, sometimes you will get partial results, because the team tends to work exactly the same as before, with the additional overhead of being “compatible” with the given process. Even if there is a benefit, there is inertia against accepting something “just because”. A team should have the freedom to measure and adopt the practices which are working for them, and reject the ones which don’t. As a conclusion I would ask: what other things in the software development process do you think are important? What experiences from other teams can be applied to your team, and what certainly cannot, because you are too different? Reference: Blind spot of software development methodologies from our JCG partner Nenad Sabo at the Software thoughts blog....

Storing hierarchical data in MongoDB

Continuing the NoSQL journey with MongoDB, I would like to touch on one specific use case which comes up very often: storing hierarchical document relations. MongoDB is an awesome document data store, but what if documents have parent-child relationships? Can we effectively store and query such document hierarchies? The answer, for sure, is yes, we can. MongoDB has several recommendations on how to store Trees in MongoDB. The one described there, and quite widely used, is the materialized path. Let me explain how it works with some very simple examples. As in previous posts, we will build a Spring application using the recently released version 1.0 of the Spring Data MongoDB project. Our POM file contains very basic dependencies, nothing more.

    <project xmlns="http://maven.apache.org/POM/4.0.0"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>
        <artifactId>mongodb</artifactId>
        <groupId>com.example.spring</groupId>
        <version>0.0.1-SNAPSHOT</version>
        <packaging>jar</packaging>

        <properties>
            <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
            <spring.version>3.0.7.RELEASE</spring.version>
        </properties>

        <dependencies>
            <dependency>
                <groupId>org.springframework.data</groupId>
                <artifactId>spring-data-mongodb</artifactId>
                <version>1.0.0.RELEASE</version>
                <exclusions>
                    <exclusion>
                        <groupId>org.springframework</groupId>
                        <artifactId>spring-beans</artifactId>
                    </exclusion>
                    <exclusion>
                        <groupId>org.springframework</groupId>
                        <artifactId>spring-expression</artifactId>
                    </exclusion>
                </exclusions>
            </dependency>
            <dependency>
                <groupId>cglib</groupId>
                <artifactId>cglib-nodep</artifactId>
                <version>2.2</version>
            </dependency>
            <dependency>
                <groupId>log4j</groupId>
                <artifactId>log4j</artifactId>
                <version>1.2.16</version>
            </dependency>
            <dependency>
                <groupId>org.mongodb</groupId>
                <artifactId>mongo-java-driver</artifactId>
                <version>2.7.2</version>
            </dependency>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-core</artifactId>
                <version>${spring.version}</version>
            </dependency>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-context</artifactId>
                <version>${spring.version}</version>
            </dependency>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-context-support</artifactId>
                <version>${spring.version}</version>
            </dependency>
        </dependencies>

        <build>
            <plugins>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-compiler-plugin</artifactId>
                    <version>2.3.2</version>
                    <configuration>
                        <source>1.6</source>
                        <target>1.6</target>
                    </configuration>
                </plugin>
            </plugins>
        </build>
    </project>

To properly configure the Spring context, I will use the configuration approach utilizing Java classes. I am more and more advocating this style, as it provides strongly typed configuration and most mistakes are caught at compilation time; no need to inspect your XML files anymore. 
Here is how it looks:

    package com.example.mongodb.hierarchical;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.data.mongodb.core.MongoFactoryBean;
    import org.springframework.data.mongodb.core.MongoTemplate;
    import org.springframework.data.mongodb.core.SimpleMongoDbFactory;

    @Configuration
    public class AppConfig {
        @Bean
        public MongoFactoryBean mongo() {
            final MongoFactoryBean factory = new MongoFactoryBean();
            factory.setHost( "localhost" );
            return factory;
        }

        @Bean
        public SimpleMongoDbFactory mongoDbFactory() throws Exception {
            return new SimpleMongoDbFactory( mongo().getObject(), "hierarchical" );
        }

        @Bean
        public MongoTemplate mongoTemplate() throws Exception {
            return new MongoTemplate( mongoDbFactory() );
        }

        @Bean
        public IDocumentHierarchyService documentHierarchyService() throws Exception {
            return new DocumentHierarchyService( mongoTemplate() );
        }
    }

That’s pretty nice and clear. Thanks, Spring guys! Now all the boilerplate stuff is ready. Let’s move to the interesting part: documents. Our database will contain a ‘documents’ collection which stores documents of type SimpleDocument. We describe this using Spring Data MongoDB annotations on the SimpleDocument POJO. 
    package com.example.mongodb.hierarchical;

    import java.util.Collection;
    import java.util.HashSet;

    import org.springframework.data.annotation.Id;
    import org.springframework.data.annotation.Transient;
    import org.springframework.data.mongodb.core.mapping.Document;
    import org.springframework.data.mongodb.core.mapping.Field;

    @Document( collection = "documents" )
    public class SimpleDocument {
        public static final String PATH_SEPARATOR = ".";

        @Id
        private String id;
        @Field
        private String name;
        @Field
        private String path;

        // We won't store this collection as part of the document but will build it on demand
        @Transient
        private Collection< SimpleDocument > documents = new HashSet< SimpleDocument >();

        public SimpleDocument() {
        }

        public SimpleDocument( final String id, final String name ) {
            this.id = id;
            this.name = name;
            this.path = id;
        }

        public SimpleDocument( final String id, final String name, final SimpleDocument parent ) {
            this( id, name );
            this.path = parent.getPath() + PATH_SEPARATOR + id;
        }

        public String getId() { return id; }
        public void setId( String id ) { this.id = id; }
        public String getName() { return name; }
        public void setName( String name ) { this.name = name; }
        public String getPath() { return path; }
        public void setPath( String path ) { this.path = path; }
        public Collection< SimpleDocument > getDocuments() { return documents; }
    }

Let me explain a few things here. First, the magic property path: this is the key to constructing and querying our hierarchy. The path contains the identifiers of all the document’s parents, usually divided by some kind of separator, in our case just . (dot). Storing document hierarchical relationships in this way allows us to quickly build the hierarchy, and to search and navigate it. Second, notice the transient documents collection: this non-persistent collection is constructed by the persistence provider and contains all descendant documents (which, in turn, also contain their own descendants). 
Let’s see it in action by looking at the find method implementation:

    package com.example.mongodb.hierarchical;

    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Map;

    import org.springframework.data.mongodb.core.MongoOperations;
    import org.springframework.data.mongodb.core.query.Criteria;
    import org.springframework.data.mongodb.core.query.Query;

    public class DocumentHierarchyService implements IDocumentHierarchyService {
        private final MongoOperations template;

        public DocumentHierarchyService( final MongoOperations template ) {
            this.template = template;
        }

        @Override
        public SimpleDocument find( final String id ) {
            final SimpleDocument document = template.findOne(
                Query.query( new Criteria( "id" ).is( id ) ), SimpleDocument.class );

            if( document == null ) {
                return document;
            }

            return build(
                document,
                template.find(
                    Query.query( new Criteria( "path" ).regex( "^" + id + "[.]" ) ),
                    SimpleDocument.class ) );
        }

        private SimpleDocument build( final SimpleDocument root, final Collection< SimpleDocument > documents ) {
            final Map< String, SimpleDocument > map = new HashMap< String, SimpleDocument >();

            for( final SimpleDocument document: documents ) {
                map.put( document.getPath(), document );
            }

            for( final SimpleDocument document: documents ) {
                final String path = document
                    .getPath()
                    .substring( 0, document.getPath().lastIndexOf( SimpleDocument.PATH_SEPARATOR ) );

                if( path.equals( root.getPath() ) ) {
                    root.getDocuments().add( document );
                } else {
                    final SimpleDocument parent = map.get( path );
                    if( parent != null ) {
                        parent.getDocuments().add( document );
                    }
                }
            }

            return root;
        }
    }

As you can see, to get a single document with its whole hierarchy we need to run just two queries (though a more optimal algorithm could reduce that to just a single query). 
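The descendant query above relies on an anchored regular expression over the materialized path. A small standalone sketch of the same matching logic (my own example, outside Spring and MongoDB) shows why the anchor and the escaped dot matter:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class MaterializedPathDemo {
    // The same pattern the service sends to MongoDB: "^1[.]" matches every
    // path starting with "1." -- i.e. every descendant of node 1, but not
    // node 1 itself and not unrelated nodes like "10.5".
    static List<String> descendantsOf(String id, List<String> paths) {
        Pattern pattern = Pattern.compile("^" + id + "[.]");
        List<String> result = new ArrayList<String>();
        for (String path : paths) {
            if (pattern.matcher(path).find()) {
                result.add(path);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> paths = Arrays.asList("1", "1.2", "1.2.3", "1.2.4", "1.7", "10.5");
        System.out.println(descendantsOf("1", paths)); // [1.2, 1.2.3, 1.2.4, 1.7]
    }
}
```

Note that the `[.]` character class is what keeps "10.5" out of node 1's subtree; a bare `.` would match any character and return false positives.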
Here is a sample hierarchy and the result of reading the root document from MongoDB:

    template.dropCollection( SimpleDocument.class );

    final SimpleDocument parent = new SimpleDocument( "1", "Parent 1" );
    final SimpleDocument child1 = new SimpleDocument( "2", "Child 1.1", parent );
    final SimpleDocument child11 = new SimpleDocument( "3", "Child 1.1.1", child1 );
    final SimpleDocument child12 = new SimpleDocument( "4", "Child 1.1.2", child1 );
    final SimpleDocument child121 = new SimpleDocument( "5", "Child", child12 );
    final SimpleDocument child13 = new SimpleDocument( "6", "Child 1.1.3", child1 );
    final SimpleDocument child2 = new SimpleDocument( "7", "Child 1.2", parent );

    template.insertAll( Arrays.asList( parent, child1, child11, child12, child121, child13, child2 ) );

    ...

    final ApplicationContext context = new AnnotationConfigApplicationContext( AppConfig.class );
    final IDocumentHierarchyService service = context.getBean( IDocumentHierarchyService.class );

    final SimpleDocument document = service.find( "1" );
    // Printing the document shows the following hierarchy:
    //
    // Parent 1
    // |-- Child 1.1
    //     |-- Child 1.1.1
    //     |-- Child 1.1.3
    //     |-- Child 1.1.2
    //         |-- Child
    // |-- Child 1.2

That’s it. A simple but powerful concept. Sure, adding an index on the path property will speed up the query significantly. There are plenty of possible improvements and optimizations, but the basic idea should be clear by now. Reference: Storing hierarchical data in MongoDB from our JCG partner Andrey Redko at the Andriy Redko {devmind} blog....

The Rise of the Front End Developers

In any web development company, there exist two different worlds; well, there are more, but we’ll just focus on two – the front end (designers) and the back end (developers). The front end guys are responsible for making something that is visible to the end users (THE LOOK). The back end guys are responsible for making the front end work (THE FUNCTIONALITY). Together, they deliver a complete web application/site. The back end developers would typically use programming languages such as Java/C++/Python. Apart from talking to the database and processing requests, they even have an arsenal of libraries to generate the site markup (JSPs, server side templates, etc). The front end guys usually fill in by writing HTML documents and CSS files (merely as writers) to present this markup in a visually pleasing way, and the back end just takes these templates and populates the data. The front end had only one option for doing any logical operations: JavaScript – which for a long time was used just to validate forms (and do some freaky stuff). Because of this cultural difference, there has always been an ego war between these two worlds. Even company management would rate the front end guys a par below the back end developers, because the front end guys don’t do any serious programming. All was going fine until the Web 2.0 era. Then the front end realized that they could use JavaScript to do much cooler stuff than just form validation. The development of high speed JavaScript engines (such as V8) made it possible to run complex JavaScript code right in the browser. With the introduction of technologies such as WebGL and Canvas, even graphics rendering became feasible using JavaScript. But this didn’t change anything on the server side; the server programs were still running on JVMs/Rubys/Pythons. Fast forward to today: the scenario is dramatically changing. JavaScript has just sneaked its way into the servers. 
Now, it is no longer required that a web application have a back end programming language such as Java/C++. Everything can be done using just JavaScript. Thanks to node.js, it is possible to run JavaScript on the server side. Using MongoDB, one can replace SQL code and store JSON documents through JavaScript MongoDB connectors. JavaScript template libraries such as {{Mustache}}/Underscore have almost removed the need for server side templates (JSPs). On the client side, JavaScript MVC frameworks such as Backbone.js enable us to write maintainable code. And there’s always the plain old JavaScript waiting for us to write some form validation script. With that, it is now possible to do the heavy lifting just by using JavaScript. The front end JavaScript programmers no longer need to focus on just the front end. They can use their skill set to develop web applications end-to-end. This rise of the front end developers poses a real threat to the survival of back end developers. If you are one of those back end guys, do you already realize this threat? What’s your game plan to stay fit and survive this challenge? Reference: The Rise of the Front End Developers from our JCG partner Veera Sundar at the Veera Sundar blog....

Playing around with pomodoros

Over the last 3/4 months I’ve been playing around with the idea of using pomodoros to track all the coding/software related stuff that I do outside of work. I originally started using this technique while doing the programming assignments for ml-class, because I wanted to know how much time I was spending on them each week and make sure I didn’t run down rabbit holes too often. One interesting observation from keeping the data on these pomodoros was that while the early programming assignments would take me 7 or 8 pomodoros to finish, by the end it was down to around 4. I think this was due to the difficulty of the assignments decreasing as time went on; I didn’t improve that dramatically! As I mentioned a few weeks ago, I’ve also been using pomodoros in combination with a yak stack to make sure I don’t go off track, and it’s been interesting applying the technique while trying to solve a problem I’m having with using the Jersey client on Android. It’s such a fiddly problem, and splitting my time into 25 minute slots has forced me to create a plan for what I’m going to try to do in that pomodoro, whether it be ruling out an approach or trying to understand the underlying code that isn’t working. I haven’t been successful in solving my problem, but I’m pretty sure that I’ve spent much less time trying to solve it than I would have otherwise. I can certainly imagine spending hours aimlessly trying things that have no chance of working. One thing I’ve been experimenting with is reducing the length of the pomodoro to 15 minutes when I know there’s something specific that I want to investigate and I’m fairly sure it won’t take a full length pomodoro. Previously I would end up just killing time for 10 minutes or just resetting the pomodoro because I didn’t have anything else to do. 
I generally enjoy coding much more when applying this time constraint, and I think the reason for that is explained by The Progress Principle, which I’m currently reading: “If people are in an excellent mood at the end of the day, it’s a good bet that they have made some progress in their work. If they are in a terrible mood, it’s a good bet that they have had a setback. To a great extent, inner work life rises and falls with progress and setbacks in the work. This is the progress principle.” Using a pomodoro seems to reduce the amount of time spent dealing with setbacks, and it creates frequent opportunities to discard the approach you’re taking if it’s clear that it’s not going anywhere. A disadvantage I’ve sometimes felt when working on the Jersey/Android problem is that I really don’t want to spend 25 minutes working on it, because I’ve been getting absolutely nowhere with it for about 6/7 pomodoros now. I’d rather delude myself that I’m going to magically fix it just by fiddling around with the code for an indeterminate period of time! In a way, constraining coding like this does take some of the fun out of it, because it’s now more structured, and you tend to have fun when you’re just randomly doing stuff and losing track of time. On the other hand, I probably end up doing a lot more of the stuff I want to do when I constrain it in this way! Decisions, decisions… Reference: Playing around with pomodoros from our JCG partner Mark Needham at the Mark Needham Blog....

Effective Unit Testing – Not All Code is Created Equal

Unit testing is one of the most widely adopted methodologies for producing high quality code. Its contribution to more stable, independent and well-documented code is well proven. Unit test code is considered, and handled as, an integral part of your repository, and as such requires development and maintenance. However, developers often encounter situations where the resources invested in unit tests were not as fruitful as one would expect. This leads us to wonder, as with any investment: where and how should resources be invested in unit tests?

The current metric used to assess the quality of unit testing is code coverage, which describes the extent to which the source code of a program has been exercised by tests. In an ideal world every method we write would have a series of tests covering its code and validating its correctness. In practice, usually due to time limitations, we either skip some tests or write poor quality ones. In such a reality, keeping in mind the amount of resources invested in unit test development and maintenance, one must ask: given the available time, which code deserves testing the most? And of the existing tests, which are actually worth keeping and maintaining? We will try to answer those questions today.

We believe that not all code is created equal. Certain code sections are harder to test than others; other code sections are more important than others. We suggest a few guidelines to help determine which code sections to invest in first, and to keep maintaining:

1. Usages of code – when code is used frequently, it is important to unit test it.
2. Code dependencies – similar to (1), the more other code depends on the examined code, the more important it is to unit test it. On the other hand, when the examined code itself depends heavily on other code, it is harder to test and the chance of catching a fault is smaller.
3. I/O dependency – code which depends on I/O (DB, networking, etc.) is harder to test, as it requires creating mock objects which simulate the behavior of the I/O components. These mock objects require development and maintenance and are vulnerable to bugs of their own. Moreover, writing mock objects that simulate the exact behavior of any given I/O component, including its faults, is not trivial at all.
4. Multithreaded code – multithreaded behavior is unpredictable and as such harder to test.
5. Cyclomatic complexity – this metric indicates the complexity of your source code. The higher the complexity, the more important it is to test the code.
6. Code accessibility – this measure relates to the number of people acquainted with the source code in question. The greater the accessibility, the less testing is needed, since problems will be identified and handled more rapidly.

Regarding the latter question presented above, we suggest a new approach for managing unit tests. This preliminary idea definitely needs some polish, and we only present a rough outline here. After taking all the above into account, the real burden is maintaining the tests. We suggest thinking of a single unit test as a stock: we keep track of each test, treating it as a dynamic object with an initial value that can change over time. According to the points above, we can give each test a preliminary value indicating its importance; note that most of the attributes above can be determined automatically. The change in value over time reflects our profit from the test. Each time a test fails and catches a real bug, its value increases; each time you invest in fixing the test itself without catching any real bug in your business logic, its value decreases; and each time you need to change the code of a test as a result of a change in your business logic, its value stays the same. The above model is not complete, as we only wanted to give a general idea of effective unit testing.
There remain the questions of how the value for each of our suggested attributes is computed, how the preliminary value for each test is then determined, and how much it should increase or decrease over time. These questions can be answered, for example, by using machine learning techniques, but that is out of the scope of this post. Reference: Effective Unit Testing – Not All Code is Created Equal from our JCG partners Nadav Azaria & Roi Gamliel at the DeveloperLife blog....
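To make the stock model above more concrete, here is a minimal Java sketch of it. This is only an illustration of the idea, not the authors' implementation; the initial weighting scheme and the adjustment amounts are arbitrary assumptions.

```java
// Illustrative sketch of the "unit test as a stock" model described above.
// The weights and adjustment amounts are made-up assumptions.
import java.util.HashMap;
import java.util.Map;

public class TestStock {
    private final Map<String, Double> values = new HashMap<>();

    // Preliminary value derived from the attributes listed above
    // (usages, dependents, cyclomatic complexity). Weights are arbitrary.
    public void register(String testName, int usages, int dependents, int complexity) {
        values.put(testName, usages * 1.0 + dependents * 2.0 + complexity * 0.5);
    }

    // A failing test that caught a real bug increases the test's value.
    public void caughtRealBug(String testName) {
        values.merge(testName, 5.0, Double::sum);
    }

    // Fixing the test itself without finding a real bug decreases its value.
    public void fixedTestOnly(String testName) {
        values.merge(testName, -3.0, Double::sum);
    }

    // Updating a test after a legitimate business-logic change leaves the
    // value unchanged, so that case needs no method at all.

    public double valueOf(String testName) {
        return values.getOrDefault(testName, 0.0);
    }

    public static void main(String[] args) {
        TestStock stock = new TestStock();
        stock.register("OrderServiceTest", 10, 4, 6); // 10 + 8 + 3 = 21
        stock.caughtRealBug("OrderServiceTest");      // 26
        stock.fixedTestOnly("OrderServiceTest");      // 23
        System.out.println(stock.valueOf("OrderServiceTest")); // prints 23.0
    }
}
```

Over time, tests whose value drifts toward zero are candidates for removal, while high-value tests justify their maintenance cost.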

Send A Relevant Dilbert Strip To Your Boss

The Dilbert comic strip is just 2 years younger than me, which means there are thousands of strips. Most of them mock a real-world situation, especially in big corporations and among geeks. I have always wondered whether managers read Dilbert. I bet they don’t, otherwise they’d say “oh, wait, this thing I’m doing was mocked by Scott Adams, and it is obviously idiotic”. Anyway, regular people who like Dilbert could sometimes adopt a Dilbert attitude and send a comic strip to their managers. The Dilbert website offers a pretty good search engine for that – just search for “deadline”, “raise” or “new colleague” and you’ll find a lot of good strips to send to your pointy-haired boss. Even if this isn’t going to change the habits of managers, let’s at least have some fun at their expense. So the next time your company hangs big posters with “Company values”, send this to everyone, cc management. And hope not to get fired. Reference: Send A Relevant Dilbert Strip To Your Boss from our JCG partner Bozhidar Bozhanov at the Bozho’s tech blog....

WADL in Java: Gentle introduction

WADL (Web Application Description Language) is to REST what WSDL is to SOAP. The mere existence of this language causes a lot of controversy (see: Do we need WADL? and To WADL or not to WADL). I can think of a few legitimate use cases for WADL, but if you are here already, you are probably not seeking yet another discussion, so let us move forward to WADL itself. In principle WADL is similar to WSDL, but the structure of the language is quite different. Whilst WSDL defines a flat list of messages and operations either consuming or producing some of them, WADL emphasizes the hierarchical nature of RESTful web services. In REST, the primary artifact is the resource. Each resource (noun) is represented by a URI. Every resource can define both CRUD operations (verbs, implemented as HTTP methods) and nested resources. A nested resource has a strong relationship with its parent resource, typically representing ownership. A simple example would be the http://example.com/api/books resource representing a list of books. You can (HTTP) GET this resource, meaning retrieve the whole list. You can also GET the http://example.com/api/books/7 resource, fetching the details of the 7th book inside the books resource. Or you can even PUT a new version or DELETE the resource altogether using the same URI. You are not limited to a single level of nesting: GETting http://example.com/api/books/7/reviews?page=2&size=10 will retrieve the second page (up to 10 items) of reviews of the 7th book. Obviously you can also place other resources next to books, like http://example.com/api/readers. The requirement arose to formally and precisely describe every available resource, method, request and response, just as the WSDL guys were able to do. WADL is one of the options for describing the “available URIs”, although some believe that a well-written REST service should be self-descriptive (see HATEOAS).
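To make the hierarchical URI structure above concrete, here is a tiny plain-Java sketch (not part of any WADL tooling) that matches a concrete path like /books/7/reviews against a template like /books/{bookId}/reviews and extracts the placeholder values, much as a REST framework does internally:

```java
// Illustration only: decomposing the hierarchical URIs discussed above
// into fixed segments and {placeholder} template segments.
import java.util.LinkedHashMap;
import java.util.Map;

public class UriTemplate {
    // Returns the extracted placeholder values, or null if the path
    // does not match the template.
    public static Map<String, String> match(String template, String path) {
        String[] t = template.split("/");
        String[] p = path.split("/");
        if (t.length != p.length) return null;
        Map<String, String> params = new LinkedHashMap<>();
        for (int i = 0; i < t.length; i++) {
            if (t[i].startsWith("{") && t[i].endsWith("}")) {
                // template segment: capture the concrete value
                params.put(t[i].substring(1, t[i].length() - 1), p[i]);
            } else if (!t[i].equals(p[i])) {
                return null; // fixed segment mismatch
            }
        }
        return params;
    }

    public static void main(String[] args) {
        System.out.println(match("/books/{bookId}/reviews", "/books/7/reviews"));
        System.out.println(match("/books/{bookId}", "/readers"));
    }
}
```

The first call yields {bookId=7}; the second yields null, since /readers is a sibling resource, not a book.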
Nevertheless, here is a simple, empty WADL document:

<application xmlns="http://wadl.dev.java.net/2009/02">
    <resources base="http://example.com/api"/>
</application>

Nothing fancy here. Note that the <resources> tag defines the base API address. All named resources, which we are just about to add, are relative to this address. You can also define several <resources> tags to describe more than one API. So, let’s add a simple resource:

<application xmlns="http://wadl.dev.java.net/2009/02">
    <resources base="http://example.com/api">
        <resource path="books">
            <method name="GET"/>
            <method name="POST"/>
        </resource>
    </resources>
</application>

This defines a resource under http://example.com/api/books with two possible methods: GET to retrieve the whole list and POST to create (add) a new item. Depending on your requirements you might want to allow the DELETE method as well (to delete all items), and it is the responsibility of WADL to document what is allowed. Remember our example from the beginning: /books/7? Obviously 7 is just an example and we won’t declare every possible book id in WADL. Instead there is a handy placeholder syntax:

<application xmlns="http://wadl.dev.java.net/2009/02">
    <resources base="http://example.com/api">
        <resource path="books">
            <method name="GET"/>
            <resource path="{bookId}">
                <param required="true" style="template" name="bookId"/>
                <method name="GET"/>
            </resource>
        </resource>
    </resources>
</application>

There are two important aspects to note here: first, the {bookId} placeholder was used in place of a nested resource. Secondly, to make it explicit, we document this placeholder using the <param/> tag. We will see soon how it can be used in combination with methods. Just to make sure you are still with me: the document above describes the GET /books and GET /books/some_id resources.
<application xmlns="http://wadl.dev.java.net/2009/02">
    <resources base="http://example.com/api">
        <resource path="books">
            <method name="GET"/>
            <resource path="{bookId}">
                <param required="true" style="template" name="bookId"/>
                <method name="GET"/>
                <method name="DELETE"/>
                <resource path="reviews">
                    <method name="GET">
                        <request>
                            <param name="page" required="false" default="1" style="query"/>
                            <param name="size" required="false" default="20" style="query"/>
                        </request>
                    </method>
                </resource>
            </resource>
        </resource>
        <resource path="readers">
            <method name="GET"/>
        </resource>
    </resources>
</application>

The web service is getting complex; however, it describes quite a lot of operations. First of all, GET /books/42/reviews is a valid operation. But the interesting part is the nested <request/> tag. As you can see, we can describe the parameters of each method independently. In our case, optional query parameters (as opposed to the template parameters used previously for URI placeholders) were defined. This gives the client additional knowledge about the acceptable page and size query parameters, which means that /books/7/reviews?page=2&size=10 is a valid resource identifier. And did I mention that every resource, method and parameter can have documentation attached, as per the WADL specification? We will stop here and only mention the remaining pieces of WADL. First of all, as you have probably guessed by now, a <response/> child tag is also possible for each <method/>. Both request and response can define an exact grammar (e.g. in XML Schema) that the request or the response must follow. The response can also document possible HTTP response codes. But since we will be using the knowledge you have gained so far in a code-first application, I intentionally left out the <grammars/> definition. WADL is agile and allows you to define as little (or as much) information as you need.
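The default values declared in the WADL above (page=1, size=20) tell a client what the server assumes when an optional query parameter is omitted. This small plain-Java sketch (purely illustrative, not generated from the WADL) shows how such defaults might be applied when parsing a query string:

```java
// Illustration only: applying the WADL-declared defaults (page=1, size=20)
// to the optional query parameters described above.
import java.util.HashMap;
import java.util.Map;

public class QueryDefaults {
    public static Map<String, String> parse(String query) {
        Map<String, String> params = new HashMap<>();
        params.put("page", "1");   // default declared in the WADL
        params.put("size", "20");  // default declared in the WADL
        if (query != null && !query.isEmpty()) {
            for (String pair : query.split("&")) {
                String[] kv = pair.split("=", 2);
                if (kv.length == 2) params.put(kv[0], kv[1]); // override default
            }
        }
        return params;
    }

    public static void main(String[] args) {
        System.out.println(parse("page=2&size=10")); // both defaults overridden
        System.out.println(parse(""));               // defaults apply
    }
}
```

So /books/7/reviews and /books/7/reviews?page=1&size=20 denote the same page of reviews.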
So we know the basics of WADL; now we would like to use it, maybe as a consumer or as a producer, in a Java-based application. Fortunately there is a wadl.xsd XML Schema description of the language itself, which we can use to generate JAXB-annotated POJOs to work with (using the xjc tool in the JDK):

$ wget http://www.w3.org/Submission/wadl/wadl.xsd
$ xjc wadl.xsd

And there it… hangs! The life of a software developer is full of challenges and non-trivial problems. And sometimes it is just an annoying network filter that makes suspicious packets (together with half an hour of your life) disappear. It is not hard to spot the problem, once you recall the article written around 2008, W3C’s Excessive DTD Traffic:

<xs:import namespace="http://www.w3.org/XML/1998/namespace" schemaLocation="http://www.w3.org/2001/xml.xsd"/>

Accessing xml.xsd from the browser returns an HTML page instantly, but the xjc tool waits forever. Downloading this file locally and correcting the schemaLocation attribute in wadl.xsd helped. It’s always the little things…

$ xjc wadl.xsd
parsing a schema...
compiling a schema...
net/java/dev/wadl/_2009/_02/Application.java
net/java/dev/wadl/_2009/_02/Doc.java
net/java/dev/wadl/_2009/_02/Grammars.java
net/java/dev/wadl/_2009/_02/HTTPMethods.java
net/java/dev/wadl/_2009/_02/Include.java
net/java/dev/wadl/_2009/_02/Link.java
net/java/dev/wadl/_2009/_02/Method.java
net/java/dev/wadl/_2009/_02/ObjectFactory.java
net/java/dev/wadl/_2009/_02/Option.java
net/java/dev/wadl/_2009/_02/Param.java
net/java/dev/wadl/_2009/_02/ParamStyle.java
net/java/dev/wadl/_2009/_02/Representation.java
net/java/dev/wadl/_2009/_02/Request.java
net/java/dev/wadl/_2009/_02/Resource.java
net/java/dev/wadl/_2009/_02/ResourceType.java
net/java/dev/wadl/_2009/_02/Resources.java
net/java/dev/wadl/_2009/_02/Response.java
net/java/dev/wadl/_2009/_02/package-info.java

Since we’ll be using these classes in a Maven-based project (and I hate committing generated classes to the source repository), let’s move the xjc execution into the Maven lifecycle:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>jaxb2-maven-plugin</artifactId>
    <version>1.3</version>
    <dependencies>
        <dependency>
            <groupId>net.java.dev.jaxb2-commons</groupId>
            <artifactId>jaxb-fluent-api</artifactId>
            <version>2.0.1</version>
            <exclusions>
                <exclusion>
                    <groupId>com.sun.xml</groupId>
                    <artifactId>jaxb-xjc</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
    </dependencies>
    <executions>
        <execution>
            <goals>
                <goal>xjc</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <arguments>-Xfluent-api</arguments>
        <bindingFiles>bindings.xjb</bindingFiles>
        <packageName>net.java.dev.wadl</packageName>
    </configuration>
</plugin>

Well, pom.xml isn’t the most concise format ever… Never mind, this will generate the WADL XML classes during every build, before the source code is compiled. I also love the fluent-api plugin, which adds with*() methods along with the ordinary setters, returning this to allow chaining. Pretty convenient.
Finally we define a more pleasant package name for the generated artifacts (if you find the net.java.dev.wadl._2009._02 package name pleasant enough, you can skip this step) and add a Wadl prefix to all generated classes in the bindings.xjb file:

<jxb:bindings version="1.0"
    xmlns:jxb="http://java.sun.com/xml/ns/jaxb"
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:xjc="http://java.sun.com/xml/ns/jaxb/xjc"
    jxb:extensionBindingPrefixes="xjc">
    <jxb:bindings schemaLocation="../xsd/wadl.xsd" node="/xs:schema">
        <jxb:schemaBindings>
            <jxb:nameXmlTransform>
                <jxb:typeName prefix="Wadl"/>
                <jxb:anonymousTypeName prefix="Wadl"/>
                <jxb:elementName prefix="Wadl"/>
            </jxb:nameXmlTransform>
        </jxb:schemaBindings>
    </jxb:bindings>
</jxb:bindings>

We are now ready to produce and consume WADL in XML format using JAXB and the POJO classes. Equipped with that knowledge and this foundation, we are ready to develop an interesting library – which will be the subject of the next article. Reference: Gentle introduction to WADL (in Java) from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog...

Arquillian with NetBeans, WebLogic 12c, JPA and a MySQL Datasource

You probably followed my posts about testing more complex scenarios with embedded GlassFish (Part I / Part II). Next on my list of things to do was to get this setup working with the latest WebLogic 12c.

Getting Started

Follow the steps in the getting started part of my first two posts. There are only a few things you have to change to get this working. Obviously you need WebLogic 12c. Grab a copy from the OTN download page. Read and accept the license and download either the ZIP installer or the full blown installer for your OS. Arun Gupta has a nice post about getting started with the ZIP installer; this is basically about downloading, extracting, configuring and creating your domain. Assume you have a domain1 in place. Make sure to copy mysql-connector-java-5.1.6-bin.jar to domain1/lib and fire up the server with startWebLogic.cmd/.sh in your domain1 root directory. Next you need to configure the appropriate connection pool. You could also do this with some WLST magic or with the new WebLogic Maven Plugin, but I assume you are doing it via the admin console. Go to Domain > Services > Data Sources and create a MySQL datasource AuditLog with the JNDI name “jdbc/auditlog”. Make sure the server is running while you execute your tests!

Modifying the sampleweb Project

Now open the sampleweb project’s pom.xml and remove the glassfish-embedded-all dependency together with arquillian-glassfish-embedded-3.1 and the javaee-api.
Now add the wls-remote-12.1 container and the jboss-javaee-6.0 dependencies:

<dependency>
    <groupId>org.jboss.arquillian.container</groupId>
    <artifactId>arquillian-wls-remote-12.1</artifactId>
    <version>1.0.0.Alpha2</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.jboss.spec</groupId>
    <artifactId>jboss-javaee-6.0</artifactId>
    <version>1.0.0.Final</version>
    <type>pom</type>
    <scope>provided</scope>
</dependency>

Now open your arquillian.xml descriptor and change the container settings to use the wls container:

<container qualifier="wls" default="true">
    <configuration>
        <property name="adminUrl">t3://localhost:7001</property>
        <property name="adminUserName">weblogic1</property>
        <property name="adminPassword">weblogic1</property>
        <property name="target">AdminServer</property>
        <property name="wlsHome">X:\path\to\wlserver\</property>
    </configuration>
</container>

Make sure to use the right target server and to point to the correct wlsHome. Right-click the AuditRepositoryServiceTest in NetBeans and run “Test File”. You will see the remote container doing some work:

22.01.2012 22:40:34 org.jboss.arquillian.container.wls.WebLogicDeployerClient deploy
INFO: Starting weblogic.Deployer to deploy the test artifact.
22.01.2012 22:40:46 org.jboss.arquillian.container.wls.WebLogicDeployerClient forkWebLogicDeployer
INFO: weblogic.Deployer appears to have terminated successfully.
22.01.2012 22:40:53 org.jboss.arquillian.container.wls.WebLogicDeployerClient undeploy
INFO: Starting weblogic.Deployer to undeploy the test artifact.
22.01.2012 22:41:00 org.jboss.arquillian.container.wls.WebLogicDeployerClient forkWebLogicDeployer
INFO: weblogic.Deployer appears to have terminated successfully.

And the test goes green! If you look at the domain log, you can see that the test.war module is successfully deployed and undeployed.

Remarks and Thoughts

Looking at what we have with WebLogic 12c (especially the new Maven plugin), this all seems very hand-crafted.
What would a WebLogic developer have done prior to that in a Maven-based project? He would have pushed the weblogic.jar to his local repository and used it instead of any jboss-javaee-6.0 or javaee-api dependency. If you try this with the Arquillian wls container, you start seeing some weird exceptions like the following:

Loading class: javax.transaction.SystemException
Exception in thread “main” java.lang.ClassFormatError: Absent Code attribute in method that is not native or abstract in class file javax/transaction/SystemException

This is basically because only the wlfullclient.jar contains all the classes needed for remote management via JMX. The magic weblogic.jar has some additional class-path entries in its manifest which cannot be resolved if you put it into your local m2 repository. So you have two options left: use the wlfullclient.jar (see how to build it in the docs) for testing and the weblogic.jar for your development, or stick to the jboss-javaee-6.0 dependency for both development and testing (scope provided). Both are valid alternatives. As you can see, the WebLogic container is still undocumented in the Arquillian documentation; you can find more detailed documentation by looking at the wls-container project on GitHub. Download the simpleweb-wls.zip project as a reference to get you started. Thanks to Vineet and Aslak for the help! Reference: Arquillian with NetBeans, WebLogic 12c, JPA and a MySQL Datasource from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog....
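The manifest mechanism behind this problem is easy to demonstrate with plain Java: a jar’s MANIFEST.MF may carry a Class-Path attribute whose entries are resolved relative to the jar’s own location, so copying the jar somewhere else (for instance into your local m2 repository) silently breaks those references. The sketch below just writes a minimal jar and reads the attribute back; the entry names are invented for illustration, not taken from the real weblogic.jar.

```java
// Demonstrates how a jar manifest references sibling jars via the
// Class-Path attribute - the mechanism that breaks when weblogic.jar
// is copied into a local Maven repository. Entry names are invented.
import java.io.File;
import java.io.FileOutputStream;
import java.util.jar.Attributes;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;
import java.util.jar.Manifest;

public class ManifestClassPath {
    public static String classPathOf(File jar) throws Exception {
        try (JarFile jf = new JarFile(jar)) {
            return jf.getManifest().getMainAttributes()
                     .getValue(Attributes.Name.CLASS_PATH);
        }
    }

    public static void main(String[] args) throws Exception {
        Manifest mf = new Manifest();
        mf.getMainAttributes().put(Attributes.Name.MANIFEST_VERSION, "1.0");
        // Relative entries: only resolvable next to the jar's original location.
        mf.getMainAttributes().put(Attributes.Name.CLASS_PATH,
                "../modules/api.jar ../modules/impl.jar");

        File jar = File.createTempFile("demo", ".jar");
        try (JarOutputStream out = new JarOutputStream(new FileOutputStream(jar), mf)) {
            // empty jar: the manifest is all we need for the demonstration
        }
        System.out.println(classPathOf(jar));
        jar.delete();
    }
}
```

Once such a jar is moved away from its siblings, the JVM can no longer resolve the relative entries, which is exactly why the repository-hosted weblogic.jar fails where wlfullclient.jar (self-contained) works.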
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
