

Best Of The Week – 2011 – W47

Hello guys, time for the “Best Of The Week” links for the week that just passed. Here are some links that drew Java Code Geeks' attention:

* Monitor and diagnose Java applications: An amazing collection of developerWorks articles on monitoring and diagnosing Java applications using both tools available in the JDK and third-party software. Also check out Monitoring OpenJDK from the CLI.

* Analysis Of Web API Versioning Options: An analysis of the various strategies for versioning Web APIs in the cloud. The fundamental principle is that you shouldn’t break existing clients, because you don’t know what they implement, and you don’t control them.

* Java 7 Working With Directories: DirectoryStream, Filter and PathMatcher: This tutorial presents some new functionality in Java 7 regarding directories. With DirectoryStream, Filter and PathMatcher, developers are now able to easily perform complex directory-related operations. Also check out Manipulating Files in Java 7 and Java 7 Feature Overview.

* Why you have less than a second to deliver exceptional performance: This article discusses the performance of web sites (in terms of response times), highlights the importance of exceptional performance and explains why it is quite difficult to achieve.

* Why do so many technical recruiters suck?: A real-life story indicating why today’s technical recruiters are probably not the way to find good developers.

* Setting Up Measurement of Garbage Collection in Apache Tomcat: In this article, Garbage Collection measurement is set up for Apache Tomcat. First, some performance tuning basics are discussed, and then the author shows how to measure GC performance in Tomcat (and any JVM for that matter). Also check out Change Without Redeploying With Eclipse And Tomcat and Zero-downtime Deployment (and Rollback) in Tomcat.

* Continuously Improving as a Developer: Personally, one of the best articles on the subject of becoming a better developer. Includes not only the typical advice (read code, read more books, write code, use social media etc.), but also more subtle things like working out to increase productivity and leveraging the Pareto rule (the 80/20 principle) when learning a new technology.

* A really simple but powerful rule engine: A great tutorial on how to create a custom, simple and lightweight rule engine. First, some rule engine use cases are presented, and then the implementation of the engine follows based on some general requirements.

* Is Implementing Continuous Delivery the Key to Success?: In this article, the author discusses Continuous Delivery and how it can affect product delivery and the overall software architecture. The goal is to create an efficient delivery mechanism that will allow developers to collect useful feedback in a structured and continual way.

* Android SDK: Using the Text to Speech Engine: This tutorial shows you how to use the embedded text-to-speech (TTS) engine that is provided with the Android SDK. Also check out Android Text-To-Speech Application.

* Using Gossip Protocols for Failure Detection, Monitoring, Messaging and Other Good Things: This article provides a soft introduction to gossip protocols, which can offer a decentralized way to manage large clusters. Gossip protocols, which maintain relaxed consistency requirements amongst a very large group of nodes, may be used for failure detection, monitoring, as a form of messaging etc.

That’s all for this week. Stay tuned for more, here at Java Code Geeks. Cheers, Ilias

Related Articles: Best Of The Week – 2011 – W46, Best Of The Week – 2011 – W45, Best Of The Week – 2011 – W44, Best Of The Week – 2011 – W43, Best Of The Week – 2011 – W42, Best Of The Week – 2011 – W41, Best Of The Week – 2011 – W40, Best Of The Week – 2011 – W39, Best Of The Week – 2011 – W38, Best Of The Week – 2011 – W37...

What Should you Unit Test? – Testing Techniques 3

I was in the office yesterday, talking about testing with one of my colleagues who was a little unconvinced by writing unit tests. One of his reasons was that some tests seem meaningless, which brings me on to the subject of what exactly you should unit test, and what you don't need to bother with. Consider the simple immutable Name bean below with a constructor and a bunch of getters. In this example I'm going to let the code speak for itself, as I hope it's obvious that any testing would be pointless.

public class Name {

    private final String firstName;
    private final String middleName;
    private final String surname;

    public Name(String christianName, String middleName, String surname) {
        this.firstName = christianName;
        this.middleName = middleName;
        this.surname = surname;
    }

    public String getFirstName() {
        return firstName;
    }

    public String getMiddleName() {
        return middleName;
    }

    public String getSurname() {
        return surname;
    }
}

…and just to underline the point, here is the pointless test code:

public class NameTest {

    private Name instance;

    @Before
    public void setUp() {
        instance = new Name("John", "Stephen", "Smith");
    }

    @Test
    public void testGetFirstName() {
        String result = instance.getFirstName();
        assertEquals("John", result);
    }

    @Test
    public void testGetMiddleName() {
        String result = instance.getMiddleName();
        assertEquals("Stephen", result);
    }

    @Test
    public void testGetSurname() {
        String result = instance.getSurname();
        assertEquals("Smith", result);
    }
}

The reason it's pointless testing this class is that the code doesn't contain any logic; however, the moment you add something like this to the Name class:

public String getFullName() {
    if (isValidString(firstName) && isValidString(middleName) && isValidString(surname)) {
        return firstName + " " + middleName + " " + surname;
    } else {
        throw new RuntimeException("Invalid Name Values");
    }
}

private boolean isValidString(String str) {
    return isNotNull(str) && str.length() > 0;
}

private boolean isNotNull(Object obj) {
    return obj != null;
}

…then the whole situation changes. Adding some logic in the form of an if statement generates a whole bunch of tests:

@Test
public void testGetFullName_with_valid_input() {
    instance = new Name("John", "Stephen", "Smith");
    final String expected = "John Stephen Smith";
    String result = instance.getFullName();
    assertEquals(expected, result);
}

@Test(expected = RuntimeException.class)
public void testGetFullName_with_null_firstName() {
    instance = new Name(null, "Stephen", "Smith");
    instance.getFullName();
}

@Test(expected = RuntimeException.class)
public void testGetFullName_with_null_middleName() {
    instance = new Name("John", null, "Smith");
    instance.getFullName();
}

@Test(expected = RuntimeException.class)
public void testGetFullName_with_null_surname() {
    instance = new Name("John", "Stephen", null);
    instance.getFullName();
}

@Test(expected = RuntimeException.class)
public void testGetFullName_with_no_firstName() {
    instance = new Name("", "Stephen", "Smith");
    instance.getFullName();
}

@Test(expected = RuntimeException.class)
public void testGetFullName_with_no_middleName() {
    instance = new Name("John", "", "Smith");
    instance.getFullName();
}

@Test(expected = RuntimeException.class)
public void testGetFullName_with_no_surname() {
    instance = new Name("John", "Stephen", "");
    instance.getFullName();
}

So, given that you shouldn't need to test objects that contain no logic statements (and in a list of logic statements I'd include if and switch, together with all the operators (+, -, *, /) and a whole bundle of other things that can change an object's state), I'd suggest that it's pointless writing a unit test for the address data access object (DAO) in the Address project I've been talking about in my last couple of blogs.
The DAO is defined by the AddressDao interface and implemented by the JdbcAddress class:

public class JdbcAddress extends JdbcDaoSupport implements AddressDao {

    /**
     * This is an instance of the query object that'll sort out the results of
     * the SQL and produce whatever value objects are required.
     */
    private MyQueryClass query;

    /** This is the SQL with which to run this DAO. */
    private static final String sql = "select * from addresses where id = ?";

    /**
     * A class that does the mapping of row data into a value object.
     */
    class MyQueryClass extends MappingSqlQuery<Address> {

        public MyQueryClass(DataSource dataSource, String sql) {
            super(dataSource, sql);
            this.declareParameter(new SqlParameter(Types.INTEGER));
        }

        /**
         * This is the implementation of the MappingSqlQuery abstract method. This
         * method creates and returns an instance of our value object associated
         * with the table / select statement.
         *
         * @param rs
         *            This is the current ResultSet
         * @param rowNum
         *            The rowNum
         * @throws SQLException
         *             This is taken care of by the Spring stuff...
         */
        @Override
        protected Address mapRow(ResultSet rs, int rowNum) throws SQLException {
            return new Address(rs.getInt("id"),
                    rs.getString("street"),
                    rs.getString("town"),
                    rs.getString("post_code"),
                    rs.getString("country"));
        }
    }

    /**
     * Override the JdbcDaoSupport method of this name, calling the super class
     * so that things get set up correctly, and then create the inner query
     * class.
     */
    @Override
    protected void initDao() throws Exception {
        super.initDao();
        query = new MyQueryClass(getDataSource(), sql);
    }

    /**
     * Return an address object based upon its id.
     */
    @Override
    public Address findAddress(int id) {
        return query.findObject(id);
    }
}

In the code above, the only method in the interface is:

@Override
public Address findAddress(int id) {
    return query.findObject(id);
}

…which is really a simple getter method.
This seems okay to me, as there really should not be any business logic in a DAO; that belongs in the AddressService, which should have a plentiful supply of unit tests. You may want to make a decision on whether or not to write unit tests for the MyQueryClass. To me this is a borderline case, so I look forward to any comments… I'm guessing that someone will disagree with this approach and say you should test the JdbcAddress object, and that's true: I'd personally write an integration test for it to make sure that the database I'm using is okay, that it understands my SQL and that the two entities (DAO and database) can talk to each other, but I won't bother unit testing it. To conclude, unit tests must be meaningful, and a good definition of 'meaningful' is that the object under test must contain some independent logic. Reference: What Should you Unit Test? – Testing Techniques 3 from our JCG partner Roger Hughes at the Captain Debug blog. Related Articles: Testing Techniques – Not Writing Tests, The Misuse of End To End Tests – Testing Techniques 2, Regular Unit Tests and Stubs – Testing Techniques 4, Unit Testing Using Mocks – Testing Techniques 5, Creating Stubs for Legacy Code – Testing Techniques 6, More on Creating Stubs for Legacy Code – Testing Techniques 7, Why You Should Write Unit Tests – Testing Techniques 8, Some Definitions – Testing Techniques 9, Using FindBugs to produce substantially less buggy code, Developing and Testing in the Cloud...

How to Fail With Drools or Any Other Tool/Framework/Library

What I like most at conferences are reports of someone's failure to do or implement something, for they're the best sources of learning. And How to Fail with Drools (in Norwegian) by C. Dannevig of Know IT at JavaZone 2011 is one of them. I'd like to summarize what they learned and extend it to the introduction of a tool, framework, or library in general, based on my own painful experiences. They decided to switch to the Drools rule management system (a.k.a. JBoss Rules) v.4 from their homegrown rules implementation to centralize all the rules code in one place, to get something simpler and easier to understand, and to improve the time to market by not requiring a redeploy when a rule is added. However, Drools turned out to be more of a burden than a help, for the following reasons:

* Too little time and resources were provided for learning Drools, which has a rather steep learning curve due to being based on declarative programming and rule matching (some background), which is quite alien to normal imperative/OO programmers.
* Drools' poor support for development and operations: an IDE only for Eclipse, difficult debugging, no stack trace upon failure.
* Their domain model was not well aligned with Drools and required a lot of effort to make it usable by the rules.
* The users were used to and satisfied with the current system and wanted to keep the parts facing them, such as the rules management UI, instead of Drools' own UI, thus decreasing the value of the software (while increasing the overall complexity, we could add).

In the end they removed Drools and refactored their code to get all the rules into one place, using only plain old Java, which works pretty well for them.
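Their plain-Java end state doesn't need much machinery. As an illustration of the general idea only (this is not code from the talk; all names are hypothetical, and it uses post-2011 Java features for brevity), a homegrown rule engine can be little more than a list of condition/action pairs kept in one place:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Minimal plain-Java rule engine: a rule is just a condition plus an action.
public class RuleEngine<T> {

    private static final class Rule<T> {
        final Predicate<T> condition;
        final Consumer<T> action;

        Rule(Predicate<T> condition, Consumer<T> action) {
            this.condition = condition;
            this.action = action;
        }
    }

    private final List<Rule<T>> rules = new ArrayList<>();

    public void addRule(Predicate<T> condition, Consumer<T> action) {
        rules.add(new Rule<>(condition, action));
    }

    // Apply every matching rule to the fact, in registration order.
    public void run(T fact) {
        for (Rule<T> rule : rules) {
            if (rule.condition.test(fact)) {
                rule.action.accept(fact);
            }
        }
    }
}
```

Because a rule here is ordinary code, all the rules still live in one place and adding one is a one-line change; what you give up compared with Drools is its matching algorithm and externally editable rule files, which is exactly the trade-off the talk describes.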
Lessons Learned from Introducing Tools, Frameworks, and Libraries

While the Know IT team encountered some issues specific to Drools, their experience has a lot in common with many other cases where a tool, a framework, or a library is introduced to solve some tasks and problems but turns out to be more of a problem itself. What can we learn from these failures to deliver the expected benefits for the expected cost? (Actually, such initiatives will often be labeled a success even though the benefits are smaller and the cost (often considerably) larger than planned.) Always think twice (or three or four times) before introducing a [heavyweight] tool or framework, especially if it requires a new and radically different way of thinking or working. Couldn't you solve it in a simpler way with plain old Java/Groovy/WhateverYouGot? Using an out-of-the-box solution sounds very *easy* (especially at sales meetings) but it is in fact usually pretty *complex*. And as Rich Hickey recently so well explained in his talk, we should strive to minimize complexity instead of prioritizing relative and misleading easiness (in the sense of "easy to approach, to understand, to use"). I'm certain that many of us have experienced how an "I'll do it all for you, be happy and relax" tool turns into a major obstacle and source of pain; at least I have experienced that with WebSphere ESB 6.0. (It required heavy tooling that only a few mastered, was in reality a version 1.0, and a lot of the promised functionality had to be implemented manually anyway, etc.) We should never forget that introducing a new library, framework or tool has its cost, which we usually tend to underestimate. The cost has multiple dimensions:

* Complexity – complexity is the single worst thing in IT projects; are you sure that increasing it will pay off? Complexity of infrastructure, of internal structure, …
* Competence – the learning curve (which proved to be pretty high for Drools), how many people know it, and the availability of experts who can help in case of trouble
* Development – does the tool somehow hinder development, testing or debugging, e.g. by making them slower or more difficult, or by requiring special tooling (especially if it isn't available)? (Think of J2EE vs. Spring)
* Operations – what's the impact on the observability of the application in production (high for Drools if it doesn't provide stack traces for failures), on troubleshooting, performance, the deployment process, …?
* Defects and limitations – every tool has them, even though seemingly mature (they were already on version 4 of Drools); you usually run into limitations quite late, it's difficult if not impossible to discover them up front, and it's hard to estimate how flexible the authors have made it (it's especially bad if the solution is closed-source)
* Longevity – will the tool be around in 1, 5, 10 years? What about backwards compatibility and support for migration to higher versions? (The company I worked for decided to stop supporting WebSphere ESB in its infrastructure after one year and we had to migrate away from it – what wasted resources!)
* Dependencies – what dependencies does it have, and don't they conflict with something else in the application or its environment? How will that look in 10 years?

And I'm sure I missed some dimensions. So be aware that the actual cost of using something is likely a few times higher than your initial estimate. Another lesson is that support for development is a key characteristic of any tool, framework, or library. Any slowdown it introduces must be multiplied at least by 10^6, because all those slowdowns, spread over the team and the lifetime of the project, will add up a lot.
I have experienced that too many times: a framework that required a redeploy after every other change, an application which required us to manually go through a wizard to reach the page we wanted to test, slow execution of tests by an IDE. The last thing to bear in mind is that you should be aware of whether a tool and the design behind it are well aligned with your business domain and processes (including the development process itself). If there are mismatches, you will need to pay for them; just think about OOP versus RDBMS (don't you know somebody who starts to shudder upon hearing "ORM"?).

Conclusion

Be aware that everything has its cost, make sure to account for it, and beware of our tendency to be overly optimistic when estimating both benefits and costs (perhaps hire a seasoned pessimist or appoint a devil's advocate). Always consider first using the tools you already have, however boring that might sound. I don't mean that we should never introduce new stuff; I just want to make you more cautious about it. I've recently followed a few discussions on how "enterprise" applications get unnecessarily, and to their own harm, bloated with libraries and frameworks, and I agree that we should be more careful and try to keep things simple. The tool cost dimensions above may hopefully help you to expose the less obvious constituents of the cost of new tools. Reference: How to Fail With Drools or Any Other Tool/Framework/Library from our JCG partner Jakub Holy at the "Holy Java" Blog. Related Articles: Are frameworks making developers dumb?, Open Source Java Libraries and Frameworks – Benefits and Dangers, Those evil frameworks and their complexity, Java Tutorials and Android Tutorials list...

The Misuse of End To End Tests – Testing Techniques 2

My last blog was the first in a series of blogs on approaches to testing code, outlining a simple scenario of retrieving an address from a database using a very common pattern:

…and describing a very common testing technique: not writing tests and doing everything manually. Today's blog covers another practice which I also feel is sub-optimal. In this scenario, the developers use JUnit to write tests, but write them after they have completed writing the code and without any class isolation. This is really an 'End to End' (aka integration) test posing as a unit test. Although yesterday I said that I'm only testing the AddressService class, a test using this technique starts by loading the database with some test data and then grabbing hold of the AddressController to call the method under test. The AddressController calls the AddressService, which then calls the AddressDao to obtain and return the requested data.

@RunWith(UnitilsJUnit4TestClassRunner.class)
@SpringApplicationContext("servlet-context.xml")
@Transactional(TransactionMode.DISABLED)
public class EndToEndAddressServiceTest {

    @SpringBeanByType
    private AddressController instance;

    /**
     * Test method for
     * {@link com.captaindebug.address.AddressService#findAddress(int)}.
     */
    @Test
    public void testFindAddressWithNoAddress() {
        final int id = 10;
        BindingAwareModelMap model = new BindingAwareModelMap();

        String result = instance.findAddress(id, model);
        assertEquals("address-display", result);

        Address resultAddress = (Address) model.get("address");
        assertEquals(Address.INVALID_ADDRESS, resultAddress);
    }

    /**
     * Test method for
     * {@link com.captaindebug.address.AddressService#findAddress(int)}.
     */
    @Test
    @DataSet("FindAddress.xml")
    public void testFindAddress() {
        final int id = 1;
        Address expected = new Address(id, "15 My Street", "My Town", "POSTCODE", "My Country");

        BindingAwareModelMap model = new BindingAwareModelMap();

        String result = instance.findAddress(id, model);
        assertEquals("address-display", result);

        Address resultAddress = (Address) model.get("address");
        assertEquals(expected.getId(), resultAddress.getId());
        assertEquals(expected.getStreet(), resultAddress.getStreet());
        assertEquals(expected.getTown(), resultAddress.getTown());
        assertEquals(expected.getPostCode(), resultAddress.getPostCode());
        assertEquals(expected.getCountry(), resultAddress.getCountry());
    }
}

The code above uses Unitils both to load the test data into a database and to load the classes in our Spring context. I find Unitils a useful tool that takes the hard work out of writing tests like this, and setting up such a large-scale test is hard work. This kind of test has to be written after the code has been completed; it's NOT Test Driven Development (of which, from previous blogs, you'll gather I'm a big fan), and it's not a unit test. One of the problems with writing a test after the code is that developers who have to do it see it as a chore rather than as part of development, which means that it's often rushed and not done in the neatest of coding styles. You'll also need a certain amount of infrastructure to code using this technique, as a database needs setting up, which may or may not be on your local machine, and consequently you may have to be connected to a network to run the test. Test data is either held in test files (as in this case), which are loaded into the database when the test runs, or held permanently in the database. If a requirement change forces a change in the test, then the database files will usually need updating together with the test code, which forces you to update the test in at least two places.
Another big problem with this kind of test, apart from the lack of test-subject isolation, is the fact that they can be very slow, sometimes taking seconds to execute. Shane Warden, in his book 'The Art of Agile Development', states that unit tests should run at a rate of "hundreds per second". Warden also goes on to cite Michael Feathers' book Working Effectively with Legacy Code for a good definition of what a unit test is, or is not:

A test is not a unit test if:
* It talks to a database.
* It communicates across a network.
* It touches the file system.
* You have to do special things to your environment (such as editing configuration files) to run it.

…now I like that, although I don't necessarily agree with point three. One of the main tenets of good unit test code is readability. Method arguments that are passed to objects under test are sometimes large, especially when they're XML. In this case I think that it's more pragmatic to favour test readability and store data of this size in a data file rather than having it as a private static final String, so I only adhere to point three wherever practical. Unit tests can be summed up using the FIRST acronym: Fast, Independent, Repeatable, Self-Validating and Timely, whilst Roy Osherove in his book The Art Of Unit Testing sums up a good unit test as: "an automated piece of code that invokes the method or class being tested and then checks some assumptions about the logical behaviour of that method or class. A unit test is almost always written using a unit testing framework. It can be written easily and runs quickly. It's fully automated, trustworthy, readable and maintainable". The benefit of an 'End to End' test is that it does test your test subject in collaboration with other objects and its surroundings, something that you really must do before shipping your code. This means that when complete, your code should comprise hundreds of unit tests, but only tens of 'End to End' tests.
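For contrast with the end-to-end test above, a test that satisfies Feathers' criteria replaces the real DAO with a stub. The sketch below uses simplified, hypothetical versions of the article's AddressService and AddressDao (the real classes have more fields and Spring wiring), and the stubbed "not found" behaviour is an assumption for illustration; a later post in this series covers stubs properly.

```java
// Hypothetical, simplified shapes standing in for the article's classes.
interface AddressDao {
    Address findAddress(int id);
}

class Address {
    private final int id;
    private final String street;

    Address(int id, String street) {
        this.id = id;
        this.street = street;
    }

    int getId() { return id; }
    String getStreet() { return street; }
}

class AddressService {
    // The service owns the logic: translate "not found" into INVALID_ADDRESS.
    static final Address INVALID_ADDRESS = new Address(-1, "unknown");

    private final AddressDao dao;

    AddressService(AddressDao dao) {
        this.dao = dao;
    }

    Address findAddress(int id) {
        Address address = dao.findAddress(id);
        return address == null ? INVALID_ADDRESS : address;
    }
}

public class AddressServiceStubTest {
    public static void main(String[] args) {
        // Stub DAO: no database, no network, no file system, no configuration,
        // so the test runs in milliseconds and in complete isolation.
        AddressDao stubDao = id -> new Address(id, "15 My Street");
        AddressService service = new AddressService(stubDao);

        if (service.findAddress(1).getId() != 1) {
            throw new AssertionError("wrong id");
        }
        if (new AddressService(id -> null).findAddress(10) != AddressService.INVALID_ADDRESS) {
            throw new AssertionError("missing address should map to INVALID_ADDRESS");
        }
        System.out.println("tests passed");
    }
}
```

In a real project these checks would of course be JUnit @Test methods with assertEquals, exactly like the article's other listings; the point is only that the collaborator behind the service is swapped out, so the test exercises the service's logic and nothing else.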
Given this, my introductory premise, when I said that this technique is 'sub-optimal', is not strictly true; there's nothing wrong with 'End to End' tests, and every project should have some, together with some ordinary integration tests, but these kinds of tests shouldn't replace, or be called, unit tests, which is often the case. Having defined what a unit test is, my next blog investigates what you should test and why… Reference: The Misuse of End To End Tests – Testing Techniques 2 from our JCG partner Roger Hughes at the Captain Debug blog. Related Articles: Testing Techniques – Not Writing Tests, What Should you Unit Test? – Testing Techniques 3, Regular Unit Tests and Stubs – Testing Techniques 4, Unit Testing Using Mocks – Testing Techniques 5, Creating Stubs for Legacy Code – Testing Techniques 6, More on Creating Stubs for Legacy Code – Testing Techniques 7, Why You Should Write Unit Tests – Testing Techniques 8, Some Definitions – Testing Techniques 9, Using FindBugs to produce substantially less buggy code, Developing and Testing in the Cloud...

Change Without Redeploying With Eclipse And Tomcat

They say developing in Java is slow because of the bloated application servers: you have to redeploy the application to see your changes, while scripting languages like PHP and Python allow you to "save & refresh". This quora question summarizes that "myth". Yup, it's a myth. You can use "save & refresh" in Java web applications as well. The JVM has so-called HotSwap: replacing classes at runtime. So you just have to start the server in debug mode (the HotSwap feature is available in debug mode) and copy the class files. With Eclipse that can be done in (at least) two ways:

* WTP – configure the "Deployment Assembly" to send compiled classes to WEB-INF/classes
* FileSync plugin for Eclipse – configure it to send your compiled classes to an absolute path (where your Tomcat lives)

I've made a more extensive description of how to use them in this stackoverflow answer. Now, of course, there's a catch. You can't swap structural changes. If you add a new class or a new method, change the method arguments, add fields, or add annotations, these can't be swapped at runtime. But "save & refresh" usually involves simply changing a line within a method. Structural changes are rarer, and in some cases mean the whole application has to be re-initialized anyway. You can't hotswap configuration either: your application is usually configured in some (.xml) file, so if you change it, you'd have to redeploy. But that, again, seems quite an ordinary scenario; your app can't just reload its bootstrapping configuration while running. Even more common is the case of html & css changes. You just can't live without "save & refresh" there. But that works perfectly fine: JSPs are refreshed by the servlet container (unless you are in production mode), and each view technology has an option for picking up template files dynamically. And that has nothing to do with the JVM. So you can develop web applications with Java almost as quickly as with any scripting language.
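To make the body-change vs. structural-change boundary concrete, consider this purely illustrative class (the names and numbers are made up for the example):

```java
public class PriceCalculator {

    // A body-only edit -- say, changing the rate from 0.20 to 0.25 -- keeps the
    // class structure identical, so HotSwap can replace the class in a running
    // debug-mode JVM with no redeploy: just save and refresh.
    public double vatFor(double netAmount) {
        return netAmount * 0.20;
    }

    // Structural edits, by contrast, are rejected by plain HotSwap and need a
    // redeploy (or a tool like JRebel). Edits that would NOT swap include:
    //   - adding a method:      public double grossFor(double netAmount) { ... }
    //   - adding a field:       private double rate = 0.20;
    //   - changing a signature: public double vatFor(double netAmount, String country)
}
```

The rule of thumb: if the edit stays inside an existing method body, save & refresh works; if it changes the class's shape, it doesn't.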
Finally, I must mention one product with the slogan "Stop redeploying in Java": JRebel. They have created a very good product that is an improved HotSwap: it can swap structural changes as well, and it has support for many frameworks. The feature list looks really good. While it's a great product, I wouldn't say it's a must; you can be pretty productive without it. But be it HotSwap or JRebel, you must make sure you don't redeploy to see your changes. That is a real productivity killer. Reference: Change Without Redeploying With Eclipse And Tomcat from our JCG partner Bozho at Bozho's tech blog. Related Articles: Eclipse Shortcuts for Increased Productivity, Eclipse: How attach Java source, Eclipse Memory Analyzer (MAT), Multiple Tomcat Instances on Single Machine, Zero-downtime Deployment (and Rollback) in Tomcat; a walkthrough and a checklist, Java Tutorials and Android Tutorials list...

Java 7 Feature Overview

We discussed previously everything that didn't make it into Java 7 and then reviewed the useful Fork/Join Framework that did make it in. Today's post will take us through each of the Project Coin features – a collection of small language enhancements that aren't groundbreaking, but are nonetheless useful for any developer able to use JDK 7. I've come up with a bank account class that showcases the basics of the Project Coin features. Take a look…

public class ProjectCoinBanker {

    private static final Integer ONE_MILLION = 1_000_000;
    private static final String RICH_MSG = "You need more than $%,d to be considered rich.";

    public static void main(String[] args) throws Exception {
        System.out.println(String.format(RICH_MSG, ONE_MILLION));

        String requestType = args[0];
        String accountId = args[1];
        switch (requestType) {
            case "displayBalance":
                printBalance(accountId);
                break;
            case "lastActivityDate":
                printLastActivityDate(accountId);
                break;
            case "amIRich":
                amIRich(accountId);
                break;
            case "lastTransactions":
                printLastTransactions(accountId, Integer.parseInt(args[2]));
                break;
            case "averageDailyBalance":
                printAverageDailyBalance(accountId);
                break;
            default:
                break;
        }
    }

    private static void printAverageDailyBalance(String accountId) {
        String sql = String.format(AVERAGE_DAILY_BALANCE_QUERY, accountId);
        try (
            PreparedStatement s = _conn.prepareStatement(sql);
            ResultSet rs = s.executeQuery();
        ) {
            while (rs.next()) {
                // print the average daily balance results
            }
        } catch (SQLException e) {
            // handle exception, but no need for finally to close resources
            for (Throwable t : e.getSuppressed()) {
                System.out.println("Suppressed exception message is " + t.getMessage());
            }
        }
    }

    private static void printLastTransactions(String accountId, int numberOfTransactions) {
        List<Transaction> transactions = new ArrayList<>();
        // ... handle fetching/printing transactions
    }

    private static void printBalance(String accountId) {
        try {
            BigDecimal balance = getBalance(accountId);
            // print balance
        } catch (AccountFrozenException | ComplianceViolationException | AccountClosedException e) {
            System.err.println("Please see your local branch for help with your account.");
        }
    }

    private static void amIRich(String accountId) {
        try {
            BigDecimal balance = getBalance(accountId);
            // find out if the account holder is rich
        } catch (AccountFrozenException | ComplianceViolationException | AccountClosedException e) {
            System.out.println("Please see your local branch for help with your account.");
        }
    }

    private static BigDecimal getBalance(String accountId)
            throws AccountFrozenException, AccountClosedException, ComplianceViolationException {
        // ... getBalance functionality
    }
}

Briefly, our ProjectCoinBanker class demonstrates basic usage of the following Project Coin features:

* Underscores in numeric literals
* Strings in switch
* Multi-catch
* Type inference for typed object creation (the diamond operator)
* try-with-resources and suppressed exceptions

First of all, underscores in numeric literals are pretty self-explanatory. Our example,

private static final Integer ONE_MILLION = 1_000_000;

shows that the benefit is visual. Developers can quickly look at the code to verify that values are as expected. Underscores can be used in places other than natural grouping locations, being ignored wherever they are placed. Underscores cannot begin or terminate a numeric literal; otherwise, you're free to place them where you'd like. While not demonstrated here, binary literal support has also been added. In the same way that hex literals are prefixed by 0x or 0X, binary literals are prefixed by 0b or 0B. Strings in switch are also self-explanatory. The switch statement now also accepts String. In our example, we switch on the String argument passed to the main method to determine what request was made.
On a side note, this is purely a compiler implementation, with an indication that JVM support for switching on String may be added at a later date. Type inference is another easy-to-understand improvement. Now instead of our old code

List<Transaction> transactions = new ArrayList<Transaction>();

we can simply do

List<Transaction> transactions = new ArrayList<>();

since the type can be inferred. You probably won't find anyone who would argue that it shouldn't have been this way since the introduction of generics, but it's here now. Multi-catch will turn out to be very nice for the conciseness of exception handling code. Too many times, when wanting to actually do something based on the exception type thrown, we were until now forced to have multiple catch blocks all doing essentially the same thing. The new syntax is very clean and logical. Our example,

catch (AccountFrozenException | ComplianceViolationException | AccountClosedException e)

shows how easily it can be done. Finally, the last demonstration of a Project Coin feature is the try-with-resources syntax and support for retrieving suppressed exceptions. A new interface has been introduced, AutoCloseable, which has been applied to all the expected suspects, including Input/OutputStreams, Readers/Writers, Channels, Sockets, Selectors and the java.sql resources Statement, ResultSet and Connection. In my opinion, the syntax is not as natural as the multi-catch change, not that I have an alternative in mind.

try (
    PreparedStatement s = _conn.prepareStatement(sql);
    ResultSet rs = s.executeQuery();
) {
    while (rs.next()) {
        // print the average daily balance results
    }
} catch (SQLException e) {
    // handle exception, but no need for finally to close resources
    for (Throwable t : e.getSuppressed()) {
        System.out.println("Suppressed exception message is " + t.getMessage());
    }
}

First we see that we can include multiple resources in try-with-resources – very nice.
We can even reference previously declared resources in the same block, as we did with our PreparedStatement. We still handle our exception, but we don't need a finally block just to close the resources. Notice too that there is a new method, getSuppressed(), on Throwable. This gives us access to any exceptions that were thrown while trying to "auto-close" the declared resources. There can be at most one suppressed exception per declared resource. Note: if the resource initialization throws an exception, it is handled in your declared catch block. That's it. Nothing earth-shattering, but some simple enhancements that we can all begin using without too much trouble. Project Coin also includes a feature regarding varargs and compiler warnings. Essentially, it boils down to a new annotation, @SafeVarargs, that can be applied at the method declaration to allow developers to remove @SuppressWarnings("varargs") from their consuming code. This has been applied to all the key suspects within the JDK, and the same annotation is available to you for any of your generic varargs methods. The Project Coin feature set as described online is inconsistent at best; hopefully this gives you a solid summary of what you can use from the proposal in JDK 7. Reference: Java 7 – Project Coin Feature Overview from our JCG partners at the Carfey Software Blog. Related Articles : Java 7: try-with-resources explained GC with Automatic Resource Management in Java 7 A glimpse at Java 7 MethodHandle and its usage Manipulating Files in Java 7 Java SE 7, 8, 9 – Moving Java Forward Java Tutorials and Android Tutorials list...

Devoxx Day 1

Participating at Devoxx brought me enough motivation to post my first blog entry. I am here for the first time and I am really impressed by how well it is organized. There is a record number of top speakers present, so choosing which presentation to attend is a real problem for me. But thanks to the organizers, all events will be available on parleys.com in late December and participants will receive a free subscription, so there is nothing to regret at the end of the presentations. The number of participants is also very impressive: 3350, from 60 different JUGs and 40 countries. The only drawback of the event's popularity is the shortage of wireless connections – only 800 IPs are available and most participants have a laptop plus a smartphone. For me the Kindle saved the day with its free Internet connection, but unfortunately for some speakers a few presentation demos did not work because of this. This year's main topics besides Java are HTML5, new languages and Android. I will share a few ideas I picked up today at the presentations I attended.

Oracle opening keynote and the JDK 7, 8 and 9 presentation: Oracle is committed to Java and wants to provide support for it on any device. Java SE 7 for Mac will be released next week. Oracle would like Java developers to get involved in the JCP, to adopt a JSR and to attend local JUG meetings. Java EE 7 will be released next year; it is focused on cloud integration, and some of the features are already implemented in the GlassFish 4 development branch. Java SE 8 will be released in the summer of 2013 due to an "enterprise community request", as enterprises cannot keep pace with an 18-month release cycle. The main features included in Java SE 8 are lambda support, project Jigsaw, a new Date/Time API, project Coin++ and added support for sensors.
Java SE 9 will probably focus on some of these features: a self-tuning JVM; improved native language integration; processing enhancements for big data; reification (adding runtime class type information for generic types); unification of primitives and their corresponding object classes; a meta-object protocol in order to use types and methods defined in other JVM languages; multi-tenancy; and JVM resource management.

Bleeding edge HTML5, by Paul Kinlan from Google: Web applications will become richer and more intelligent thanks to the addition of new APIs such as the Page Visibility API, Web Intents, WebRTC and the Web Audio API. The Page Visibility API determines whether a page is visible; pages can also verify online/offline connection status and be pre-rendered. Browsers will support Web Intents: custom apps registered to handle specific types of actions – for example, the user could register Flickr to handle sharing an image to a social network. WebRTC will make face-to-face communication possible without plugins. The Web Audio API adds support for complex audio processing.

NoSQL for Java developers, by Chris Richardson from SpringSource: Future applications will use polyglot persistence for different types of data; an interesting article about this was written today by Martin Fowler. NoSQL solves many problems but also has limitations, and careful thought has to be given up front because later changes will be costly. Queries, not the data model, drive NoSQL database design.

Productivity enhancements in Spring 3.1, by Costin Leau from SpringSource: Spring 3.1 brings the following improvements: an environment abstraction, where different types of beans/property files are activated with a profile setting such as dev, test or prod; improved Java-based configuration with new annotations; cache abstraction improvements, with implementations pluggable by implementing a few interfaces;
Spring MVC improvements; support for Servlet 3.0 and Java SE 7; and Hibernate 4.0 and Quartz 2.x support. RC2 will be released next week, shortly followed by the GA; Spring 3.2 will be released in the third quarter of 2012. Tomorrow will be another great day for Java in Antwerp. I will try to keep you posted with the notes I take during the presentations. If you are also at Devoxx, you can find me by looking for the guy in the world-renowned Transylvania JUG shirt. Reference: Transylvania JUG at Devoxx day 1 from our JCG partners at the Transylvania JUG. Related Articles : DOAG 2011 vs. Devoxx – Value and Attraction Java SE 7, 8, 9 – Moving Java Forward Java EE Past, Present, & Cloud 7 Official Java 7 for Mac OS X – Status The OpenJDK as the default Java on Linux Java Tutorials and Android Tutorials list...

DOAG 2011 vs. Devoxx – Value and Attraction

Yesterday (November 15, 2011) DOAG 2011 started in Nuremberg. I have been with the German Oracle Users Group for some years now, and it is always a pleasure to contribute to this great conference. A little drawback is that Devoxx is going on in parallel, and I am really sad to see so many people over there in Belgium whom I would have loved to meet. I guess that is what happens when different conferences run within a very short time frame: you have to make hard decisions about which one to attend. And even though I have spoken to Stephan and Fried about whether there is any chance to align the two conferences, it seems as if this is not going to happen. So, here are my thoughts about which of the two brings more value to you. As a regular reader of my blog, you are interested in Java EE, Oracle middleware (WebLogic or GlassFish) and probably some other products related to the former. DOAG is about the Oracle community. DOAG likes to call itself the "Oracle community". The conference has been the annual get-together of Oracle users for 23 years now. Three days of knowledge, current information on using Oracle solutions successfully, and the exchange of hands-on experience are the values that differentiate it from others. A very special focus lies on having mostly German content. Even if this has changed over the last few years and you see more and more English speakers around, it is still a mostly German-speaking event. It is supported by some of the German and European-based Oracle ACEs, and a lot of the current information comes directly from Oracle speakers coming over from the HQ for this event. The Java-related content is weak. Sorry to say, but I believe the "Java track" suffers from big competition with Devoxx. Another point is that nobody expects pure Java content at a conference focused on Oracle products. There is a little (friendly) fight going on with the UKOUG about who holds the biggest European Oracle conference.
I personally believe that they are more or less equal in size (according to the numbers I have heard). I cannot comment on quality because I have never been to the UKOUG conference. Devoxx is about the Java community. Devoxx is "The Java™ Community Conference". Rebranded from the former JavaPolis, it is basically the conference of the Belgian JUG. With its no. 1 speakers and topics it has become one of the main Java conferences around. Devoxx is a special blend of many IT disciplines, ranging from Java to scripting, FX to RIA, agile to enterprise, security to cloud and much more. Compared to others it might have only one real competitor, which is JavaOne. Stephan is doing a splendid job attracting the right kind of speakers to build a real first-class line-up. One big plus is the fact that Google isn't preventing its people from speaking there. Even though it is in Europe, this is an English-speaking conference. DOAG vs. Devoxx: what is this all about? Oracle vs. Java? Probably. Two of the bigger European conferences with different focus areas make it hard to decide which one to attend for people with one foot on each side. Writing this blog post helped me decide what my personal options are for next year. I would love to be in Belgium and meet the Java community again. But there is also this little red devil on my right shoulder telling me that I should keep in touch with the latest Oracle-related developments. If you are in a similar situation, you are now waiting for a recommendation, right? Here it is, brief and short: Looking for Oracle products? Come visit Germany. Looking for the best Java-related content out there besides JavaOne? Visit Devoxx. If you are interested in both areas, you will have to make your own decision and focus on the part that is most valuable to you. Reference: DOAG 2011 vs.
Devoxx – Value and Attraction from our JCG partner Markus Eisele at the “Enterprise Software Development with Java” blog. Related Articles :Java SE 7, 8, 9 – Moving Java Forward Java EE Past, Present, & Cloud 7 Official Java 7 for Mac OS X – Status The OpenJDK as the default Java on Linux Developing and Testing in the Cloud Java Tutorials and Android Tutorials list...

SOLID – Single Responsibility Principle

The Single Responsibility Principle (SRP) states that there should never be more than one reason for a class to change. We can relate the "reason to change" to "the responsibility of the class", so each responsibility is an axis of change. This principle is similar to designing classes that are highly cohesive. The idea is to design a class that has one responsibility – in other words, one that caters to implementing a single piece of functionality. I would like to clarify here that one responsibility doesn't mean the class has only ONE method: a responsibility can be implemented by means of several methods in the class. Why is this principle required? Imagine designing classes with more than one responsibility, i.e. implementing more than one piece of functionality. There is no one stopping you from doing this. But imagine the amount of internal dependency such a class can accumulate over the course of development. When you are asked to change a certain functionality, you are not really sure how the change will impact the other functionalities implemented in the class. The change might or might not impact other features, but you really can't take the risk, especially in production applications, so you end up testing all the dependent features. You might say that you have automated tests and the number of tests to be checked is low, but imagine the impact over time: these kinds of changes accumulate, owing to the viscosity of the code, making it really fragile and rigid. One way to correct a violation of SRP is to decompose the class's functionality into different classes, each of which conforms to SRP. An example to clarify this principle: suppose you are asked to implement a UserSetting service where the user can change the settings, but before that the user has to be authenticated.
One way to implement this would be:

public class UserSettingService {
    public void changeEmail(User user) {
        if (checkAccess(user)) {
            // Grant option to change
        }
    }

    public boolean checkAccess(User user) {
        // Verify if the user is valid.
        return true; // placeholder
    }
}

All looks good, until you want to reuse the checkAccess code somewhere else, OR you want to change the way checkAccess is done, OR you want to change the way email changes are approved. In the latter two cases you would end up changing the same class, and in the first case you would have to use UserSettingService just to check for access, which is unnecessary. One way to correct this is to decompose UserSettingService into UserSettingService and SecurityService, and move the checkAccess code into SecurityService:

public class UserSettingService {
    public void changeEmail(User user) {
        if (SecurityService.checkAccess(user)) {
            // Grant option to change
        }
    }
}

public class SecurityService {
    public static boolean checkAccess(User user) {
        // Check the access.
        return true; // placeholder
    }
}

Another example: suppose there is a requirement to download a file – in CSV/JSON/XML format – parse it, and then update the contents into a database or file system. One approach would be:

public class Task {
    public void downloadFile(String location) {
        // Download the file
    }

    public void parseTheFile(String file) {
        // Parse the contents of the file - XML/JSON/CSV
    }

    public void persistTheData(String data) {
        // Persist the data to a database or file system.
    }
}

Looks good: all in one place, easy to understand. But what about the number of times this class has to be updated? What about the reusability of the parser code, or the download code? It is not a good design in terms of the reusability of the different parts of the code, or in terms of cohesiveness. One way to decompose the Task class is to create different classes for downloading the file (Downloader), for parsing the file (Parser), and for persisting to the database or file system.
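A minimal sketch of that decomposition might look as follows. The Downloader and Parser names come from the text; the Persister name and the placeholder method bodies are my own assumptions:

```java
// Each class now has a single reason to change.
class Downloader {
    byte[] downloadFile(String location) {
        // Fetch the raw bytes from the given location (stubbed here).
        return ("contents of " + location).getBytes();
    }
}

class Parser {
    String parse(byte[] file) {
        // Parse XML/JSON/CSV contents into a domain representation (stubbed).
        return new String(file).toUpperCase();
    }
}

class Persister {
    boolean persist(String data) {
        // Write the parsed data to a database or file system (stubbed).
        return data != null && !data.isEmpty();
    }
}

// The original Task becomes a thin coordinator of the three collaborators.
class Task {
    private final Downloader downloader = new Downloader();
    private final Parser parser = new Parser();
    private final Persister persister = new Persister();

    boolean run(String location) {
        return persister.persist(parser.parse(downloader.downloadFile(location)));
    }

    public static void main(String[] args) {
        System.out.println(new Task().run("report.csv")); // prints "true"
    }
}
```

Now a change to the parsing format touches only Parser, and Downloader or Persister can be reused by other tasks without dragging the rest along.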
Even in the JDK you may have seen that Rectangle2D and the other Shape classes in the java.awt package don't really contain information about how they are drawn on the UI; the drawing information is embedded in the Graphics/Graphics2D classes. A detailed description can be found here. Reference: SOLID – Single Responsibility Principle from our JCG partner Mohamed Sanaulla at the "Experiences Unlimited" blog. Related Articles : Are frameworks making developers dumb? Not doing Code Reviews? What's your excuse? Why Automated Tests Boost Your Development Speed Using FindBugs to produce substantially less buggy code Things Every Programmer Should Know Java Tutorials and Android Tutorials list...

Testing Techniques – Not Writing Tests

There's not much doubt about it: the way you test your code is a contentious issue. Different test techniques find favour with different developers for varying reasons, including corporate culture, experience and general psychological outlook. For example, you may prefer writing classic unit tests that test an object's behaviour in isolation by examining return values; you may favour classic stubs, or fake objects; or you may like using mock objects to mock roles, or even using mock objects as stubs. This and my next few blogs take part of a very, very common design pattern and examine different approaches you could take in testing it. The design pattern I'm using is shown in the UML diagram below; it's something I've used before, mainly because it is so common. You may not like it – it is more 'ask don't tell' than 'tell don't ask' in its design – but it suits this simple demo. In this example, the ubiquitous pattern above will be used to retrieve and validate an address from a database. The sample code, available from my GitHub repository, takes a simple Spring MVC webapp as its starting point and uses a small MySQL database to store the addresses, for no other reason than that I already have a server running locally on my laptop. So far as testing goes, these blogs will concentrate upon testing the service-layer component AddressService:

@Component
public class AddressService {

    private static final Logger logger = LoggerFactory.getLogger(AddressService.class);

    private AddressDao addressDao;

    /**
     * Given an id, retrieve an address. Apply phony business rules.
     *
     * @param id
     *            The id of the address object.
     */
    public Address findAddress(int id) {
        logger.info("In Address Service with id: " + id);
        Address address = addressDao.findAddress(id);
        businessMethod(address);
        logger.info("Leaving Address Service with id: " + id);
        return address;
    }

    private void businessMethod(Address address) {
        logger.info("in business method");
        // Do some jiggery-pokery here....
    }

    @Autowired
    void setAddressDao(AddressDao addressDao) {
        this.addressDao = addressDao;
    }
}

...as demonstrated by the code above, which you can see is very simple: it has a findAddress(...) method that takes as its input the id (the table's primary key) of a single address. It calls a Data Access Object (DAO), pretends to do some business processing and returns the Address object to the caller.

public class Address {

    private final int id;
    private final String street;
    private final String town;
    private final String country;
    private final String postCode;

    public Address(int id, String street, String town, String postCode, String country) {
        this.id = id;
        this.street = street;
        this.town = town;
        this.postCode = postCode;
        this.country = country;
    }

    public int getId() {
        return id;
    }

    public String getStreet() {
        return street;
    }

    public String getTown() {
        return town;
    }

    public String getCountry() {
        return country;
    }

    public String getPostCode() {
        return postCode;
    }
}

As I said above, I'm going to cover different strategies for testing this code, some of which I guarantee you'll hate. The first one, still widely used by many developers and organisations, is... Don't Write Any Tests. Unbelievably, some people and organisations still do this. They write their code, deploy it to the web server and open a page. If the page opens, they ship the code; if it doesn't, they fix the code, compile it, redeploy it, reload the web browser and retest. The most extreme example I've ever seen of this technique – changing the code, deploying to a server, running the code, spotting a bug and going around the loop again – was a couple of years ago on a prestigious government project. The sub-contractor had, I guess to save money, imported a load of cheap and very inexperienced programmers from 'off-shore' and didn't have enough experienced programmers to mentor them.
The module in question was a simple Spring-based Message Driven Bean that took messages from one queue, applied a little business logic and pushed them onto another queue: simples. The original author started out by writing a few tests, but then passed the code on to other, inexperienced team members. When the code changed and a test broke, they simply switched off all the tests. Testing consisted of deploying the MDB to the EJB container (WebLogic), pushing a message into the front of the system, watching what came out of the other end and debugging the logs along the way. You may say that an end-to-end test like this isn't too bad, BUT deploying the MDB and running the test took just over an HOUR: in a working day, that's 8 code changes. Not exactly rapid development! My job? To fix the process and the code. The solution? Write tests, run tests and refactor the code. The module went from having zero tests to about 40 unit tests and a few integration tests, and it was improved and finally delivered. Done, done. Most people will have their own opinions on this technique; mine are that it produces unreliable code; that it takes longer to write and ship code this way because you spend loads of time waiting for servers to start, WARs/EJBs to be deployed and so on; and that it's generally used by more inexperienced programmers, or those who haven't yet suffered by using it – and you do suffer. I can say that I've worked on projects where I was writing tests whilst other developers weren't. The test team found very few bugs in my code, whilst those other developers were fixing loads of bugs and going frantic trying to meet their deadlines. Am I a brilliant programmer, or does writing tests pay dividends? From experience, if you use this technique you will have lots of additional bugs to fix, because you can't easily and repeatably test the multitude of scenarios that accompany the story you're developing.
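For contrast, a repeatable check of a service like the AddressService above needs nothing more than a stubbed DAO and takes milliseconds, not an hour. Here is a minimal plain-Java sketch; the simplified Address and AddressDao shapes are assumed from the article, and the Spring wiring and logging are omitted:

```java
public class AddressServiceCheck {
    // Minimal versions of the article's types, just enough to test against.
    interface AddressDao {
        Address findAddress(int id);
    }

    static class Address {
        private final int id;
        private final String street;
        Address(int id, String street) { this.id = id; this.street = street; }
        int getId() { return id; }
        String getStreet() { return street; }
    }

    static class AddressService {
        private final AddressDao addressDao;
        AddressService(AddressDao addressDao) { this.addressDao = addressDao; }
        Address findAddress(int id) {
            // Delegate to the DAO; the phony business rules would go here.
            return addressDao.findAddress(id);
        }
    }

    public static void main(String[] args) {
        // Stub the DAO so no database, server or deployment is needed.
        AddressDao stub = new AddressDao() {
            @Override
            public Address findAddress(int id) {
                return new Address(id, "15 My Street");
            }
        };
        AddressService service = new AddressService(stub);
        Address result = service.findAddress(42);
        System.out.println(result.getId() + " " + result.getStreet()); // prints "42 15 My Street"
    }
}
```

In a real project the checks in main would live in a unit test so they run on every build.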
This is because it simply takes too long: you have to remember each scenario and then run it manually. I do wonder whether the not-writing-tests technique is a hangover from the 1960s, when computing time was expensive and you had to write programs by hand on punched cards or paper tape and then check them visually using a 'truth table'. Once you were happy that your code worked, you sent it to the machine room and ran it – I'm not old enough to remember computing in the 60s. The fact that machine time was expensive meant that automated testing was out of the question. Although computers got faster, this obsolete paradigm continued, degenerating into one where you skipped the diligent mental check and just ran the code, and if it broke you fixed it. This degenerate paradigm was (is?) still taught in schools, colleges and books, and was unchallenged until the last few years. Is this why it can be quite hard to convince people to change their habits? Another major problem with this technique is that a project can descend into a state of paralysis. As I said above, with this technique your bug count will be high, which gives project managers the impression that the code stinks and reinforces the idea that you don't change the code unless absolutely necessary, as you might break something. Managers become hesitant about authorising code changes, often having no faith in the developers and micro-managing them. Indeed, the developers themselves become very hesitant about changing the code, as breaking something will make them look bad. The changes they do make are as small as possible and come without any refactoring. Over time this adds to the mess, and the code degenerates even more, becoming a bigger ball of mud.
Whilst I think that you should load and review a page to ensure that everything's working, it should only be done at the end of the story, once you have a bundle of tests telling you that your code works okay. I hope I'm not being contentious when I sum this method up by saying that it sucks, though time will tell. You may also wonder why I included it; the reason is to point out that it sucks and to offer some alternatives in my following blogs. Reference: Testing Techniques – Part 1 – Not Writing Tests from our JCG partner Roger Hughes at the Captain Debug blog. Related Articles : The Misuse of End To End Tests – Testing Techniques 2 What Should you Unit Test? – Testing Techniques 3 Regular Unit Tests and Stubs – Testing Techniques 4 Unit Testing Using Mocks – Testing Techniques 5 Creating Stubs for Legacy Code – Testing Techniques 6 More on Creating Stubs for Legacy Code – Testing Techniques 7 Why You Should Write Unit Tests – Testing Techniques 8 Some Definitions – Testing Techniques 9 Using FindBugs to produce substantially less buggy code Developing and Testing in the Cloud...
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.