
What’s better – Big Fat Tests or Little Tests?

Like most startups, we built a lot of prototypes and wrote and threw out a lot of code as we tried out different ideas. Because we were throwing out the code anyway, we didn’t bother writing tests – why write tests that you’ll just throw away too?

But as we ramped the team up to build the prototype out into a working system, we got into trouble early. We were pushing our small test team too hard trying to keep up with changes and new features, while still trying to make sure that the core system was working properly. We needed to get a good automated test capability in place fast.

The quickest way to do this was by writing what Michael Feathers calls “Characterization Tests”: automated tests – written at inflection points in an existing code base – that capture the behavior of parts of a system, so that you know if you’ve affected existing behavior when you change or fix something. Once you’ve reviewed these tests to make sure that what the system is doing is actually what it is supposed to be doing, the tests become an effective regression tool.

The tests that we wrote to do this are bigger and broader than unit tests – they’re fat developer-facing tests that run beneath the UI and validate a business function or a business rule involving one or more system components or subsystems. Unlike customer-facing functional tests, they don’t require manual setup or verification. Most of these tests are positive, happy path tests that make sure that important functions in the system are working properly, and that exercise validation functions.

Using fat and happy tests as a starting point for test automation is described in the Continuous Delivery book. The idea is to automate high-value, high-risk test scenarios that cover as much of the important parts of the system as you can with a small number of tests. This gives you a “smoke test” to start, and the core of a test suite.

Today we have thousands of automated tests that run in our Continuous Integration environment. Developers write small unit tests, especially in new parts of the code and where we need to test through a lot of different logical paths and variations quickly. But a big part of our automated tests are still fat, or at least chubby, functional component tests and linked integration tests that explore different paths through the main parts of the system.

We use code coverage analysis to identify weak spots, areas where we need to add more automated tests or do more manual testing. Using a combination of unit tests and component tests we get high (90%+) test coverage in core parts of the application, and we exercise a lot of the general plumbing of the system regularly.

It’s easy to test server-side services this way, using a common pattern: set up initial state in a database or memory, perform some action using a message or API call, verify the expected results (including messages and database changes and in-memory state) and then roll back state and prepare for the next test.

We also have hundreds of much bigger and fatter integration and acceptance tests that test client UI functions and client API functions through to the server. These “really big fat” tests involve a lot more setup work and have more moving parts, are harder to write, require more maintenance, and take longer to run. They are also more fragile and need to be changed more often. But they test real end-to-end scenarios that can catch real problems like intermittent system race conditions as well as regressions.
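To make that setup/act/verify/rollback pattern concrete, here is a minimal, self-contained sketch in JUnit 4. It is not the article’s actual code: the AccountStore and TransferService classes are made-up stand-ins for whatever store and service API the real system has.

import static org.junit.Assert.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class TransferComponentTest {

    // Hypothetical in-memory "database" that can snapshot and roll back state.
    static class AccountStore {
        final Map<String, Integer> balances = new HashMap<String, Integer>();
        private Map<String, Integer> snapshot;
        void begin() { snapshot = new HashMap<String, Integer>(balances); }
        void rollback() { balances.clear(); balances.putAll(snapshot); }
    }

    // Hypothetical service under test, driven through its API, beneath any UI.
    static class TransferService {
        final AccountStore store;
        TransferService(AccountStore store) { this.store = store; }
        void transfer(String from, String to, int amount) {
            store.balances.put(from, store.balances.get(from) - amount);
            store.balances.put(to, store.balances.get(to) + amount);
        }
    }

    private AccountStore store;
    private TransferService service;

    @Before
    public void setUpInitialState() {
        store = new AccountStore();
        store.balances.put("A", 100);
        store.balances.put("B", 0);
        store.begin();                 // remember the state we roll back to
        service = new TransferService(store);
    }

    @Test
    public void transferMovesMoneyBetweenAccounts() {
        service.transfer("A", "B", 40);  // perform an action through the API

        // verify the expected results: state changes in the store
        assertEquals(60, store.balances.get("A").intValue());
        assertEquals(40, store.balances.get("B").intValue());
    }

    @After
    public void rollBackState() {
        store.rollback();              // prepare for the next test
    }
}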
What’s good and bad about fat tests?

There are advantages and disadvantages in relying on fat tests.

First, bigger tests have more dependencies. They need more setup work and more test infrastructure, they have more steps, and they take longer to run than unit tests. You need to take time to design a test approach and to create templates and utilities to make it easy to write and maintain bigger tests.

You’ll end up with more waste and overlap: common code that gets exercised over and over, just like in the real world. You’ll have to put in better hardware to run the tests, and testing pipelines so that more expensive testing (like the really fat integration and acceptance testing) is done later and less often.

Feedback from big tests isn’t as fast or as direct when tests fail. Gerard Meszaros points out that the bigger the test, the harder it is to understand what actually broke – you know that there is a real problem, but you have more digging to do to figure out where the problem is. Feedback to the developer is less immediate: bigger tests run slower than small tests and you have more debugging work to do. We’ve done a lot of work on providing contextual information when tests fail so that programmers can move faster to figuring out what’s broken. And from a regression test standpoint, it’s usually obvious that whatever broke the system is whatever you just changed, so….

As you work more on a large system, it is less important to get immediate and local feedback on the change that you just made and more important to make sure that you didn’t break something else somewhere else, that you didn’t make an incorrect assumption or break a contract of some kind, or introduce a side-effect. Big component tests and interaction tests help catch important problems faster. They tell you more about the state of the system, how healthy it is. You can have a lot of small unit tests that are passing, but that won’t give you as much confidence as a smaller number of fat tests that tell you that the core functions of the system are working correctly.

Bigger tests also tell you more about what the system does and how it works. I don’t buy the idea that tests make for good documentation of a system – at least unit tests don’t. It’s unrealistic to expect a developer to pick up how a system works from looking at hundreds or thousands of unit tests. But new people joining a team can look at functional tests to understand the important functions of the system and what the rules of the system are. And testers, even non-technical manual testers, can read the tests and understand what test scenarios are covered and what aren’t, and use this to guide their own testing and review work.

Meszaros also explains that good automated developer tests, even tests at the class or method level, should always be black box tests, so that if you need to change the implementation in refactoring or for optimization, you can do this without breaking a lot of tests. Fat tests make these black boxes bigger, raising them to the component or service level. This makes it even easier to change implementation details without having to fix tests – as long as you don’t change public interfaces and public behavior (which are dangerous changes to make anyway), the tests will still run fine.

But this also means that you can make mistakes in implementation that won’t be caught by functional tests – behavior outside of the box hasn’t changed, but something inside the box might still be wrong, a mistake that won’t trip you up until later.
Fat tests won’t find these kinds of mistakes, and they won’t catch other detailed mistakes like missing some validation. It’s harder to write negative tests and to test error handling code this way, because the internal exception paths are often blocked at a higher level. You’ll need other kinds of testing, including unit tests, manual exploratory testing and destructive testing, to check edge cases and catch problems in exception handling.

Would we do it this way again?

I’d like to think that if we started something brand new again, we’d start off in a more disciplined way, test first and all that. But I can’t promise. When you are trying to get to the right idea as quickly as possible, anything that gets in the way and slows down thinking and feedback is going to be put aside. It’s once you’ve got something that is close-to-right and close-to-working and you need to make sure that it keeps working that testing becomes an imperative.

You need both small unit tests and chubby functional tests and some big fat integration and end-to-end tests to do a proper job of automated testing. It’s not an either/or argument. But writing fat, functional and interaction tests will pay back faster in the short term, because you can cover more of the important scenarios faster with fewer tests. And they pay back over time in regression, because you always know that you aren’t breaking anything important, and you know that you are exercising the paths and scenarios that your customers are or will be using – the paths and scenarios that should be tested all of the time. When it comes to automated testing, some extra fat is a good thing.

Reference: What’s better – Big Fat Tests or Little Tests? from our JCG partner Jim Bird at the Building Real Software blog.

Java memes which refuse to die

Also titled: My pet hates in Java coding. There are a number of Java memes which annoy me, partly because they were always a bad idea, but mostly because people keep picking them up years after better alternatives appeared.

Using StringBuffer instead of StringBuilder

The Javadoc for StringBuffer from 2004 states: “As of release JDK 5, this class has been supplemented with an equivalent class designed for use by a single thread, StringBuilder. The StringBuilder class should generally be used in preference to this one, as it supports all of the same operations but it is faster, as it performs no synchronization.” Not only is StringBuilder a better choice, the occasions where you could have used a synchronized StringBuffer are so rare that it’s unlikely it was ever a good idea. Say you had the code:

// run in two threads
sb.append(key).append("=").append(value).append(", ");

Each append is thread safe, but the lock could be released at any point, meaning you could get

key1=value1, key2=value2,
key1=key2value1=, value2,
key1key2==value1value2, ,

What makes it worse is that the JIT and JVM will attempt to hold onto the lock between calls in the interests of efficiency. This means you can have code which passes all your tests and works in production for years, but then very rarely breaks, possibly due to upgrading your JVM.

Using DataInputStream to read text

Another common meme is using DataInputStream when reading text, in the following template (three lines with the two readers on the same line). I suspect there is one original piece of code which gets copied around.

FileInputStream fstream = new FileInputStream("filename.txt");
DataInputStream in = new DataInputStream(fstream);
BufferedReader br = new BufferedReader(new InputStreamReader(in));

This is bad for three reasons:
- You might be tempted to use in to read binary, which won’t work due to the buffered nature of BufferedReader. (I have seen this tried.)
- Similarly, you might believe that DataInputStream does something useful here when it doesn’t.
- There is a much shorter way which is correct:

BufferedReader br = new BufferedReader(new FileReader("filename.txt"));

// or with Java 7.
try (BufferedReader br = new BufferedReader(new FileReader("filename.txt"))) {
    // use br
}

Using Double-Checked Locking to create a Singleton

When double-checked locking was first used it was a bad idea, because the JVM didn’t support this operation safely.

// Singleton with double-checked locking:
public class Singleton {
    private volatile static Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null) {
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}

The problem was that until Java 5.0, this usually worked but wasn’t guaranteed by the memory model. There was a simpler option which was safe and didn’t require explicit locking.

// suggested by Bill Pugh
public class Singleton {
    // Private constructor prevents instantiation from other classes
    private Singleton() { }

    /**
     * SingletonHolder is loaded on the first execution of Singleton.getInstance()
     * or the first access to SingletonHolder.INSTANCE, not before.
     */
    private static class SingletonHolder {
        public static final Singleton INSTANCE = new Singleton();
    }

    public static Singleton getInstance() {
        return SingletonHolder.INSTANCE;
    }
}

This was still verbose, but it worked and didn’t require an explicit lock, so it could be faster.
In Java 5.0, when they fixed the memory model to handle double-checked locking safely, they also introduced enums, which gave you a much simpler solution. In the second edition of his book Effective Java, Joshua Bloch claims that “a single-element enum type is the best way to implement a singleton”. With an enum, the code looks like this:

public enum Singleton {
    INSTANCE;
}

This is lazily loaded, thread safe, without explicit locks and much simpler.

Reference: Java memes which refuse to die from our JCG partner Peter Lawrey at the Vanilla Java blog.
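As a small addendum to the enum point above – a sketch of my own, not from Peter’s post – the enum approach also carries state and behavior naturally, which is where it beats the holder idiom in day-to-day use. The RequestCounter name is illustrative only:

import java.util.concurrent.atomic.AtomicLong;

// My own sketch (not from the original post): an enum singleton with state.
// The JVM guarantees a single instance, safe lazy initialization and
// correct serialization, all for free.
public enum RequestCounter {
    INSTANCE;

    private final AtomicLong count = new AtomicLong();

    public long increment() {
        return count.incrementAndGet();
    }

    public long current() {
        return count.get();
    }
}

// usage: RequestCounter.INSTANCE.increment();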

Bcrypt, Salt. It’s The Bare Minimum.

The other day I read this Ars Technica article and realized how tragic the situation is. And it is not this bad because of the evil hackers. It’s bad because few people know how to handle one very common thing: authentication (signup and login). But it seems even cool companies like LinkedIn and Yahoo do it wrong (tons of passwords have leaked recently).

Most of the problems described in the article are solved with bcrypt. And using salt is a must. Other options are also acceptable – PBKDF2 and probably SHA-512. Note that bcrypt is not a hash function, it’s an algorithm that is specifically designed for password storage. It has its own salt generation built in. Here are two Stack Exchange questions on the topic: this and this. Jeff Atwood has also written on the topic some time ago.

What is salt? It’s a random string (a series of bits, to be precise, but for the purpose of password storage, let’s view it as a string) that is appended to each password before it is hashed. So “mypassword” may become “543abc7d9fab773fb2a0mypassword”. You then add the salt every time you need to check if the password is correct (i.e. salt + password should generate the same hash that is stored in the database). How does this help? First, rainbow tables (tables of precomputed hashes for character combinations) can’t be used. Rainbow tables are generated for shorter passwords, and a big salt makes the password huge. Brute force is still possible, as the attacker knows your salt, so he can just brute-force salt + (set of attempted passwords). Bcrypt, however, addresses brute force, because it is intentionally “slow”.

So, use salt. Prefer bcrypt. And that’s not just if you have to be super-secure – that’s the absolute minimum for every website out there that stores passwords. And don’t say “my site is just a forum, what can happen if someone gets the passwords”. Users tend to reuse passwords, so their password for your stupid site may also be their email or Facebook password. So take this seriously, whatever your website is, because you are risking the security of your users outside your premises. If you think it’s hard to use bcrypt, then don’t use passwords at all. Use “Login with Facebook/Twitter”, OpenID (which is actually harder than using bcrypt) or another form of externalized authentication.
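To show how little code this takes, here is a minimal sketch using the jBCrypt library (org.mindrot.jbcrypt). The article doesn’t prescribe a particular implementation, so the library choice here is my assumption:

import org.mindrot.jbcrypt.BCrypt;

// Minimal sketch, assuming the jBCrypt library is on the classpath.
public class PasswordStore {

    // On signup: gensalt() generates the salt and hashpw() embeds it in the
    // result, so the returned string is the only thing you need to store.
    public String hashForStorage(String plainPassword) {
        return BCrypt.hashpw(plainPassword, BCrypt.gensalt(12)); // 12 = work factor
    }

    // On login: checkpw() reads the salt back out of the stored hash and
    // compares the candidate against it.
    public boolean matches(String candidate, String storedHash) {
        return BCrypt.checkpw(candidate, storedHash);
    }
}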
Having used the word “minimum” a couple of times, I’ll proceed with a short list of things to consider in terms of web security that should be done in addition to the minimum requirement of using salt. If you are handling money, or some other very important stuff, you can’t afford to stay at the bare minimum:

- Use https everywhere. Insecure session cookies can be sniffed and the attacker can “steal” the user’s session.
- One-time tokens – send short-lived tokens (codes) via SMS, or login links via email, that are used for authentication. That way you don’t even need passwords (you move the authentication complexity to the mobile network / the email provider).
- Encourage the use of passphrases rather than passwords – short passwords are easier to brute-force, but long passwords are hard to remember. That’s why you could encourage your users to use a passphrase, like “dust in the wind” or “who let the dogs out”, which are easy to remember but hard to attack. (My signup page has an example of a subtle encouragement.)
- Require additional verification for highly sensitive actions, and don’t allow changing emails if the login was automatic (performed with a long-lived “remember me” cookie).
- Lock accounts after consecutive failed logins – “brute force” should only be usable if the attacker gets hold of your database; it should not happen through your interface.
- Use certificates for authentication – public-key cryptography can be used to establish mutual trust between the user and the server: the user knows the server is the right one, and the server knows the user is not a random person that somehow obtained the password.
- Use hardware tokens – digital signatures work the same as the above option, but the certificates are stored on hardware devices and cannot be extracted from there, so only the owner of the physical device can authenticate.

Web security is a complex field. Hello-world examples must not be followed for real-world systems. Consider all implications for your users outside your system. Bottom line: use bcrypt.

Reference: Bcrypt, Salt. It’s The Bare Minimum. from our JCG partner Bozhidar Bozhanov at Bozho’s tech blog.

Overqualified is Overdiagnosed

I’ve been inspired by comments on prior articles to discuss the sensitive topics of ‘overqualification’ and ageism. My Why You Didn’t Get The Job and Why You Didn’t Get The Interview posts were republished on a few sites, resulting in some active debates where at some point a participant would state that the real reason they weren’t hired was that they were overqualified for all the jobs out there, or that they were victims of ageism. In my opinion and experience recruiting in the software engineering world, the term overqualified is used too widely by companies (and then inaccurately cited by rejected candidates), and claims of alleged ageism are often something else entirely. Before we begin, I acknowledge that companies want to hire cheaper labor when possible, and some shops care less about quality products than others. And for the record, I’m over 40.

By saying you are overqualified for jobs, what are you really saying? “I am more skilled or more experienced than the job requires.” That feels kind of good, doesn’t it?

SPOUSE: How did the interview go?
JOB SEEKER: I didn’t get the job.
SPOUSE: Oh, I’m sorry. What happened?
JOB SEEKER: Unfortunately, it turns out my skills are simply too strong.

Of course rejection hurts, but to tell your spouse (and yourself) that you were turned down because you were too skilled or too experienced is much less bruising on the ego than the alternative. For companies looking to eliminate candidates, using the word overqualified may take some of the sting and fear of retribution out of the rejection. But is it true?

Think about this scenario for a second. You are trying to hire a software developer and you estimate that someone with, say, five years of experience should be able to handle the duties effectively. A candidate is presented with fifteen years of experience who has all the attributes you are seeking. This person should theoretically perform the tasks quicker and even take on some additional workload. Do you really think a company would not hire this person simply because he/she has those additional years of experience? I would argue that is rarely the case.

Question: Is ‘overqualified’ a code word used by managers/HR to mean other things?
Answer: ALMOST ALWAYS

What can overqualified actually mean? Listed in order from most likely to least likely, IMO:

- Overpaid/over budget – If your experience > what is required, it generally becomes a problem when your salary requirements are above what is budgeted. It’s not that you are classified as overpaid in your current role, but that you would be overpaid for the level of responsibility at the new job. I list this as the most likely culprit because I often see companies initially reject a candidate as overqualified, then hire that same person because of a lack of less experienced quality talent.
- Stagnant – Candidates who have worked for many years as a developer in a technically stagnant and regulated environment will often not thrive in less regulated, more technically diverse firms. The conventional wisdom, right or wrong, is that you can’t release the zoo lions back into the jungle once they’ve been tamed.
- ‘Overskilled’ – If your skills > what is necessary for the job, an employer may fear that the lack of challenges will bore you into looking for more interesting work in the future. Hiring a tech lead to do bug fixes could lead to a short stint. There is emerging evidence that shows skilled workers do not exit less challenging jobs quickly or in high numbers, but hiring managers are not quite ready to abandon the traditional line of thinking.
- Threatening – If your experience > that of those conducting the interviews, there could be some fear that you could be a competitor for future opportunities for promotion. If a start-up is yet to hire a CTO, the highest geek on that firm’s food chain may be jockeying for the role. This may sound a bit like a paranoid conspiracy theory, but I genuinely believe it is prevalent enough to mention.
- Too old – Ageism is a real problem, but in my experience in the software world, ageism is also widely overdiagnosed by candidates who think the problem is their age when in actuality it is their work history. Most of the self-diagnosed claims of ageism that I hear are from candidates who spent perhaps 20+ years working for the same company and have not focused on keeping their skills up to date (see Stagnant above). I can’t say that I’ve ever heard a claim of ageism from a candidate that has moved around in their career and stayed current with technology. The problem often isn’t age, it is relevance.

Some of the best and most accomplished/successful software engineering professionals that I know are over 50, which is older than some of the candidates I hear claiming possible ageism. One trait that the overwhelming majority of these engineers have in common is that they didn’t stay in any one place long enough to stagnate. I don’t think that is a coincidence.

If you are an active job seeker who continuously hears that you are overqualified, what can you do to improve your standing?

- Rethink – Try to investigate which of the meanings of overqualified you are hearing most often. Is your compensation in line with what companies are paying for your set of qualifications? Do you present yourself in interviews as someone who may become easily bored when the work is less challenging? Are you making it clear in interviews that you want the job, and do you explain why you want it?
- Retool – Make sure your skills are relevant and being sought by companies. Invest time in learning an emerging technology or developing some niche specialty that isn’t already flooded.
- Remarket – Write down the top reasons you think a company should hire you, and then check whether those reasons are represented in your job search materials (resume, email application, cover letters). Find out what was effective for your peers in their job search and try to implement new self-promotion tactics.
- Reboot and refresh – Take a new look at your options beyond the traditional career paths. Have you considered consulting or contracting roles where your guidance and mentoring skills could be justified and valued for temporary periods? Are there emerging markets that interest you?

Terms like ‘overqualified’ and ‘not a fit’ are unfortunately the laziest, easiest, and safest ways that companies can reject you for a position, and they almost always mean something else. Discovering the real reason you were passed up is necessary to make the proper adjustments so you can get fewer rejections and more offers.

Reference: Overqualified is Overdiagnosed from our JCG partner Dave Fecak at the Job Tips For Geeks blog.

Rewrite to the edge – getting the most out of it! On GlassFish!

A great topic for modern application development is rewriting. Since the introduction of JavaServer Faces and the new lightweight programming model in Java EE 6, you have been struggling to get pretty, simple, bookmarkable URLs. PrettyFaces had been out there for some time, and even if it could be called mature at version 3.3.3, I wasn’t convinced – mainly because of the fact that I had to configure it in XML. If you ever did a JSF project, you know that this is something you do on top later on, or never, with the latter being the option I have seen a lot. Rewrite is going to change that. Programmatic, easy to use and highly customizable – exactly what I was looking for.

Getting Started

Nothing is as easy as getting started with stuff coming from one of the RedHat guys. Fire up NetBeans, create a new Maven-based webapp, add JSF and PrimeFaces to the mix and run it on GlassFish. The first step for adding rewriting magic to your application is to add the Rewrite dependency to your project:

<dependency>
    <groupId>org.ocpsoft.rewrite</groupId>
    <artifactId>rewrite-servlet</artifactId>
    <version>1.1.0.Final</version>
</dependency>

That isn’t enough: since I am going to use it together with JSF, you also need the JSF integration.

<dependency>
    <groupId>org.ocpsoft.rewrite</groupId>
    <artifactId>rewrite-integration-faces</artifactId>
    <version>1.1.0.Final</version>
</dependency>

Next, implement your own ConfigurationProvider. This is the central piece where most of the magic happens. Let’s call it TricksProvider for now; we also extend the abstract HttpConfigurationProvider. A simple first version looks like this:

public class TricksProvider extends HttpConfigurationProvider {

    @Override
    public int priority() {
        return 10;
    }

    @Override
    public Configuration getConfiguration(final ServletContext context) {
        return ConfigurationBuilder.begin()
            .addRule(Join.path("/").to("/welcomePrimefaces.xhtml"));
    }
}

Now you have to register your ConfigurationProvider. You do this by adding a simple text file named org.ocpsoft.rewrite.config.ConfigurationProvider to your application’s /META-INF/services/ folder. Add the fully qualified name of your ConfigurationProvider implementation to it, fire up your application, and you are done.

The Rewriting Basics

While copying the above provider you implicitly added your first rewriting rule: by requesting http://host:8080/yourapp/ you get directly forwarded to the PrimeFaces welcome page generated by NetBeans. All rules are based on the same principle. Every single rule consists of a condition and an operation – something like “If X happens, do Y”. Rewrite knows two different kinds of rules: some preconfigured ones (Join), starting with addRule(), and a fluent interface starting with defineRule(). This is a bit confusing, because the next major release will deprecate defineRule() and rename it to addRule(), so most of the examples you find (especially the test cases in the latest trunk) do not work with 1.1.0.Final.

Rewrite knows about two different directions: inbound and outbound. Inbound most likely works like every rewriting engine you know (e.g. mod_rewrite): a request arrives and is forwarded or redirected to the resources defined in your rules. The outbound direction is a little less: it basically has a hook in the encodeURL() method of the HttpServletResponse and rewrites the links you have in your pages (if they get rendered with the help of encodeURL() at all). JSF does this out of the box. If you are thinking of using it with JSPs, you have to make sure to call it yourself.
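To make that last remark concrete, here is a minimal sketch of my own (not from the original post) of rendering a link through encodeURL() by hand, which is the hook Rewrite’s outbound rules rely on; the LinkServlet name and the /test.xhtml target are illustrative only:

import java.io.IOException;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LinkServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Only links passed through encodeURL() are visible to outbound rules,
        // so when rendering markup by hand you must call it yourself.
        String link = resp.encodeURL(req.getContextPath() + "/test.xhtml");
        resp.setContentType("text/html");
        resp.getWriter().println("<a href=\"" + link + "\">Test</a>");
    }
}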
Forwarding .html to .xhtml with some magic

Let’s look at some stuff you could do with Rewrite. First we add the following to the TricksProvider:

.defineRule()
    .when(Direction.isInbound()
        .and(Path.matches("{name}.html").where("name").matches("[a-zA-Z/]+")))
    .perform(Forward.to("{name}.xhtml"));

This is a rule which looks at inbound requests, checks for all Path matches {name}.html which conform to the regular expression pattern [a-zA-Z/]+, and forwards those to {name}.xhtml files. If this rule is in place, all requests to http://host:8080/yourapp/something.html will end up being forwarded to something.xhtml. Now your users will no longer know that you are using fancy JSF stuff underneath and will believe you are working with plain HTML :) If a URL which doesn’t match the regular expression is requested, for example something like http://host:8080/yourapp/something123.html, it simply isn’t forwarded, and if something123.html isn’t present in your application you will end up receiving a 404 error.

Rewriting Outbound Links

The other way round, you could also add the following rule:

.defineRule()
    .when(Path.matches("test.xhtml")
        .and(Direction.isOutbound()))
    .perform(Substitute.with("test.html"))

You can imagine what this is doing, right? If you have a facelet which contains something like this:

<h:outputLink value="test.xhtml">Normal Test</h:outputLink>

the link that is rendered to the user will be rewritten to test.html. This is the most basic action for outbound links you will ever need. Most of the magic happens with inbound links – not a big surprise looking at the very limited reach of the encodeURL() hook.

The OutputBuffer

The most astonishing stuff in Rewrite is called OutputBuffer – at least until the release we are working with at the moment; it is going to be renamed in 2.0, but for now let’s simply look at what you can do. The OutputBuffer is your hook into the response. Whatever you would like to do with the response before it actually arrives at your client’s browser can be done here. Thinking about transforming the markup? Converting CSS? Or even GZIP compression? Great, that is exactly what you can do. Let’s implement a simple ZipOutputBuffer:

public class ZipOutputBuffer implements OutputBuffer {

    private final static Logger LOGGER = Logger.getLogger(ZipOutputBuffer.class.getName());

    @Override
    public InputStream execute(InputStream input) {
        String contents = Streams.toString(input);
        LOGGER.log(Level.FINER, "Content {0} Length {1}", new Object[]{contents, contents.getBytes().length});
        byte[] compressed = compress(contents);
        LOGGER.log(Level.FINER, "Length: {0}", compressed.length);
        return new ByteArrayInputStream(compressed);
    }

    public static byte[] compress(String string) {
        ByteArrayOutputStream os = new ByteArrayOutputStream(string.length());
        byte[] compressed = null;
        try {
            try (GZIPOutputStream gos = new GZIPOutputStream(os)) {
                gos.write(string.getBytes());
            }
            compressed = os.toByteArray();
            os.close();
        } catch (IOException iox) {
            LOGGER.log(Level.SEVERE, "Compression Failed: ", iox);
        }
        return compressed;
    }
}

As you can see, I am messing around with some streams and use java.util.zip.GZIPOutputStream to shrink the stream received in this method.
Next we have to add the relevant rule to the TricksProvider:

.defineRule()
    .when(Path.matches("/gziptest").and(Direction.isInbound()))
    .perform(Forward.to("test.xhtml")
        .and(Response.withOutputBufferedBy(new ZipOutputBuffer())
        .and(Response.addHeader("Content-Encoding", "gzip"))
        .and(Response.addHeader("Content-Type", "text/html"))))

An inbound rule (we are not willing to rewrite links in pages here, so it has to be inbound) which adds the ZipOutputBuffer to the response. Also take care of both additional response headers, unless you want to see your browser complaining about the mixed-up content :) That is it. The request http://host:8080/yourapp/gziptest now delivers test.xhtml with GZIP compression. That is 2.6 KB vs. 1.23 KB – less than half the size! It’s not very convenient to work with streams and byte[], and I am not sure how this will work with larger page sizes in terms of memory fragmentation, but it is an easy way out if you don’t have a compression filter in place or only need to compress single parts of your application.

Enhance Security with Rewrite

But that is not all you can do: you can also enhance security with Rewrite. Lincoln has a great post up about securing your application with Rewrite. There are plenty of possible examples around how to use this. I came up with a single use-case where I didn’t want to use the welcome-file features and preferred to dispatch users individually. While doing this I would also inspect their paths and check if the stuff they are entering is malicious or not. You can either do it with the .matches() condition or with a custom constraint. Add the following to the TricksProvider:

Constraint<String> selectedCharacters = new Constraint<String>() {
    @Override
    public boolean isSatisfiedBy(Rewrite event, EvaluationContext context, String value) {
        return value.matches("[a-zA-Z/]+");
    }
};

And define the following rule:

.defineRule()
    .when(Direction.isInbound()
        .and(Path.matches("{path}").where("path").matches("^(.+)/$")
        .and(Path.captureIn("checkChar").where("checkChar").constrainedBy(selectedCharacters))))
    .perform(Redirect.permanent(context.getContextPath() + "{path}index.html"))

Another inbound modification: checking whether the path has a folder pattern and capturing it in a variable which is checked against the custom constraint. Great! Now you have a safe and easy forwarding mechanism in place. All http://host:8080/yourapp/folder/ requests are now redirected to http://host:8080/yourapp/folder/index.html. If you look at the other rules from above you see that the .html is forwarded to .xhtml … and you are done!

Bottom Line

I like working with Rewrite a lot. It feels easier than configuring the XML files of PrettyFaces, and I truly enjoyed the support of Lincoln and Christian during my first steps with it. I am curious to see what 2.0 comes up with, and I hope for some more debug output for the rules configuration, just to see what is happening. The default is nothing, and it can be very tricky to find the right combination of conditions to get a working rule. Looking for the complete sources? Find them on GitHub. Happy to read about your experiences.

Where is the GlassFish Part?

Oh, yeah. I mentioned it in the headline, right? That should be more like a default. I was running everything with the latest GlassFish 3.1.2.2, so you can be sure that this is working. And NetBeans is at 7.2 at the moment – you should give it a try if you haven’t. I didn’t come across a single issue related to GlassFish, and I am very pleased to stress this here. Great work! One last remark: before you implement the OutputBuffer like crazy, take a look at what your favorite app server has in stock already. GlassFish knows about GZIP compression already and it can simply be switched on! Might be a good idea to think twice before implementing it here.

Reference: Rewrite to the edge – getting the most out of it! On GlassFish! from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog.

Customizing Spring Data JPA Repository

Spring Data is a very convenient library. However, as the project is quite new, it is not yet feature-rich. By default, Spring Data JPA provides the DAO implementation based on SimpleJpaRepository. In a recent project, I developed a customized repository base class so that I could add more features to it. You can add vendor-specific features to this repository base class as you like.

Configuration

You have to add the following configuration to your Spring beans configuration file, specifying a new repository factory class. We will develop that class later.

<jpa:repositories base-package='example.borislam.dao'
    factory-class='example.borislam.data.springData.DefaultRepositoryFactoryBean'/>

Just develop an interface extending JpaRepository. You should remember to annotate it with @NoRepositoryBean.

@NoRepositoryBean
public interface GenericRepository<T, ID extends Serializable> extends JpaRepository<T, ID> {
}

Define custom repository base implementation class

The next step is to develop the customized base repository class. You can see that I just added one property (i.e. springDataRepositoryInterface) to this customized base repository; I just want to get more control over the behaviour of the customized repository interface. I will show how to add more features to this base repository class in the next post.

@SuppressWarnings("unchecked")
@NoRepositoryBean
public class GenericRepositoryImpl<T, ID extends Serializable>
        extends SimpleJpaRepository<T, ID>
        implements GenericRepository<T, ID>, Serializable {

    private static final long serialVersionUID = 1L;

    static Logger logger = Logger.getLogger(GenericRepositoryImpl.class);

    private final JpaEntityInformation<T, ?> entityInformation;
    private final EntityManager em;
    private final DefaultPersistenceProvider provider;

    private Class<?> springDataRepositoryInterface;

    public Class<?> getSpringDataRepositoryInterface() {
        return springDataRepositoryInterface;
    }

    public void setSpringDataRepositoryInterface(Class<?> springDataRepositoryInterface) {
        this.springDataRepositoryInterface = springDataRepositoryInterface;
    }

    /**
     * Creates a new {@link SimpleJpaRepository} to manage objects of the given
     * {@link JpaEntityInformation}.
     *
     * @param entityInformation
     * @param entityManager
     */
    public GenericRepositoryImpl(JpaEntityInformation<T, ?> entityInformation,
            EntityManager entityManager, Class<?> springDataRepositoryInterface) {
        super(entityInformation, entityManager);
        this.entityInformation = entityInformation;
        this.em = entityManager;
        this.provider = DefaultPersistenceProvider.fromEntityManager(entityManager);
        this.springDataRepositoryInterface = springDataRepositoryInterface;
    }

    /**
     * Creates a new {@link SimpleJpaRepository} to manage objects of the given
     * domain type.
     *
     * @param domainClass
     * @param em
     */
    public GenericRepositoryImpl(Class<T> domainClass, EntityManager em) {
        this(JpaEntityInformationSupport.getMetadata(domainClass, em), em, null);
    }

    public <S extends T> S save(S entity) {
        if (this.entityInformation.isNew(entity)) {
            this.em.persist(entity);
            flush();
            return entity;
        }
        entity = this.em.merge(entity);
        flush();
        return entity;
    }

    public T saveWithoutFlush(T entity) {
        return super.save(entity);
    }

    public List<T> saveWithoutFlush(Iterable<? extends T> entities) {
        List<T> result = new ArrayList<T>();
        if (entities == null) {
            return result;
        }
        for (T entity : entities) {
            result.add(saveWithoutFlush(entity));
        }
        return result;
    }
}

As a simple example here, I just override the default save method of the SimpleJpaRepository.
The default behaviour of the save method is not to flush after persist; I modified it to flush after persist. On the other hand, I added another method called saveWithoutFlush() to allow developers to save the entity without flushing.

Define custom repository factory bean

The last step is to create a factory bean class and factory class to produce repositories based on your customized base repository class.

public class DefaultRepositoryFactoryBean<T extends JpaRepository<S, ID>, S, ID extends Serializable>
        extends JpaRepositoryFactoryBean<T, S, ID> {

    /**
     * Returns a {@link RepositoryFactorySupport}.
     *
     * @param entityManager
     * @return
     */
    protected RepositoryFactorySupport createRepositoryFactory(EntityManager entityManager) {
        return new DefaultRepositoryFactory(entityManager);
    }
}

/**
 * The purpose of this class is to override the default behaviour of the Spring
 * JpaRepositoryFactory class. It will produce a GenericRepositoryImpl object
 * instead of SimpleJpaRepository.
 */
public class DefaultRepositoryFactory extends JpaRepositoryFactory {

    private final EntityManager entityManager;
    private final QueryExtractor extractor;

    public DefaultRepositoryFactory(EntityManager entityManager) {
        super(entityManager);
        Assert.notNull(entityManager);
        this.entityManager = entityManager;
        this.extractor = DefaultPersistenceProvider.fromEntityManager(entityManager);
    }

    @SuppressWarnings({ "unchecked", "rawtypes" })
    protected <T, ID extends Serializable> JpaRepository<?, ?> getTargetRepository(
            RepositoryMetadata metadata, EntityManager entityManager) {

        Class<?> repositoryInterface = metadata.getRepositoryInterface();
        JpaEntityInformation<?, Serializable> entityInformation =
                getEntityInformation(metadata.getDomainType());

        if (isQueryDslExecutor(repositoryInterface)) {
            return new QueryDslJpaRepository(entityInformation, entityManager);
        } else {
            return new GenericRepositoryImpl(entityInformation, entityManager,
                    repositoryInterface); // custom implementation
        }
    }

    @Override
    protected Class<?> getRepositoryBaseClass(RepositoryMetadata metadata) {
        if (isQueryDslExecutor(metadata.getRepositoryInterface())) {
            return QueryDslJpaRepository.class;
        } else {
            return GenericRepositoryImpl.class;
        }
    }

    /**
     * Returns whether the given repository interface requires a QueryDsl-specific
     * implementation to be chosen.
     *
     * @param repositoryInterface
     * @return
     */
    private boolean isQueryDslExecutor(Class<?> repositoryInterface) {
        return QUERY_DSL_PRESENT
                && QueryDslPredicateExecutor.class.isAssignableFrom(repositoryInterface);
    }
}

Conclusion

You can now add more features to the base repository class. In your program, you can now create your own repository interface extending GenericRepository instead of JpaRepository.

public interface MyRepository<T, ID extends Serializable> extends GenericRepository<T, ID> {
    void someCustomMethod(ID id);
}

In the next post, I will show you how to add Hibernate filter features to this GenericRepository.

Reference: Customizing Spring Data JPA Repository from our JCG partner Boris Lam at the Programming Peacefully blog.
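Before moving on, a hypothetical usage sketch of my own (Customer, CustomerRepository and the service are made-up names, not from the original post) of what a concrete repository looks like once the factory bean is configured:

import java.util.List;

import javax.persistence.Entity;
import javax.persistence.Id;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// A trivial entity, only here to make the sketch self-contained.
@Entity
class Customer {
    @Id
    Long id;
    String name;
}

// A concrete repository only has to extend GenericRepository to pick up
// saveWithoutFlush() alongside the standard JpaRepository methods.
interface CustomerRepository extends GenericRepository<Customer, Long> {
}

@Service
class CustomerImportService {

    @Autowired
    private CustomerRepository repository;

    @Transactional
    public void importCustomers(List<Customer> customers) {
        // batch insert without a flush per entity; flushing happens at commit
        repository.saveWithoutFlush(customers);
    }
}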

5′ on IT-Architecture: the modern software architect

Before I start writing about this, let me clarify something right at the beginning: yes, of course there is a role of ‘software architect’ in any non-trivial software development project, even in times of agile projects, dynamic markets and vague terms like ‘emergence’. The simple reason for that is that emergence and democracy in teams only work within constraints. Though, it’s not always clever to assign somebody the role explicitly. In an ideal world, one developer in the team evolves into the architecture role.

When I started working as an IT professional at a *big* american software & IT consulting company I spent around five years with programming. After that time I got my first architecture job on a big project at a german automotive manufacturer. My main responsibility was to design the solution, advise developers, project managers and clients, and to organize the development process. I wrote many documents, but I didn’t code anymore. The result was that I lost expertise in my core business: programming. So after a while my assessments and gut instinct got worse, which resulted in worse decisions. As a side effect of generic (vague) talking, it got harder to gain acceptance from the developers, project managers or clients. When I realized all that, I decided to do more development again. Today, I have been doing architecture for 10 years, and I develop code in the IDE of my choice at least 20-30% of my time.

Activity profile

Whilst programming is a necessary activity, there is a whole bunch of activities that are sufficient to be successful as an architect. Doing architecture is a lot about collaboration, evaluating alternatives objectively (neutral and fair-minded) and about decision making. It’s a lot about communication, dealing with other individuals that almost always have their own opinions. Furthermore, it’s a lot about forming teams and designing the ideal development process around those teams to solve the concrete problem. Last but not least, it’s about designing (structuring) the solution in a way that all functional and non-functional requirements are well covered. You can do all that more or less without super-current technical knowledge. But I believe an architect can do better if he/she has technical expertise gathered by day-to-day coding business. In the long run you cannot be a technical architect without sufficient coding practice.

Figure 1: Activities of the software architect

Solving tradeoffs

When I worked as an architect I often found myself in difficult tradeoff situations: I wanted to improve one quality attribute, but to achieve that I needed to downgrade another. Here is a simple but very common example: it’s often desirable to have a highly changeable system with the best possible performance. However, these two attributes – performance and changeability – typically correlate negatively; when you want to increase changeability, you often lose efficiency. Doing architecture often means finding the golden mean between competing system qualities – choosing the right alternative that represents the best compromise. It’s about finding the balance between system qualities and the environmental factors of the system (e.g. stakeholders, requirements). The operations manager will focus on the efficiency of a new system, while the development manager will argue that it’s important to have a changeable system that generates little maintenance cost. The client wants to have a new system with the highest degree of business process automation possible. These situations consume a reasonable amount of time and energy.

Sharing knowledge and communication

Another supremely important activity: sharing knowledge in a team of technical experts and other stakeholders. The core problem of software development is to transform the fuzzy knowledge of domain experts into the merciless logical machine code of silly computers that only understand two digits: 0 and 1. This is a long way through the venturesome and endless jungle of human misunderstandings! Therefore, architects communicate a lot. They use models to do that. Models serve as a mapping mechanism between human brains and computers. The set of problems that can arise during the knowledge-to-binary transformation is very diverse. It’s impossible for every team member to know all of them. That’s another reason why sharing knowledge in a team is so important.

Nobody is perfect!

Needless to say that nobody is perfect. Every team is different and so is every concrete situation. So in one situation somebody may be the right architect for the team, while in other team set-ups that person doesn’t fit. An architect can also have different strengths. I know architects that communicate and socialize very well but don’t do so well in designing solutions or organizing the development process. Although they don’t master each individual skill, they’re all good architects. The common ground is that they were all down-to-earth developers.

Reference: 5′ on IT-Architecture: the modern software architect from our JCG partner Niklas.

Android Activity Animation Customization Tutorial

If you are thinking of customizing the animation of Activity transitions, then you would probably look at the ActivityOptions class introduced in Android 4.1 (Jelly Bean). This class provides three methods which can help you customize the Activity animation; they are described below.

ActivityOptions class methods:

- makeCustomAnimation – Allows you to pass custom animations; when the Activity is launched, it gets rendered accordingly. You can pass an animation for the transitioning-in Activity as well as for the transitioning-out Activity.
- makeScaleUpAnimation – Scales the Activity up from an initial size to its final representational size. It can be used to scale up the Activity from the view which launched it.
- makeThumbnailScaleUpAnimation – In this animation, a thumbnail of the Activity scales up to the final size of the Activity.
- toBundle – Returns a Bundle object which can be passed to the startActivity() method for the desired animation.

For more information on ActivityOptions you can refer here.

Project Information: Meta-data about the project.
Platform Version: Android API Level 16.
IDE: Eclipse Helios Service Release 2
Emulator: Android 4.1 (API 16)

Prerequisite: Preliminary knowledge of the Android application framework and Intents.

Sample Source Code: We create a project using Eclipse and then create an anim (animation) folder under the res (resource) folder. Now we will define the animation attributes in XML files and put them in the anim folder. Here we are going to define two animations, which will be used in the makeCustomAnimation() method. makeCustomAnimation() takes two animation files, one for the incoming Activity and another for the outgoing Activity. Either of the animations can be null, and in that case the animation will not be performed for that particular Activity.

Now we will define fade_in.xml for the incoming Activity. Here we are going to change the alpha value from 0 to 1, which takes the Activity from transparent to opaque.

<alpha xmlns:android='http://schemas.android.com/apk/res/android'
    android:interpolator='@android:anim/anticipate_interpolator'
    android:fromAlpha='0.0'
    android:toAlpha='1.0'
    android:duration='@android:integer/config_longAnimTime' />

Now we are going to define another file, called fade_out.xml, for the transitioning-out Activity. Here we will change the value of alpha from 1 to 0.

<alpha xmlns:android='http://schemas.android.com/apk/res/android'
    android:interpolator='@android:anim/anticipate_interpolator'
    android:fromAlpha='1.0'
    android:toAlpha='0.0'
    android:duration='@android:integer/config_longAnimTime' />

Now we are going to define the layout file for the main Activity. Name this file activity_main.xml. In this file we will add three buttons, one for each animation.
<LinearLayout xmlns:android='http://schemas.android.com/apk/res/android'
    xmlns:tools='http://schemas.android.com/tools'
    android:layout_width='match_parent'
    android:layout_height='match_parent'
    android:orientation='vertical' >

    <Button
        android:layout_width='match_parent'
        android:layout_height='wrap_content'
        android:onClick='fadeAnimation'
        android:text='@string/btFadeAnimation' />

    <Button
        android:layout_width='match_parent'
        android:layout_height='wrap_content'
        android:onClick='scaleupAnimation'
        android:text='@string/btScaleupAni' />

    <Button
        android:layout_width='match_parent'
        android:layout_height='wrap_content'
        android:onClick='thumbNailScaleAnimation'
        android:text='@string/btThumbNailScaleupAni' />

</LinearLayout>

As you may have noticed, we have already attached an onClick method to each button. These methods will animate the Activity when it is launched using the startActivity() method. Now let’s define another layout for the target Activity, with one ImageView. Put an image in the drawable folder and then use that image as the src for the ImageView. Here I have put the “freelance2.jpg” image in the drawable folder and have used the android:src tag to use the image. Name the layout file activity_animation.xml.

<RelativeLayout xmlns:android='http://schemas.android.com/apk/res/android'
    android:layout_width='match_parent'
    android:layout_height='match_parent'
    android:orientation='vertical' >

    <ImageView
        android:id='@+id/imageView1'
        android:layout_width='match_parent'
        android:layout_height='match_parent'
        android:layout_marginRight='44dp'
        android:layout_marginTop='54dp'
        android:layout_centerInParent='true'
        android:src='@drawable/freelancer2' />

</RelativeLayout>

Once this layout is defined, we need to define the corresponding Activity class. Let’s name this class AnimationActivity. The source code is as follows:

package com.example.jellybeananimationexample;

import android.app.Activity;
import android.os.Bundle;

public class AnimationActivity extends Activity {

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_animation);
    }
}

Now it’s time to define the MainActivity class, with the methods that customize the Activity animation.

package com.example.jellybeananimationexample;

import android.app.Activity;
import android.app.ActivityOptions;
import android.content.Intent;
import android.graphics.Bitmap;
import android.os.Bundle;
import android.view.View;

public class MainActivity extends Activity {

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
    }

    public void scaleupAnimation(View view) {
        // Create a scale-up animation that originates at the button being pressed.
        ActivityOptions opts = ActivityOptions.makeScaleUpAnimation(view, 0, 0,
                view.getWidth(), view.getHeight());
        // Request the activity be started, using the custom animation options.
        startActivity(new Intent(MainActivity.this, AnimationActivity.class), opts.toBundle());
    }

    public void thumbNailScaleAnimation(View view) {
        view.setDrawingCacheEnabled(true);
        view.setPressed(false);
        view.refreshDrawableState();
        Bitmap bitmap = view.getDrawingCache();
        ActivityOptions opts = ActivityOptions.makeThumbnailScaleUpAnimation(view, bitmap, 0, 0);
        // Request the activity be started, using the custom animation options.
        startActivity(new Intent(MainActivity.this, AnimationActivity.class), opts.toBundle());
        view.setDrawingCacheEnabled(false);
    }

    public void fadeAnimation(View view) {
        ActivityOptions opts = ActivityOptions.makeCustomAnimation(MainActivity.this,
                R.anim.fade_in, R.anim.fade_out);
        // Request the activity be started, using the custom animation options.
        startActivity(new Intent(MainActivity.this, AnimationActivity.class), opts.toBundle());
    }
}

Once you are done with the code, execute it. On clicking the application buttons, you will see the customized Activity animation. You can get the updated Android animation source code from here. For an Android tutorial visit here.

Reference: Tutorial on customization of Android Activity Animation from our JCG partner Rakesh Cusat at the Code4Reference blog.
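One practical addendum of my own, not part of the original tutorial: ActivityOptions only exists from API level 16, so if your app also targets older devices you need a version guard, falling back to the pre-Jelly-Bean overridePendingTransition() API. A minimal sketch for the fade case:

import android.app.ActivityOptions;
import android.content.Intent;
import android.os.Build;
import android.view.View;

// Added to MainActivity as a sketch: same fade transition, but guarded so the
// code also runs on devices below API 16, where ActivityOptions is missing.
public void fadeAnimationCompat(View view) {
    Intent intent = new Intent(MainActivity.this, AnimationActivity.class);
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN) {
        ActivityOptions opts = ActivityOptions.makeCustomAnimation(
                MainActivity.this, R.anim.fade_in, R.anim.fade_out);
        startActivity(intent, opts.toBundle());
    } else {
        startActivity(intent);
        // older fallback API, available since API level 5
        overridePendingTransition(R.anim.fade_in, R.anim.fade_out);
    }
}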

How to install Gradle

Gradle is a simple yet powerful build tool. It is similar to the Ant build tool. It manages the build well and also handles build dependencies. The best part of Gradle is that it is an open source project. If you are thinking about installing it and giving it a try, then you are at the right place. The Gradle development cycle is four weeks, so every four weeks a new version of Gradle is rolled out. Here, I am assuming that you are going to install Gradle on a Linux/Ubuntu machine.

Gradle setup steps

1. Download Gradle from here. The Gradle download comes in three different flavors:
   - Binaries, documentation and source code
   - Binaries only
   - Source code only
   The first one is recommended since it comes with documentation and source code. If you are not interested in documentation and source files, you can download the binaries-only package.

2. Unzip the downloaded file:

$ unzip gradle-[version]-[type].zip

Here type can be all, bin, or src, depending on the downloaded flavor.

3. The bin directory has to be on the system path; only then can you execute the gradle command. This can be done either by executing the command given below or by editing .bashrc to set the PATH variable.

# modify the directory path according to the setup.
$ export PATH=$PATH:/gradle-directory-path/gradle-directory/bin

4. Now execute the following command in a terminal:

$ gradle
# you will see the output below.
:help

Welcome to Gradle 1.1.

To run a build, run gradle ...
To see a list of available tasks, run gradle tasks
To see a list of command-line options, run gradle --help

BUILD SUCCESSFUL

Total time: 2.607 secs

If you don’t see the above output, then you need to recheck the PATH variable.

Hello world!! in Gradle

After installing Gradle, let’s try a simple Gradle file. Create a file named build.gradle and copy the code given below into this file.

task("hello") {
    println "Hello world!!"
}

The way we define targets in a Makefile, we similarly define tasks in a Gradle script. In the above code we have created a simple task called hello which prints Hello world!!. To execute this script, just run the command below in a terminal in the same directory.

$ gradle hello
# you will get similar output as shown below.

Hello world!!
:hello UP-TO-DATE

BUILD SUCCESSFUL

The gradle command looks for the build.gradle file in the current directory and executes the specified task(s), similar to the make command, which looks for a Makefile in the current directory and executes the specified target(s).

Reference: how to install gradle from our JCG partner Rakesh Cusat at the Code4Reference blog.

Java Enums: You have grace, elegance and power and this is what I Love!

While Java 8 is coming, are you sure you know well the enums that were introduced in Java 5? Java enums are still underestimated, and it’s a pity since they are more useful than you might think, they’re not just for your usual enumerated constants! Java enum is polymorphic Java enums are real classes that can have behavior and even data. Let’s represent the Rock-Paper-Scissors game using an enum with a single method. Here are the unit tests to define the behavior: @Test public void paper_beats_rock() { assertThat(PAPER.beats(ROCK)).isTrue(); assertThat(ROCK.beats(PAPER)).isFalse(); } @Test public void scissors_beats_paper() { assertThat(SCISSORS.beats(PAPER)).isTrue(); assertThat(PAPER.beats(SCISSORS)).isFalse(); } @Test public void rock_beats_scissors() { assertThat(ROCK.beats(SCISSORS)).isTrue(); assertThat(SCISSORS.beats(ROCK)).isFalse(); } And here is the implementation of the enum, that primarily relies on the ordinal integer of each enum constant, such as the item N+1 wins over the item N. This equivalence between the enum constants and the integers is quite handy in many cases. /** Enums have behavior! */ public enum Gesture { ROCK() { // Enums are polymorphic, that's really handy! @Override public boolean beats(Gesture other) { return other == SCISSORS; } }, PAPER, SCISSORS;// we can implement with the integer representation public boolean beats(Gesture other) { return ordinal() - other.ordinal() == 1; } } Notice that there is not a single IF statement anywhere, all the business logic is handled by the integer logic and by the polymorphism, where we override the method for the ROCK case. If the ordering between the items was not cyclic we could implement it just using the natural ordering of the enum, here the polymorphism helps deal with the cycle.You can do it without any IF statement! Yes you can!This Java enum is also a perfect example that you can have your cake (offer a nice object-oriented API with intent-revealing names), and eat it too (implement with simple and efficient integer logic like in the good ol’ days). Over my last projects I’ve used a lot enums as a substitute for classes: they are guaranted to be singleton, have ordering, hashcode, equals and serialization to and from text all built-in, without any clutter in the source code. If you’re looking for Value Objects and if you can represent a part of your domain with a limited set of instances, then the enum is what you need! It’s a bit like the Sealed Case Class in Scala, except it’s totally restricted to a set of instances all defined at compile time. The bounded set of instances at compile-time is a real limitation, but now with continuous delivery, you can probably wait for the next release if you really need one extra case.   Well-suited for the Strategy pattern Let’s move to to a system for the (in-)famous Eurovision song contest; we want to be able to configure the behavior on when to notify (or not) users of any new Eurovision event. It’s important. Let’s do that with an enum: /** The policy on how to notify the user of any Eurovision song contest event */ public enum EurovisionNotification {/** I love Eurovision, don't want to miss it, never! 
Well-suited for the Strategy pattern

Let's move on to a system for the (in-)famous Eurovision song contest; we want to be able to configure the behavior of when to notify (or not) users of any new Eurovision event. It's important. Let's do that with an enum:

/** The policy on how to notify the user of any Eurovision song contest event */
public enum EurovisionNotification {

    /** I love Eurovision, don't want to miss it, never! */
    ALWAYS() {
        @Override
        public boolean mustNotify(String eventCity, String userCity) {
            return true;
        }
    },

    /**
     * I only want to know about Eurovision if it takes place in my city, so
     * that I can take holidays elsewhere at the same time
     */
    ONLY_IF_IN_MY_CITY() {
        // a case of the flyweight pattern, since we pass all the extrinsic data as
        // arguments instead of storing it as member data
        @Override
        public boolean mustNotify(String eventCity, String userCity) {
            return eventCity.equalsIgnoreCase(userCity);
        }
    },

    /** I don't care, I don't want to know */
    NEVER() {
        @Override
        public boolean mustNotify(String eventCity, String userCity) {
            return false;
        }
    };

    // no default behavior
    public abstract boolean mustNotify(String eventCity, String userCity);
}

And a unit test for the non-trivial case, ONLY_IF_IN_MY_CITY:

@Test
public void notify_users_in_Baku_only() {
    assertThat(ONLY_IF_IN_MY_CITY.mustNotify("Baku", "BAKU")).isTrue();
    assertThat(ONLY_IF_IN_MY_CITY.mustNotify("Baku", "Paris")).isFalse();
}

Here we declare the method abstract and implement it for each case. An alternative would be to implement a default behavior and only override it for the cases where it makes sense, just like in the Rock-Paper-Scissors game.

Again we don't need a switch on the enum to choose the behavior; we rely on polymorphism instead. You probably don't need to switch on enums much, except for dependency reasons. For example, when the enum is part of a message sent to the outside world, as in Data Transfer Objects (DTOs), you do not want any dependency on your internal code in the enum or its signature.

For the Eurovision strategy, using TDD we could start with a simple boolean for the cases ALWAYS and NEVER. It would then be promoted into the enum as soon as we introduce the third strategy, ONLY_IF_IN_MY_CITY. Promoting primitives is also in the spirit of the 7th rule, « Wrap all primitives », from the Object Calisthenics, and an enum is the perfect way to wrap a boolean or an integer with a bounded set of possible values. Because the Strategy pattern is often controlled by configuration, the built-in serialization to and from String is also very convenient for storing your settings.

Perfect match for the State pattern

Just like the Strategy pattern, the Java enum is very well-suited for finite state machines, where by definition the set of possible states is finite.

A baby as a finite state machine (picture from www.alongcamebaby.ca)

Let's take the example of a baby, simplified as a state machine, and make it an enum:

/**
 * The primary baby states (simplified)
 */
public enum BabyState {

    POOP(null),
    SLEEP(POOP),
    EAT(SLEEP),
    CRY(EAT);

    private final BabyState next;

    private BabyState(BabyState next) {
        this.next = next;
    }

    public BabyState next(boolean discomfort) {
        if (discomfort) {
            return CRY;
        }
        return next == null ? EAT : next;
    }
}

And of course some unit tests to drive the behavior:

@Test
public void eat_then_sleep_then_poop_and_repeat() {
    assertThat(EAT.next(NO_DISCOMFORT)).isEqualTo(SLEEP);
    assertThat(SLEEP.next(NO_DISCOMFORT)).isEqualTo(POOP);
    assertThat(POOP.next(NO_DISCOMFORT)).isEqualTo(EAT);
}

@Test
public void if_discomfort_then_cry_then_eat() {
    assertThat(SLEEP.next(DISCOMFORT)).isEqualTo(CRY);
    assertThat(CRY.next(NO_DISCOMFORT)).isEqualTo(EAT);
}

Yes, we can reference enum constants from each other, with the restriction that only constants defined earlier can be referenced. Here we have a cycle between the states EAT -> SLEEP -> POOP -> EAT etc., so we need to open the cycle at declaration time and close it with a workaround at runtime, as the short walk below illustrates.
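To make that runtime workaround concrete, here is a minimal sketch (not from the original article) walking the closed cycle; it assumes the BabyState enum above is in the same package, and plain booleans stand in for the DISCOMFORT/NO_DISCOMFORT constants used in the tests:

public class BabyStateDemo {
    public static void main(String[] args) {
        BabyState state = BabyState.EAT;
        state = state.next(false);   // SLEEP: the next state wired in at declaration
        state = state.next(false);   // POOP
        state = state.next(false);   // next was null for POOP, so next() returns EAT
        System.out.println(state);   // EAT: the cycle is closed at runtime
        state = state.next(true);    // discomfort wins from any state
        System.out.println(state);   // CRY
    }
}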
We indeed have a graph, with the CRY state accessible from every other state. I've already used enums to represent simple trees by categories, simply by referencing in each node its child elements, all with enum constants.

Enum-optimized collections

Enums also have the benefit of coming with their own dedicated implementations for Map and Set: EnumMap and EnumSet. These collections have the same interfaces and behave just like your regular collections, but internally they exploit the integer nature of the enums as an optimization. In short, you get old C-style data structures and idioms (bit masking and the like) hidden behind an elegant interface. This also demonstrates that you don't have to compromise your API for the sake of efficiency!

To illustrate the use of these dedicated collections, let's represent the 7 cards in Jurgen Appelo's Delegation Poker:

public enum AuthorityLevel {

    /** make decision as the manager */
    TELL,

    /** convince people about decision */
    SELL,

    /** get input from team before decision */
    CONSULT,

    /** make decision together with team */
    AGREE,

    /** influence decision made by the team */
    ADVISE,

    /** ask feedback after decision by team */
    INQUIRE,

    /** no influence, let team work it out */
    DELEGATE;

There are 7 cards; the first 3 are more control-oriented, the middle card is balanced, and the last 3 are more delegation-oriented (I made that interpretation up; please refer to his book for explanations). In Delegation Poker, every player selects a card for a given situation and earns as many points as the card value (from 1 to 7), except the players in the « highest minority ». It's trivial to compute the number of points using the ordinal value + 1. It is also straightforward to select the control-oriented cards by their ordinal value, or we can use a Set built from a range, as we do below to select the delegation-oriented cards:

    public int numberOfPoints() {
        return ordinal() + 1;
    }

    // it's OK to use the internal ordinal integer for the implementation
    public boolean isControlOriented() {
        return ordinal() < AGREE.ordinal();
    }

    // EnumSet is a Set implementation that benefits from the integer-like
    // nature of the enums
    public static Set<AuthorityLevel> DELEGATION_LEVELS = EnumSet.range(ADVISE, DELEGATE);

    // enums are comparable, hence the usual benefits
    public static AuthorityLevel highest(List<AuthorityLevel> levels) {
        return Collections.max(levels);
    }
}

EnumSet offers convenient static factory methods like range(from, to) to create a set that includes every enum constant between ADVISE and DELEGATE in our example, in declaration order. To compute the highest minority we start by finding the highest card, which is nothing but finding the max — trivial, since the enum is always comparable.

Whenever we need to use this enum as a key in a Map, we should use an EnumMap, as illustrated in the test below:

// using an EnumMap to represent the votes by authority level
@Test
public void votes_with_a_clear_majority() {
    final Map<AuthorityLevel, Integer> votes = new EnumMap<AuthorityLevel, Integer>(AuthorityLevel.class);
    votes.put(SELL, 1);
    votes.put(ADVISE, 3);
    votes.put(INQUIRE, 2);
    assertThat(votes.get(ADVISE)).isEqualTo(3);
}

Java enums are good, eat them!

I love Java enums: they're just perfect for Value Objects in the Domain-Driven Design sense, where the set of every possible value is bounded. In a recent project I deliberately managed to have a majority of value types expressed as enums. You get a lot of awesomeness for free, and with almost no technical noise; the short sketch below spells out some of what you get.
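A minimal sketch (not from the original article) of the machinery every enum inherits, reusing the Gesture enum from the first example and assuming it is in the same package:

public class EnumFreebiesDemo {
    public static void main(String[] args) {
        // String round-trip: name() and valueOf() come for free
        Gesture g = Gesture.valueOf("PAPER");
        System.out.println(g.name());                               // "PAPER"

        // singleton-ness: each constant is a guaranteed singleton,
        // so identity comparison is safe
        System.out.println(g == Gesture.PAPER);                     // true

        // Comparable by declaration order, for free
        System.out.println(Gesture.ROCK.compareTo(Gesture.PAPER) < 0); // true

        // compact integer representation, handy for serialization
        System.out.println(g.ordinal());                            // 1
    }
}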
Using enums this way also improves my signal-to-noise ratio between the words from the domain and the technical jargon. Of course I make sure each enum constant is also immutable, and I get the correct equals, hashCode, toString, String or integer serialization, singleton-ness and very efficient collections on them for free, all with very little code.

(picture from sys-con.com – Jim Barnabee article)

The power of polymorphism

Enum polymorphism is very handy: I never use instanceof on enums, and I hardly ever need to switch on an enum either. I'd love for the Java enum to be complemented by a similar construct, just like the case class in Scala, for when the set of possible values cannot be bounded. And a way to enforce immutability of any class would be nice too. Am I asking too much? Also, <troll>don't even try to compare the Java enum with the C# enum…</troll>

Reference: Java Enums: You have grace, elegance and power and this is what I Love! from our JCG partner Cyrille Martraire at the Cyrille Martraire blog.