
Avoiding Null Checks In Java

One of the worst nightmares for Java developers (from juniors to experts) is null object reference checking. I'm pretty sure you have seen code like this several times:

public void addAddressToCustomer(Customer customer, Address newAddress){
    if (customer == null || newAddress == null)
        return;
    if (customer.getAddresses() == null){
        customer.setAddresses(new ArrayList<>());
    }
    customer.addAddress(newAddress);
}

Personally, I hate writing null-check code. In this post I will list some of the things that have worked well for me, based on my personal experience in production environments and systems.

Stop checking for null objects in all layers. Limit the checks to the upper layers only, such as the UI layer, the presentation layer or the layer of your API controllers. In other words, ensure that no null objects are passed from the upper layers to the business logic layer. For instance, if you are developing a standard web application using Spring annotations, then you probably have some classes annotated with @Repository, @Service and @Controller. Controllers are responsible for receiving client data and passing it to the @Service classes for processing. It's their responsibility to ensure that NO null objects are passed to the service layer. Service classes and below should not be null-safe. If they're invoked with nulls they should throw an NPE to warn developers that they should fix this error. Remember that an NPE is not a user error but a developer error, and it should always be avoided. Do the same with user input that comes from HTML forms or other user interface inputs.

If you follow the above approach you won't be tempted anymore to write business logic based on whether an object is null or not. Null objects should not be used to decide the behavior of your system. They are exceptional values and should be treated like errors, not like valid business logic state.

When returning a list from a method, always return an empty list instead of null. This allows clients to iterate the list without checking for nulls. Iterating an empty list is perfectly acceptable and will just do nothing, whereas iterating a null list will throw an NPE.

In your persistence layer, when you search for a specific object (i.e. using its identifier) and it's not found, a very common approach is to return a null object. This, however, forces all clients to manually check for the null case. Here you have two alternatives: either throw a run-time exception (i.e. ObjectNotFoundException) or return an empty object. I have used both of these options and my suggestion is to evaluate them depending on the overall architecture of your system and the tools/frameworks used.

When comparing strings, always put first the string that is less likely to be null, so instead of:

customer.getAddress().getStreet().equals("Times Square")

prefer:

"Times Square".equals(customer.getAddress().getStreet())

If you're using or planning to use Java 8, then the new Optional class is here for you. Check this article that clearly explains the use of Optional.
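To give a flavour of that Optional approach, here is a minimal, self-contained sketch; the trimmed-down Customer and Address classes are stand-ins invented for the example, not code from the original post:

import java.util.Optional;

public class OptionalExample {

    // Minimal stand-ins for real domain classes.
    static class Address {
        private final String street;
        Address(String street) { this.street = street; }
        String getStreet() { return street; }
    }

    static class Customer {
        private final Address address;
        Customer(Address address) { this.address = address; }
        Address getAddress() { return address; }
    }

    static String streetOf(Customer customer) {
        // Chain the lookups without a single explicit null check; a null
        // anywhere in the chain simply yields the fallback value.
        return Optional.ofNullable(customer)
                .map(Customer::getAddress)
                .map(Address::getStreet)
                .orElse("unknown");
    }

    public static void main(String[] args) {
        System.out.println(streetOf(null));                                      // unknown
        System.out.println(streetOf(new Customer(new Address("Times Square")))); // Times Square
    }
}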
Next time you're about to write some null-check code, think for a moment and decide whether it's redundant or not.

Reference: Avoiding Null Checks In Java from our JCG partner Patroklos Papapetrou at the Only Software matters blog.

Avoiding Many If Blocks For Validation Checking

There are cases where we want to validate input data before we send it to the business logic layer for processing, computations etc. This validation, in most cases, is done in isolation, or it might include some cross-checking with external data or other inputs. Take a look at the following example that validates user input for registration data:

public void register(String email, String name, int age) {
    String EMAIL_PATTERN = "^[_A-Za-z0-9-\\+]+(\\.[_A-Za-z0-9-]+)*@" +
            "[A-Za-z0-9-]+(\\.[A-Za-z0-9]+)*(\\.[A-Za-z]{2,})$";
    Pattern pattern = Pattern.compile(EMAIL_PATTERN);
    List<String> forbiddenDomains = Arrays.asList("domain1", "domain2");

    if (email == null || email.trim().equals("")) {
        throw new IllegalArgumentException("Email should not be empty!");
    }
    if (!pattern.matcher(email).matches()) {
        throw new IllegalArgumentException("Email is not a valid email!");
    }
    // Compare the domain part (after the '@'), not the whole address
    if (forbiddenDomains.contains(email.substring(email.indexOf('@') + 1))) {
        throw new IllegalArgumentException("Email belongs to a forbidden domain");
    }
    if (name == null || name.trim().equals("")) {
        throw new IllegalArgumentException("Name should not be empty!");
    }
    if (!name.matches("[a-zA-Z]+")) {
        throw new IllegalArgumentException("Name should contain only characters");
    }
    if (age <= 18) {
        throw new IllegalArgumentException("Age should be greater than 18");
    }
    // More code to do the actual registration
}

The cyclomatic complexity of this method is really high, and it might get worse if there are more fields to validate or if we add the actual business logic. Of course we can split the code into two private methods (validate, doRegister), but the problem of the many if blocks would just move into the private methods. Besides, this method is doing more than one thing and is hard to test. When I ask junior developers to refactor this code and make it more readable, testable and maintainable, they look at me as if I were an alien: "How am I supposed to make it simpler? How can I replace these if blocks?" Well, here's a solution that works fine, honors the Single Responsibility Principle and makes the code easier to read. To better understand the solution, think of each of these if blocks as a validation rule. Now it's time to model these rules. First, create an interface with one method; in Java 8 terms this is called a functional interface:

public interface RegistrationRule {
    void validate();
}

Now it's time to transform each validation check into a registration rule. But before we do that, we need to address a small issue. Our interface implementations should be able to handle registration data, but as you can see we have different types of data. So what we need here is to encapsulate the registration data in a single object, like this:

public class RegistrationData {
    private String name;
    private String email;
    private int age;
    // Setters - Getters to follow
}

Now we can improve our functional interface:

public interface RegistrationRule {
    void validate(RegistrationData regData);
}

and start writing our rule set. For instance, let's implement the email validation:

public class EmailValidationRule implements RegistrationRule {

    private static final String EMAIL_PATTERN = "^[_A-Za-z0-9-\\+]+(\\.[_A-Za-z0-9-]+)*@" +
            "[A-Za-z0-9-]+(\\.[A-Za-z0-9]+)*(\\.[A-Za-z]{2,})$";
    private final Pattern pattern = Pattern.compile(EMAIL_PATTERN);

    @Override
    public void validate(RegistrationData regData) {
        if (!pattern.matcher(regData.getEmail()).matches()) {
            throw new IllegalArgumentException("Email is not a valid email!");
        }
    }
}

It's clear that we have isolated the email validation in the above class.
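The original post only shows the email rule; as another illustration of the same contract, here is a hypothetical sketch of the age check turned into a rule (the getter on RegistrationData is assumed):

public class MinimumAgeRule implements RegistrationRule {

    private static final int MINIMUM_AGE = 18;

    @Override
    public void validate(RegistrationData regData) {
        // Mirrors the age <= 18 branch of the original register method.
        if (regData.getAge() <= MINIMUM_AGE) {
            throw new IllegalArgumentException("Age should be greater than " + MINIMUM_AGE);
        }
    }
}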
We can do the same for all the rules of our initial implementation. Now we can re-write our register method to use the validation rules:

List<RegistrationRule> rules = new ArrayList<>();
rules.add(new EmailValidationRule());
rules.add(new EmailEmptinessRule());
rules.add(new ForbiddenEmailDomainsRule());
rules.add(new NameEmptinessRule());
rules.add(new AlphabeticNameRule());

for (RegistrationRule rule : rules) {
    rule.validate(regData);
}

To make it even better, we can create a Rules class using the Factory pattern with a static method get() that returns the list of rules. Our final implementation will look like this:

for (RegistrationRule rule : Rules.get()) {
    rule.validate(regData);
}

Comparing the initial version of our register method to the final one leaves no room for doubt. Our new version is more compact, more readable and of course more testable. The actual checks have been moved to separate classes (which are also easy to test) and all methods do only one thing (try to always keep that in mind).

Reference: Avoiding Many If Blocks For Validation Checking from our JCG partner Patroklos Papapetrou at the Only Software matters blog.

Spring 4.1 and Java 8: java.util.Optional

As of Spring 4.1, Java 8's java.util.Optional, a container object which may or may not contain a non-null value, is supported with @RequestParam, @RequestHeader and @MatrixVariable. Using Java 8's java.util.Optional, you make sure your parameters are never null.

Request Params

In this example we will bind java.time.LocalDate as java.util.Optional using @RequestParam:

@RestController
@RequestMapping("o")
public class SampleController {

    @RequestMapping(value = "r", produces = "text/plain")
    public String requestParamAsOptional(
            @DateTimeFormat(iso = DateTimeFormat.ISO.DATE)
            @RequestParam(value = "ld") Optional<LocalDate> localDate) {

        StringBuilder result = new StringBuilder("ld: ");
        localDate.ifPresent(value -> result.append(value.toString()));
        return result.toString();
    }
}

Prior to Spring 4.1, we would get an exception that no matching editors or conversion strategy was found. As of Spring 4.1, this is no longer an issue. To verify that the binding works properly, we may create a simple integration test:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
@WebAppConfiguration
public class SampleSomeControllerTest {

    @Autowired
    private WebApplicationContext wac;

    private MockMvc mockMvc;

    @Before
    public void setUp() throws Exception {
        mockMvc = MockMvcBuilders.webAppContextSetup(wac).build();
    }

    // ...
}

In the first test, we will check if the binding works properly and if the proper result is returned:

@Test
public void bindsNonNullLocalDateAsRequestParam() throws Exception {
    mockMvc.perform(get("/o/r").param("ld", "2020-01-01"))
            .andExpect(content().string("ld: 2020-01-01"));
}

In the next test, we will not pass the ld parameter:

@Test
public void bindsNoLocalDateAsRequestParam() throws Exception {
    mockMvc.perform(get("/o/r"))
            .andExpect(content().string("ld: "));
}

Both tests should be green!
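A pleasant side effect of Optional parameters is how naturally default values fall out of them. The following handler is an illustrative addition, not part of the original sample:

@RequestMapping(value = "d", produces = "text/plain")
public String requestParamWithDefault(
        @RequestParam(value = "name") Optional<String> name) {
    // orElse supplies the fallback whenever the parameter is absent.
    return "Hello, " + name.orElse("anonymous");
}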
Request Headers

Similarly, we can bind @RequestHeader to java.util.Optional:

@RequestMapping(value = "h", produces = "text/plain")
public String requestHeaderAsOptional(
        @RequestHeader(value = "Custom-Header") Optional<String> header) {

    StringBuilder result = new StringBuilder("Custom-Header: ");
    header.ifPresent(value -> result.append(value));
    return result.toString();
}

And the tests:

@Test
public void bindsNonNullCustomHeader() throws Exception {
    mockMvc.perform(get("/o/h").header("Custom-Header", "Value"))
            .andExpect(content().string("Custom-Header: Value"));
}

@Test
public void noCustomHeaderGiven() throws Exception {
    mockMvc.perform(get("/o/h").header("Custom-Header", ""))
            .andExpect(content().string("Custom-Header: "));
}

Matrix Variables

Introduced in Spring 3.2, the @MatrixVariable annotation indicates that a method parameter should be bound to a name-value pair within a path segment:

@RequestMapping(value = "m/{id}", produces = "text/plain")
public String execute(@PathVariable Integer id,
        @MatrixVariable Optional<Integer> p,
        @MatrixVariable Optional<Integer> q) {

    StringBuilder result = new StringBuilder();
    result.append("p: ");
    p.ifPresent(value -> result.append(value));
    result.append(", q: ");
    q.ifPresent(value -> result.append(value));
    return result.toString();
}

The above method can be called via the /o/m/42;p=4;q=2 url. Let's create a test for that:

@Test
public void bindsNonNullMatrixVariables() throws Exception {
    mockMvc.perform(get("/o/m/42;p=4;q=2"))
            .andExpect(content().string("p: 4, q: 2"));
}

Unfortunately, the test will fail, because support for the @MatrixVariable annotation is disabled by default in Spring MVC. In order to enable it, we need to tweak the configuration and set the removeSemicolonContent property of RequestMappingHandlerMapping to false. By default it is set to true. I have done this with a WebMvcConfigurerAdapter, like below:

@Configuration
public class WebMvcConfig extends WebMvcConfigurerAdapter {

    @Override
    public void configurePathMatch(PathMatchConfigurer configurer) {
        UrlPathHelper urlPathHelper = new UrlPathHelper();
        urlPathHelper.setRemoveSemicolonContent(false);
        configurer.setUrlPathHelper(urlPathHelper);
    }
}

And now all tests pass! Please find the source code for this article here: https://github.com/kolorobot/spring41-samples

Reference: Spring 4.1 and Java 8: java.util.Optional from our JCG partner Rafal Borowiec at the Codeleak.pl blog.

How designing for the cloud would improve your service implementation

Cloud platforms come with a variety of options and constraints, certainly driven by infrastructure and business needs, which in turn shape their pricing plans and customer adoption. But often they also have an implicit value: designing with certain constraints in mind facilitates scaling and replication, provides performance gains and increases revenue. Are constraints such as request timeouts or limits on outbound traffic and datastore usage so bad after all? When designing for your own private hosting, you might skip or ignore a few of them and miss an opportunity to improve your service design. Here are a few points worth considering:

Design for monetization: no matter which PaaS you choose, it will always have constraints on network traffic, datastore connections, datastore space and so on, and that's reasonable and obvious: they need to make a profit after all. When designing for the cloud you definitely need to face these constraints and try to minimize your expenses (review your data model, restrict required data, limit outbound traffic, add cache mechanisms and so on), because you certainly want to increase your monthly revenue as much as possible. Designing for your private hosting, you would perhaps not have taken some of these parameters so seriously, only to realize their importance later on.

Design for performance: as part of the previous point, you might need to adapt your design to fit some cloud constraints and obtain a considerable performance gain which isn't directly linked to monetization (though it would impact it somehow after all). You need to limit your outbound traffic, your request time-out, the number of queries on your database, any long running process and so on: a bit of healthy pressure on these subjects would definitely not hurt your design.

Design for latest technologies and standards: PaaS providers usually don't support all existing technologies, nor all frameworks/tools per technology, but they normally offer the most common ones, well known and widely used, because they obviously need to target a large community and facilitate your deployment (and their profit). These constraints may change your decisions concerning build management, for instance: giving up on your well proven Ant script and going for Maven on Heroku, or finally abandoning a certain old-but-known library in favour of a more modern and effective one. That might have an impact on your roadmap because of unexpected learning curves, and you would not have faced this question on your comfortable private hosting, but it might be time to upgrade your competences while working on your project; sooner or later you will appreciate that. It could also ease the integration of new team members in the future.

Design for abstraction: the majority of PaaS offerings would also introduce some platform/API dependency which would make your application cloud-platform dependent, something you should definitely try to avoid. Adding further layers of abstraction may solve the issue: you may use Memcache or Bigtable on GAE, for instance, but your service shouldn't know that if you really want to keep smooth portability. It's an extra effort indeed, which may affect the decision of choosing one target PaaS rather than another, but it's worth it in case of future moves. You would probably not spend time and energy on this on your more open remote or private hosting, but you may end up later in complex refactorings, wondering why a simple and clean abstraction wasn't part of your initial design. A sketch of what such an abstraction might look like follows below.
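All names in the following sketch are invented for the example: business code depends only on the CacheService interface, so swapping GAE's Memcache for another provider touches a single adapter class.

public interface CacheService {
    void put(String key, Object value);
    Object get(String key);
}

// In-memory stand-in used here so the sketch stays self-contained; on GAE
// an equivalent adapter would delegate to the Memcache API instead.
class InMemoryCacheService implements CacheService {

    private final java.util.concurrent.ConcurrentHashMap<String, Object> store =
            new java.util.concurrent.ConcurrentHashMap<>();

    @Override
    public void put(String key, Object value) {
        store.put(key, value);
    }

    @Override
    public Object get(String key) {
        return store.get(key);
    }
}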
Conclusion

While you might reasonably ignore cloud constraints when designing for your private hosting, you could gain quick added value just by keeping a few of these constraints in mind and challenging your design against them: how would your design react? Would it be easy to deploy your application elsewhere? Would your monetization improve by adapting your design to any of them? The exercise is worth the effort as long as it doesn't feel completely artificial and you do see some potential added value. There are plenty of non-functional requirements you might otherwise have left out, and possibly you might also consider using a cloud platform for your final deployment. Pick two or three of the most widely used ones (AWS, GAE, OpenShift, Heroku and CloudBees are worth mentioning, but they are not the only ones) and challenge your design with a hypothetical deployment.

Reference: How designing for the cloud would improve your service implementation from our JCG partner Antonio Di Matteo at the Refactoring Ideas blog.

A beginner’s guide to JPA/Hibernate entity state transitions

Introduction

Hibernate shifts the developer mindset from SQL statements to entity state transitions. Once an entity is actively managed by Hibernate, all changes are automatically propagated to the database. Manipulating domain model entities (along with their associations) is much easier than writing and maintaining SQL statements. Without an ORM tool, adding a new column requires modifying all associated INSERT/UPDATE statements. But Hibernate is no silver bullet either. Hibernate doesn't free us from ever worrying about the actual executed SQL statements. Controlling Hibernate is not as straightforward as one might think, and it's mandatory to check all SQL statements Hibernate executes on our behalf.

The entity states

As I previously mentioned, Hibernate monitors currently attached entities. But for an entity to become managed, it must be in the right entity state. First we must define all entity states:

New (Transient): A newly created object that has never been associated with a Hibernate Session (a.k.a. Persistence Context) and is not mapped to any database table row is considered to be in the New (Transient) state. To become persisted we need to either explicitly call the EntityManager#persist method or make use of the transitive persistence mechanism.

Persistent (Managed): A persistent entity has been associated with a database table row and is being managed by the currently running Persistence Context. Any change made to such an entity is going to be detected and propagated to the database (during the Session flush-time). With Hibernate, we no longer have to execute INSERT/UPDATE/DELETE statements. Hibernate employs a "transactional write-behind" working style, and changes are synchronized at the very last responsible moment, during the current Session flush-time.

Detached: Once the currently running Persistence Context is closed, all the previously managed entities become detached. Successive changes will no longer be tracked and no automatic database synchronization is going to happen. To associate a detached entity with an active Hibernate Session, you can choose one of the following options:

Reattaching: Hibernate (but not JPA 2.1) supports reattaching through the Session#update method. A Hibernate Session can only associate one entity object with a given database row. This is because the Persistence Context acts as an in-memory cache (first level cache) and only one value (entity) is associated with a given key (entity type and database identifier). An entity can be reattached only if there is no other JVM object (matching the same database row) already associated with the current Hibernate Session.

Merging: The merge is going to copy the detached entity state (source) to a managed entity instance (destination). If the merging entity has no equivalent in the current Session, one will be fetched from the database. The detached object instance will remain detached even after the merge operation.

Removed: Although JPA demands that only managed entities are allowed to be removed, Hibernate can also delete detached entities (but only through a Session#delete method call). A removed entity is only scheduled for deletion, and the actual database DELETE statement will be executed during Session flush-time.

Entity state transitions

To change an entity's state, we need to use one of the following entity management interfaces:

- EntityManager
- Session

These interfaces define the entity state transition operations we must explicitly call to notify Hibernate of the entity state change.
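As a compact illustration of these transitions, here is a sketch using plain JPA; the Book entity and the "bookshop" persistence unit are hypothetical stand-ins, not taken from the original article:

import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Persistence;

@Entity
class Book {
    @Id
    @GeneratedValue
    Long id;
    String title;

    void setTitle(String title) { this.title = title; }
}

public class EntityStateTransitions {

    public static void main(String[] args) {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("bookshop");

        EntityManager em = emf.createEntityManager();
        em.getTransaction().begin();
        Book book = new Book();             // New (Transient): unknown to any Persistence Context
        em.persist(book);                   // Persistent (Managed): INSERT scheduled for flush-time
        book.setTitle("Hibernate basics"); // tracked automatically, UPDATE issued at flush-time
        em.getTransaction().commit();
        em.close();                         // closing the context leaves 'book' Detached

        book.setTitle("Changed while detached"); // no longer tracked

        EntityManager em2 = emf.createEntityManager();
        em2.getTransaction().begin();
        Book managed = em2.merge(book);     // copies the detached state onto a managed instance
        em2.remove(managed);                // Removed: DELETE executed during flush
        em2.getTransaction().commit();
        em2.close();
        emf.close();
    }
}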
At flush-time the entity state transition is materialized into a database SQL statement (INSERT/UPDATE/DELETE).

Reference: A beginner's guide to JPA/Hibernate entity state transitions from our JCG partner Vlad Mihalcea at the Vlad Mihalcea blog.

Keyword extraction and similarity calculation among textual content

Background

Web applications are becoming smarter. Gone are the days when, to use a service on a website, a user had to fill up a giant form. Let's say you have a website for book lovers. Before web 2.0, sites like these used to ask the user all kinds of questions in a form: age, books they have read, types of books they like, language preference, author preference etc. Nowadays it is common practice to ask the user to write a paragraph about themselves (a profile). In this text the user expresses some details, but the challenge is: how do we extract useful information from such free-form text, and moreover, how do we find users who have similar interests? This use case has become so common that every Java developer should know some tricks about information retrieval from text. In this article, I shall walk you through one simple yet effective way to do it.

Processes to extract information from text

Filter Words: Read the textual content word by word and remove unwanted words. As part of this filtering stage, remove all commonly used English words. One can also apply censor rules and remove sexually explicit words, hate speech etc.

Perform Stemming: Words like 'search', 'searched' or 'searching' all mean 'search'. This process of reducing a word to its root is called stemming.

Calculate Similarity: After the first two steps, we now have a set of keywords that truly represent the original text (the user profile in this example). We can treat these keywords as a set of unique words. To calculate the similarity between two user profiles, it helps to express similarity as a number on a 0 (not similar) to 1 (completely similar) scale. One way to achieve that is to calculate the Jaccard index, which is used to calculate the similarity or diversity of sets:

Jaccard index J(A,B) = |A ∩ B| / |A ∪ B|, where A and B are sets and J(A,B) lies between 0 and 1.

Implementation Details

Based on the points outlined above, one could develop a library from scratch to extract keywords and calculate similarity. However, Apache Lucene is a Java library that has plenty of APIs to perform keyword extraction. Here is a brief description of the important areas of this API.

Tokenizer: A tokenizer splits your text into chunks. There are different tokenizers, and depending on the tokenizer you use, you can get different output token streams (sequences of chunks of text).

Stemmers: Stemmers are used to get the base of a word in question. This heavily depends on the language used. Words like 'search', 'searched', 'searching' etc. come from the root word 'search'. In the information retrieval field it is very useful to get to the root words, as that reduces noise and with fewer words we can still carry the intent of the document. One famous stemming algorithm is the Porter Stemmer algorithm.

TokenFilter: A token filter can be applied to the tokenizer output to normalize or filter tokens, like the LowerCaseFilter, which normalizes token text to lower case, or the StopFilter, which suppresses the most frequent and almost useless words. Again, this heavily depends on the language. For English these stop words are "a", "the", "I", "be", "have", etc.

Analyzer: An analyzer is the higher-level class that uses a tokenizer to produce tokens from input, uses stemmers to reduce the tokens, and uses filters to suppress/normalize the tokens. This is the class that glues the other three main components together. Different analyzers use different combinations of tokenizers and filters. For example, StandardAnalyzer uses StandardTokenizer to extract tokens from the string, passes them through LowerCaseFilter to convert the tokens into lower case, and then passes the stream of tokens through StopFilter to remove the most commonly used English words. It does not perform stemming by default. One can develop a custom analyzer by mixing and matching tokenizers and token filters according to the need.
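To give an idea of how an analyzer is consumed, here is a minimal sketch of collecting keywords from a TokenStream. It is written against the Lucene 3.x-style API the article uses; exact reset/close requirements vary between Lucene versions:

import java.io.IOException;
import java.io.StringReader;
import java.util.LinkedHashSet;
import java.util.Set;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class KeywordCollector {

    public static Set<String> keywordsOf(Analyzer analyzer, String text) throws IOException {
        Set<String> keywords = new LinkedHashSet<>();
        TokenStream stream = analyzer.tokenStream("contents", new StringReader(text));
        // Each token exposes its text through the CharTermAttribute.
        CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
        stream.reset();
        while (stream.incrementToken()) {
            keywords.add(term.toString());
        }
        stream.end();
        stream.close();
        return keywords;
    }
}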
Code walk through

The source code of this example can be accessed at https://github.com/shamikm/similarity. Below is a highlight of the steps.

Create a custom analyzer that performs the following:

1. Tokenize English words based on space, comma, period etc., using StandardTokenizer.
2. Convert the tokens into lower case using LowerCaseFilter.
3. Drop common English words using StopFilter.
4. Stem English words using the Porter stemmer.

From the StemmAnalyzer class:

@Override
public TokenStream tokenStream(String fieldName, Reader reader) {
    // (1) tokenize
    final StandardTokenizer src = new StandardTokenizer(matchVersion, reader);
    TokenStream tok = new StandardFilter(matchVersion, src);
    // (2) lower-case
    tok = new LowerCaseFilter(matchVersion, tok);
    // (3) remove stop words
    tok = new StopFilter(matchVersion, tok, getStopWords());
    // (4) stem
    return new PorterStemFilter(tok);
}

Once we have sets of words, it's easy to calculate the similarity between two sets. From the JaccardIndexBasedSimilarity class:

public double calculateSimilarity(String oneContent, String otherContent) {
    Set<String> keyWords1 = keywordGenerator.generateKeyWords(oneContent);
    Set<String> keyWords2 = keywordGenerator.generateKeyWords(otherContent);
    Set<String> denominator = Sets.union(keyWords1, keyWords2);
    Set<String> numerator = Sets.intersection(keyWords1, keyWords2);

    return denominator.size() > 0 ? (double) numerator.size() / (double) denominator.size() : 0;
}

Here is a sample test case that demonstrates how the code works:

@Test
public void calculateSim() {
    SimilarityCalculator calculator = new JaccardIndexBasedSimilarity();
    Assert.assertEquals(calculator.calculateSimilarity("They Licked the platter clean", "Jack Sprat could eat no fat"), 0.0);
    // 1 (lamb) out of 6 (littl, lamb, mari, had, go, sure) words is shared
    Assert.assertEquals(calculator.calculateSimilarity("Mary had a little lamb", "The lamb was sure to go."), 0.16, 0.02);
    Assert.assertEquals(calculator.calculateSimilarity("Mary had a little lamb", "Mary had a little lamb"), 1.0);
}

You can run this process offline to find out how similar one user profile is to any other user in your database, and start recommending users based on what similar users are reading.

Conclusion

Information retrieval from text is a common use case nowadays. Having basic knowledge of this critical field is helpful for any developer, and in this article we looked at how the Apache Lucene API can be used effectively to extract keywords and calculate similarity among texts.

Resources:

http://en.wikipedia.org/wiki/Jaccard_index
http://tartarus.org/martin/PorterStemmer/
http://www.manning.com/ingersoll/
http://www.amazon.com/Algorithms-Intelligent-Web-Haralambos-Marmanis/dp/1933988665

Clean JUnit Throwable-Tests with Java 8 Lambdas

Recently I was involved in a short online discussion on Twitter and Google+ concerning the question why the arrival of Java 8 lambda expressions makes the catch-exception library[1] obsolete. This was triggered by a brief announcement that the library will no longer be maintained, as lambdas will make it redundant. The answer I came up with at that time has a lot in common with the one presented by Rafał Borowiec in his well written post JUNIT: TESTING EXCEPTION WITH JAVA 8 AND LAMBDA EXPRESSIONS. Giving both approaches a second thought, however, I believe one could do even a bit better with respect to clean code. So this post is a trackback on that topic which shares my latest considerations and concisely explains a slightly refined solution. This way I hopefully will find out about the weak points soon...

Motivation

While writing tests I always strive to end up with a clear visual separation of the arrange/act/assert[2] phases in a test method (and I am under the impression that it is getting more and more popular to emphasize those phases optically by using empty lines as separators). Now it seems to me that the catch-exception solutions mentioned above mix the act and assert phases more or less together. This is because both assert that a Throwable has been thrown while still being in the act phase. But an assertion apparently belongs to the assert phase. Fortunately this problem can be solved easily.

Refinement

Let's have a look at a simple example to explain what the refined approach might look like. I start with a class that provides a method throwing an IllegalStateException for demonstration purposes:

public class Foo {

    static final String ERR_MESSAGE = "bad";

    public void doIt() throws IllegalStateException {
        throw new IllegalStateException(ERR_MESSAGE);
    }
}

The next snippet introduces a little helper that is responsible for capturing a Throwable thrown during the act phase of a JUnit test. Note that it does not assert anything by itself. It simply returns the captured Throwable, if any, or null otherwise.

public class ThrowableCaptor {

    public interface Actor {
        void act() throws Throwable;
    }

    public static Throwable captureThrowable(Actor actor) {
        Throwable result = null;
        try {
            actor.act();
        } catch (Throwable throwable) {
            result = throwable;
        }
        return result;
    }
}

To highlight that the ThrowableCaptor is used to deal with the act phase of a JUnit test, the captureThrowable method takes a parameter of type Actor - which admittedly might overdo the metaphor a bit... Anyway, with that utility in place, AssertJ for clean matcher expressions, static imports and Java 8 lambdas at hand, an exception test might look like this:

public class FooTest {

    @Test
    public void testException() {
        // arrange
        Foo foo = new Foo();

        // act
        Throwable actual = captureThrowable(foo::doIt);

        // assert
        assertThat(actual)
            .isInstanceOf(IllegalStateException.class)
            .hasMessage(Foo.ERR_MESSAGE);
    }
}

For clarification I have inserted comments to depict the clear separation of the three phases in the test method. In case no exception is thrown, the assert block would quit this with an assertion error noting that 'Expecting actual not to be null'[3].

Conclusion

By moving the Throwable existence check from the act to the assert phase, the catch-exception approach based on Java 8 lambda expressions allows writing such tests in a pretty clean way - at least from my current point of view. So what do you think? Am I missing something?
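As a small aside, the captor works just as well with an explicit lambda as with a method reference, for example when the act phase needs arguments (doItWith is a hypothetical method, shown only to illustrate the syntax):

// hypothetical variant of the act phase with an explicit lambda
Throwable actual = captureThrowable(() -> foo.doItWith("arg"));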
[1] In order to make exception testing cleaner, the catch-exception library catches exceptions in a single line of code and makes them available for further analysis.
[2] See Practical Unit Testing, Chapter 3.9. Phases of a Unit Test, Tomek Kaczanowski 2013; often also denoted as the build-operate-check pattern, Clean Code, Chapter 9. Unit Tests, Robert C. Martin 2009.
[3] The Assertion#isNotNull check is implicitly called by Assertion#isInstanceOf, but it can of course also be called explicitly.

Reference: Clean JUnit Throwable-Tests with Java 8 Lambdas from our JCG partner Frank Appel at the Code Affine blog.

A closer look at Oracle IDM Auditing

Reporting is a vital functionality in any product which deals with sensitive information. The same applies to Identity & Access Management tools. Oracle IDM's Auditing module acts as a foundation for its OOTB (out-of-the-box) Reporting capabilities. Let's take a quick look at the Auditing engine and how it facilitates the Reporting functionality within OIM. The use case presented here is simple: a change to a user record in OIM. What is the sequence of events which gets triggered from an audit perspective? This is best explained by a diagram; I came up with the figure below in an attempt to better articulate the process. Although the diagram is self-explanatory, a theoretical translation of the same is not going to harm us!

1. The updated/created user record gets pushed into the USR table (which stores the user information). This is the normal process by which the information gets recorded in the OIM database.
2. The information is further propagated by the OIM Auditing engine (as a part of the core back-end server logic), which initiates an audit transaction.
3. The Audit engine inserts a new entry in the AUD_JMS table as a part of the audit transaction completion. The AUD_JMS table is nothing but a staging table.
4. The Issue Audit Messages scheduled job picks up the audit messages in the AUD_JMS table and submits the key to the oimAuditQueue JMS queue.
5. The MDB corresponding to the queue initiates the audit data processing: the data is seeded into the UPA table in the form of XML. These are snapshots of the user profile at the instant when the user record was actually modified/created. The UPA table also stores the delta (changes to the profile).
6. Finally, the post-processors of the Audit engine pick up the XML snapshots from the central UPA table and store them in specific audit tables (in a de-normalized format) like UPA_USR, UPA_USR_FIELDS, UPA_RESOURCE, UPA_UD_FORMS etc.
7. These tables serve as the primary source of information for the Reporting module. If you have ever worked with the OIM Reporting module, I am sure you can relate to the Data Sources which you configure on your BI Publisher instance: these execute direct queries on the above-mentioned audit tables for their data.

That's pretty much it! This was not a coverage of the entire Audit module in OIM, but a preview of HOW the process is orchestrated at a high level. Thanks for reading!

Reference: A closer look at Oracle IDM Auditing from our JCG partner Abhishek Gupta at the Object Oriented.. blog.
agile-logo

Nightmare on Agile Street

I'm awake. I'm lying in my bed. I'm sweating but I'm cold. It's the small hours of the morning and the dream is as vivid as it is horrid....

I'm standing in a client's offices. I've been here before, I know what's happening. They are building a website. Quite a complex one; this will be the primary purchasing venue for many customers. This will project the company image – and with the right bits it can up-sell to customers – it can even help reduce costs by servicing the customers after the sale. All good stuff. But it is atrociously "behind" schedule: someone said it would be finished in a year, and that was three years ago, before any code was written. Now it's two years to completion, but in my dream people say 2+3=2. How can that be?

I can't say it, but the only way out I can see is cancellation. If I was suddenly in charge of the client I'd cancel the thing. I'd salvage what I could and I'd launch a new, smaller initiative to replace the website. But it's too big to fail; even the board knows how much money they are spending. Who's going to walk in there and say: "Scrap it"? Saying "Scrap it" would be to admit one failure and invite a messenger shooting. And if I was the head of the supplier I'd say the same thing. I'd say to my customer: "I know I'm earning oodles of cash out of this, I know it's a high profile feather in our cap, but really it's out of control, you really shouldn't continue." But of course they won't. Forget the money they'd lose, they weren't hired to answer back – like my tailor friend. And of course I'm neither of those. I'm just the guy having the nightmare, and in the nightmare I'm the consultant who is trying to fix it. In the nightmare I'm not fixing it, I'm providing cover: while I'm there it's Agile, while it's Agile it's good. Agile is a good drug and I'm the pusher.

"You can't cancel it because all the competitors have one and so we must have one," a ghostly apparition tells me. "We must be best in class," says another apparition. "We must be head-and-shoulders above the opposition," says a third – aren't the opposition seven times the size? And don't the competition buy large parts of their solution off the shelf?

But every time I look, the work seems to grow. Every discussion ends in more stories. Not just stories: epics, super-stories, sub-epics, mezzanine-stories. But it's OK, this is Agile. The business keeps throwing new requests at it which are just accepted – because they are Agile! Some of these are quite big. But that's OK because the team are Agile. And Agile means the team do what the business wants, right?

I watch the Analysts work over the stories in the backlog; as they do, each grows and replicates like an alien parasite. The Analysts find more edge cases, extra detail which needs to be included, more scenarios which need to be catered for. Each becomes a story itself. But that's OK because the team are Agile. And those damn competitors don't stop adding and improving their site, which means the client must add that too. But that's OK because the team are Agile.

And the points.... points are the new hours; in the dream I have a book, "The Mythical Man Point". The backlog is measured in thousands of points. The burn-down charts go down – but only if you look at the sprint burn-down. Hunt around Jira and you can find a project-wide burn-down. O my god, no..... it's full of stories! This is not a burn-down chart carrying us to a safe landing, it's a fast climbing interceptor... The backlog is a demon... it's... it's... undead. The faces of those who've seen the chart are prematurely aged.
Open Jira and show someone the chart and.... their hair turns grey, the wrinkles appear, in moments they are....

One man is immune. As the points grow his power grows. He is... he is... The Product Owner. He introduces himself: "Snape is the name, Severus Snape" – I knew I'd seen him somewhere before. In the planning meeting he sees the poker cards pulled out, he focuses on the developer with the highest score, there is a ray of cutting sarcasm... he withers. The developers submit, the numbers are lowered. The Product Owner chuckles to himself – no over-estimating on his watch! One of the developers suggests: "Maybe we should wait until we finish the current work." Snape sneers: "I thought you were Agile, boy?" "If you can't handle it I have some friends in Transylvania who are really Agile.... do you want to lose the contract, boy? ... Off-shore is so, so cheap..."

There is a reality distortion field around the Product Owner. Show him a burn-down chart and it looks good; his presentations to the steering committee always show a perfect burn-down. I'm in my pyjamas standing outside the building at night: a sinister looking figure is breaking and entering. He sneaks into the building, he opens Jira and... inserts stories! His mask falls, it is.... The Product Owner! Of course: without stories in the backlog he would cease to exist. His power comes from the size of the backlog, more stories more power. Ever since his boss came down with a rare form of chronic flu, a link in the reporting chain has been missing. Made worse when the next man up was dismissed for inappropriate behaviour in the canteen. Since then the Product Owner reports to the COO, a COO who doesn't really have time for him and only has a shaky understanding of any IT related topic.

I do the maths. The backlog isn't so much a backlog as a mortgage, and the team are under water! The payments they make against the mortgage aren't even covering the growth in stories. The backlog growth is an interest rate they can't pay. It takes months for stories to progress through the backlog and reach developers. When work finally gets to developers they too uncover more edge cases, more details, more scenarios, more of just about everything. Why didn't the Analysts find these? Did they find them and then lose them?

Then there is a stream of bugs coming in – oozing through the walls. The technical practices aren't solid, they are... custard! Bugs get caught but more get through! Bugs can't be fixed because "bugs are OpEx and we are funded from CapEx." Someone has slain the Bug Fixing Fairy; her body is found slumped in the corner, a nice young girl straight out of college. They are hiring another fresh young graduate to send to the slaughter; fortunately Bug Fixing Fairies are plug compatible with one another.

Release dates can't be honoured. Woody Allen and Annie Hall walk in – since when did Woody Allen do horror films? 'Two elderly women are at a Catskill mountain resort, and one of 'em says, "Boy, the food at this place is really terrible." The other one says, "Yeah, I know; and such small portions."'

I have X-ray vision: I can see WIP where it lies, there are piles of it on the floor. It's stacked up like beer barrels in a brewery. But the beer isn't drinkable. It's a fiendish plan. If anyone drinks that beer, if the WIP is shipped, they will discover.... it's full of holes! Quality control is... offshore. Why is there so much WIP lying around? Why is the WIP rising? Because they are Agile, comes the reply... the business can change their mind at any time, and they do.
I'm drowning in WIP. WHIP to the left of me, WHIP to the right of me. The developers are halfway through a piece of work and the team are told to put it to one side and do something else. Nothing gets delivered, everything is half baked. WHIP – work hopefully in process, that is. When, and IF, the team return to a piece of WHIP things have changed, the team members might have changed, so picking it up isn't easy. WHIP goes off, the stench of slowly rotting software. But that's OK because the team are Agile. Arhhh, the developers are clones, they are plug compatible, you can switch them in and out as you like... but they have no memory....

It gets worse: the client has cunningly outsourced their network ops to another supplier, and their support desk to another one, and the data-centre to another... no one contractor has more than one contract. It's a perverse form of WIP limit: no supplier is allowed more than one contract. O my god, I'm flying through the data centre. The data centre supplier has lost control, there are creepers everywhere, each server is patched in a different way, there is a stack of change configuration requests in a dark office. I approach the clerk, it's, it's.... Terry Gilliam. The data centre is in Brazil....

Even when the business doesn't change its mind the development team get stuck. They have dependencies on other teams and on some other sub-contractor. So work gets put to one side again: more WIP. All roads lead to Dounreay in Scotland, a really good place if you want to build something really dangerous, but why does this project require a fast breeder nuclear reactor? But that's OK because the team are Agile.

The supplier is desperate to keep their people busy; if The Product Owner sees a programmer whose fingers are not moving on the keyboard he turns them to stone. The team manager is desperate to save his people; he rummages in the backlog and finds... a piece of work they can do. (With a backlog that large you can always find something, even if the business value is negative – and there are plenty of those.) You can't blame the development team: they need to look busy, they need to justify themselves, so they work on what they can. But that's OK because the team are Agile.

Get me out of here!!!!!

I'm in my kitchen. My hands are wrapped around a hot chocolate. I need a fresh pair of dry pyjamas but that can wait while I calm down. I've wrapped a blanket around me and have the shivers under control.

Are they Agile? Undoubtedly it was sold as Agile. It certainly ain't pretty, but it is called Agile. They have iterations. They have planning meetings. They have burn-downs. They have a Scrum Master and they have Jira. They have User Stories. They have some, slow, automated acceptance tests; some developers are even writing automated unit tests.

How could it have gone so wrong? Sure, the development team could be better. You could boost the supply curve. But that would be like administering morphine: the pain would be relieved for a while but the fever would return, and it would be worse. The real problem is elsewhere. The real problem is rampant demand. The real problem is poor client management. The real problem is a client who isn't looking at the feedback. The real problems are multiple; that's what is so scary about the dream. They are all interconnected. In the wee small hours I see no way of winning. It's a quagmire: to save this project we need to destroy this project. But we all know what happened in Vietnam.

What is to be done?
I can't go back to sleep until I have an answer. Would the team be better off doing Waterfall? The business would still change its mind, project management would put a change request process in place, and the propagation delay would be worse. There would probably be more bugs – testing would be postponed. Releases would be held back. This would look better for a few months, until they came to actually test and release. If they did waterfall, if they did a big requirements exercise, a big specification, a big design, a big estimation and a big plan, they might not choose to do it at all. But frankly, Agile is telling them clearly that this will never be done. In fact it's telling them with a lot more certainty, because they are several years in and have several years of data to look at. Agile is the cover. Because they are Agile they are getting more rope to hang themselves with.

But all this is a dream, a horrid dream, none of this ever happened.

Reference: Nightmare on Agile Street from our JCG partner Allan Kelly at the Agile, Lean, Patterns blog.

Devops isn’t killing developers – but it is killing development and developer productivity

Devops isn't killing developers – at least not any developers that I know. But Devops is killing development, or the way that most of us think of how we are supposed to build and deliver software. Agile loaded the gun. Devops is pulling the trigger.

Flow instead of Delivery

A sea change is happening in the way that software is developed and delivered. Large-scale waterfall software development projects gave way to phased delivery and Spiral approaches, and then to smaller teams delivering working code in time boxes using Scrum or other iterative Agile methods. Now people are moving on from Scrum to Kanban, and to One-Piece Continuous Flow with immediate and Continuous Deployment of code to production in Devops.

The scale and focus of development continue to shrink, and so does the time frame for making decisions and getting work done: from phases and milestones and project reviews, to sprints and sprint reviews, to Lean controls over WIP limits and task-level optimization. The size of deliverables shrinks too: from what a project team could deliver in a year, to what a Scrum team could get done in a month or a week, to what an individual developer can get working in production in a couple of days or a couple of hours.

The definition of "Done" and "Working Software" changes from something that is coded and tested and ready to demo to something that is working in production – now ("Done Means Released"). Continuous Delivery and Continuous Deployment replace Continuous Integration. Rapid deployment to production doesn't leave time for manual testing or for manual testers, which means developers are responsible for catching all of the bugs themselves before code gets to production – or for doing their testing in production and trying to catch problems as they happen (aka "Monitoring as Testing").

Because Devops brings developers much closer to production, operational risks become more important than project risks, and operational metrics become more important than project metrics. System uptime and cycle time to production replace Earned Value or velocity. The stress of hitting deadlines is replaced by the stress of firefighting in production and being on call.

Devops isn't about delivering a project or even delivering features. It's about minimizing lead time and maximizing the flow of work to production: recognizing and eliminating junk work, delays and hand-offs, improving system reliability, cutting operational costs, building feedback loops from production back to development, and standardizing and automating steps as much as possible. It's more manufacturing and process control than engineering.

Devops kills Developer Productivity too

Devops also kills developer productivity. Whether you try to measure developer productivity by LOC or Function Points or Feature Points or Story Points or velocity or some other measure of how much code is written, less coding gets done because developers are spending more time on ops work and dealing with interruptions, and less time writing code: time learning about the infrastructure and the platform, understanding how it is set up and making sure that it is set up right; time building Continuous Delivery and Continuous Deployment pipelines and keeping them running.
Helping ops to investigate and resolve issues, responding to urgent customer requests and questions, looking into performance problems, monitoring the system to make sure that it is working correctly, helping to run A/B experiments, pushing changes and fixes out... all of this takes time away from development and pre-empts thinking about requirements and designing and coding and testing (the work that developers are trained to do and are good at).

The Impact of Interruptions and Multi-Tasking

You can't protect developers from interruptions and changes in priorities in Devops, even if you use Kanban with strict WIP limits, even in a tightly run shop – and you don't want to. Developers need to be responsive to operations and customers, react to feedback from production, jump on problems and help detect and resolve failures as quickly as possible. This means everyone, especially your most talented developers, needs to be available for ops most if not all of the time.

Developers join ops on call after hours, which means carrying a pager (or being chased by Pager Duty) after the day's work is done. Add the time wasted on support calls for problems that end up not being real problems, the long nights and weekends firefighting, tracking down production issues and helping to recover from failures, and coming in tired the next day to spend more time on incident dry runs, testing failover and roll-forward and roll-back recovery, and participating in post mortems and root cause analysis sessions when something goes wrong and the failover or roll-forward or roll-back doesn't work.

You can't plan for interruptions and operational problems, and you can't plan around them. Which means developers will miss their commitments more often. Then why make commitments at all? Why bother planning or estimating? Use just-in-time prioritization instead to focus on the most important thing that ops or the customer needs at the moment, and deliver it as soon as you can – unless something more important comes up and pre-empts it.

As developers take on more ops and support responsibilities, multi-tasking and task switching – and the interruptions and inefficiency that come with them – increase, fracturing time and destroying concentration. This has an immediate drag on productivity, and a longer-term impact on people's ability to think and to solve problems.

Even the Continuous Deployment feedback loop itself is an interruption to a developer's flow. After a developer checks in code, running unit tests in Continuous Integration is supposed to be fast, a few seconds or minutes, so that they can keep moving forward with their work. But deploying immediately to production means running through a more extensive set of integration tests and system tests and other checks in Continuous Delivery (more tests and more checks take more time), then executing the steps through to deployment, and then monitoring production to make sure that everything worked correctly, and jumping in if anything goes wrong. Even if most of the steps are automated and optimized, all of this takes extra time and the developer's attention away from working on code. Optimizing the flow of work in and out of operations means sacrificing developer flow, and slowing down development work itself.

Expectations and Metrics and Incentives have to Change

In Devops, the way that developers (and ops) work changes, and the way that they need to be managed changes. It's also critical to change expectations and metrics and incentives for developers.
Devops success is measured by operational IT metrics, not by meeting project delivery goals of scope, schedule and cost, not by meeting release goals or sprint commitments, or even meeting product design goals:

- How fast the team can respond to important changes and problems: Change Lead Time and Cycle Time to production instead of delivery milestones or velocity
- How often they push changes to production (which is still the metric that most people are most excited about – how many times per day or per hour or minute Etsy or Netflix or Amazon deploy changes)
- How often they make mistakes – the Change/Failure ratio
- System reliability and uptime – MTBF and especially MTTD and MTTR
- Cost of change – and overall Operations and Support costs

Devops is more about Ops than Dev

As more software is delivered earlier and more often to production, development turns into maintenance. Project management is replaced by incident management and task management. Planning horizons get much shorter – or planning is replaced by just-in-time queue prioritization and triage.

With Infrastructure as Code, ops become developers, designing and coding infrastructure and infrastructure changes, thinking about reuse and readability and duplication and refactoring, technical debt and testability, and building on TDD to implement TDI (Test Driven Infrastructure). They become more agile and more Agile, making smaller changes more often, spending more time programming and less on paperwork.

And developers start to work more like ops: taking on responsibilities for operations and support, putting operational risks first, caring about the infrastructure, building operations tools, finding ways to balance immediate short-term demands for operational support with longer-term design goals.

None of this will be a surprise to anyone who has been working in an online business for a while. Once you deliver a system and customers start using it, priorities change, and everything about the way that you work and plan has to change too.

This way of working isn't necessarily better or worse for developers. But it is fundamentally different from how many developers think and work today. More frenetic and interrupt-driven. At the same time, more disciplined and more Lean. More transparent. More responsibility and accountability. Less about development and more about release and deployment and operations and support. Developers – and their managers – will need to get used to being part of the bigger picture of running IT, which is about much more than designing apps and writing and delivering code. This might be the future of software development. But not all developers will like it, or be good at it.

Reference: Devops isn't killing developers – but it is killing development and developer productivity from our JCG partner Jim Bird at the Building Real Software blog.