
Spring 4.1 and Java 8: java.util.Optional

As of Spring 4.1, Java 8's java.util.Optional, a container object which may or may not contain a non-null value, is supported with @RequestParam, @RequestHeader and @MatrixVariable. Using java.util.Optional you make sure your parameters are never null.

Request Params

In this example we will bind java.time.LocalDate as java.util.Optional using @RequestParam:

@RestController
@RequestMapping("o")
public class SampleController {

    @RequestMapping(value = "r", produces = "text/plain")
    public String requestParamAsOptional(
            @DateTimeFormat(iso = DateTimeFormat.ISO.DATE)
            @RequestParam(value = "ld") Optional<LocalDate> localDate) {

        StringBuilder result = new StringBuilder("ld: ");
        localDate.ifPresent(value -> result.append(value.toString()));
        return result.toString();
    }
}

Prior to Spring 4.1, we would get an exception that no matching editors or conversion strategy was found. As of Spring 4.1, this is no longer an issue. To verify that the binding works properly, we may create a simple integration test:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
@WebAppConfiguration
public class SampleSomeControllerTest {

    @Autowired
    private WebApplicationContext wac;

    private MockMvc mockMvc;

    @Before
    public void setUp() throws Exception {
        mockMvc = MockMvcBuilders.webAppContextSetup(wac).build();
    }

    // ...
}

In the first test, we will check if the binding works properly and if the proper result is returned:

@Test
public void bindsNonNullLocalDateAsRequestParam() throws Exception {
    mockMvc.perform(get("/o/r").param("ld", "2020-01-01"))
           .andExpect(content().string("ld: 2020-01-01"));
}

In the next test, we will not pass the ld parameter:

@Test
public void bindsNoLocalDateAsRequestParam() throws Exception {
    mockMvc.perform(get("/o/r"))
           .andExpect(content().string("ld: "));
}

Both tests should be green!
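As a side note, Optional also makes supplying a default value trivial when the parameter is absent. The sketch below is my own variation on the controller above, not part of the original sample; the "r2" mapping is hypothetical:

@RequestMapping(value = "r2", produces = "text/plain")
public String requestParamWithDefault(
        @DateTimeFormat(iso = DateTimeFormat.ISO.DATE)
        @RequestParam(value = "ld") Optional<LocalDate> localDate) {

    // orElse() falls back to today's date when no "ld" parameter was sent
    return "ld: " + localDate.orElse(LocalDate.now());
}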
Request Headers

Similarly, we can bind @RequestHeader to java.util.Optional:

@RequestMapping(value = "h", produces = "text/plain")
public String requestHeaderAsOptional(
        @RequestHeader(value = "Custom-Header") Optional<String> header) {

    StringBuilder result = new StringBuilder("Custom-Header: ");
    header.ifPresent(value -> result.append(value));
    return result.toString();
}

And the tests:

@Test
public void bindsNonNullCustomHeader() throws Exception {
    mockMvc.perform(get("/o/h").header("Custom-Header", "Value"))
           .andExpect(content().string("Custom-Header: Value"));
}

@Test
public void noCustomHeaderGiven() throws Exception {
    mockMvc.perform(get("/o/h").header("Custom-Header", ""))
           .andExpect(content().string("Custom-Header: "));
}

Matrix Variables

Introduced in Spring 3.2, the @MatrixVariable annotation indicates that a method parameter should be bound to a name-value pair within a path segment:

@RequestMapping(value = "m/{id}", produces = "text/plain")
public String execute(@PathVariable Integer id,
                      @MatrixVariable Optional<Integer> p,
                      @MatrixVariable Optional<Integer> q) {

    StringBuilder result = new StringBuilder();
    result.append("p: ");
    p.ifPresent(value -> result.append(value));
    result.append(", q: ");
    q.ifPresent(value -> result.append(value));
    return result.toString();
}

The above method can be called via the /o/m/42;p=4;q=2 URL. Let's create a test for that:

@Test
public void bindsNonNullMatrixVariables() throws Exception {
    mockMvc.perform(get("/o/m/42;p=4;q=2"))
           .andExpect(content().string("p: 4, q: 2"));
}

Unfortunately, the test will fail, because support for the @MatrixVariable annotation is disabled by default in Spring MVC. In order to enable it we need to tweak the configuration and set the removeSemicolonContent property of RequestMappingHandlerMapping to false (by default it is set to true). I have done this with a WebMvcConfigurerAdapter, like below:

@Configuration
public class WebMvcConfig extends WebMvcConfigurerAdapter {

    @Override
    public void configurePathMatch(PathMatchConfigurer configurer) {
        UrlPathHelper urlPathHelper = new UrlPathHelper();
        urlPathHelper.setRemoveSemicolonContent(false);
        configurer.setUrlPathHelper(urlPathHelper);
    }
}

And now all tests pass! Please find the source code for this article here: https://github.com/kolorobot/spring41-samples

Reference: Spring 4.1 and Java 8: java.util.Optional from our JCG partner Rafal Borowiec at the Codeleak.pl blog.

How designing for the cloud would improve your service implementation

Cloud platforms come with a variety of options and constraints, certainly driven by infrastructure and business needs, which diversify their pricing plans and customer adoption as well. But often they also have an implicit value: designing with certain constraints in mind can facilitate scaling and replication, provide performance gains and increase revenues. Are constraints such as request timeouts or limits on outbound traffic and datastore usage so bad after all? When designing for your private hosting, you might skip or ignore a few of them and miss an opportunity to improve your service design. Here are a few points worth considering:

- Design for monetization: no matter which PaaS you choose, it will always have constraints on network traffic, datastore connections, datastore space and so on. That's reasonable and obvious: they need to make a profit after all. When designing for the cloud you definitely need to face these constraints and try to minimize your expenses (review your data model, restrict required data, limit outbound traffic, add cache mechanisms and so on), because you certainly want to increase your monthly revenue as much as possible. Designing for your private hosting you would perhaps not have taken some of these parameters so seriously, only realizing their importance later on.
- Design for performance: as part of the previous point, you might need to adapt your design to fit some cloud constraints and obtain a considerable performance gain which wasn't directly linked to monetization (but would impact it somehow after all). You need to limit your outbound traffic, your request timeout, the number of queries on your database, any long-running process and so on: a bit of healthy pressure on these subjects would definitely not hurt your design.
- Design for latest technologies and standards: PaaS offerings usually don't support all existing technologies, nor all frameworks/tools per technology, but they normally offer the most common ones, well known and widely used, because they obviously need to target a large community and facilitate your deployment (and their profit). These constraints may change your decisions concerning build management, for instance: giving up on your well-proven Ant script and going for Maven on Heroku, or finally dropping a certain old-but-known library in favour of a more modern and effective one. That might have an impact on your roadmap because of unexpected learning curves, and you would not have faced this question on your nice private hosting, but it might be time to upgrade your competences while working on your project; sooner or later you may appreciate that. And it could facilitate the integration of new team members in the future.
- Design for abstraction: the majority of PaaS offerings also require some platform/API dependency which would make your application cloud-platform dependent, something you would definitely try to avoid. Adding further layers of abstraction may solve the issue: you may use Memcache or Bigtable on GAE, for instance, but your service shouldn't know that if you really want to keep smooth portability. It's an extra effort indeed, which may affect the decision of choosing one target PaaS rather than another, but it's worth it in case of future moves. You would probably not spend the time and energy on your more open remote or private hosting, but you may end up later in complex refactorings, wondering why a simple and clean abstraction wasn't part of your initial design. A minimal sketch of such an abstraction follows this list.
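To illustrate the abstraction point, here is a minimal sketch in Java: a hand-rolled cache interface that keeps service code ignorant of the concrete provider. The interface and class names are my own illustration, not from the original article:

// A provider-neutral cache abstraction: services depend on this interface only
public interface CacheStore {
    void put(String key, Object value);
    Object get(String key);
}

// One possible adapter; swapping GAE Memcache for Redis or a plain map
// touches only this class, never the services that use CacheStore
public class InMemoryCacheStore implements CacheStore {

    private final java.util.Map<String, Object> map =
            new java.util.concurrent.ConcurrentHashMap<>();

    @Override
    public void put(String key, Object value) {
        map.put(key, value);
    }

    @Override
    public Object get(String key) {
        return map.get(key);
    }
}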
Conclusion

While you would reasonably ignore cloud constraints when designing for your private hosting, you might gain quick added value just by keeping a few of these constraints in mind and challenging your design against them: how would your design react? Would it be easy to deploy your application elsewhere? Would your monetization improve by adapting your design to any of them? The exercise is worth the effort as long as it doesn't sound completely artificial and you do see some potential added value. There are plenty of non-functional requirements you might just have left out, and possibly you might also consider using a cloud platform for your final deployment. Pick two or three of the most used ones (AWS, GAE, OpenShift, Heroku and CloudBees are worth mentioning, but they are not the only ones) and challenge your design against a hypothetical deployment.

Reference: How designing for the cloud would improve your service implementation from our JCG partner Antonio Di Matteo at the Refactoring Ideas blog.

A beginner’s guide to JPA/Hibernate entity state transitions

Introduction

Hibernate shifts the developer mindset from SQL statements to entity state transitions. Once an entity is actively managed by Hibernate, all changes are going to be automatically propagated to the database. Manipulating domain model entities (along with their associations) is much easier than writing and maintaining SQL statements. Without an ORM tool, adding a new column requires modifying all associated INSERT/UPDATE statements. But Hibernate is no silver bullet either. Hibernate doesn't free us from ever worrying about the actual executed SQL statements. Controlling Hibernate is not as straightforward as one might think, and it's mandatory to check all SQL statements Hibernate executes on our behalf.

The entity states

As I previously mentioned, Hibernate monitors currently attached entities. But for an entity to become managed, it must be in the right entity state. First we must define all entity states:

- New (Transient): A newly created object that hasn't ever been associated with a Hibernate Session (a.k.a. Persistence Context) and is not mapped to any database table row is considered to be in the New (Transient) state. To become persisted we need to either explicitly call the EntityManager#persist method or make use of the transitive persistence mechanism.
- Persistent (Managed): A persistent entity has been associated with a database table row and is being managed by the currently running Persistence Context. Any change made to such an entity is going to be detected and propagated to the database (during the Session flush-time). With Hibernate, we no longer have to execute INSERT/UPDATE/DELETE statements. Hibernate employs a "transactional write-behind" working style and changes are synchronized at the very last responsible moment, during the current Session flush-time.
- Detached: Once the currently running Persistence Context is closed, all the previously managed entities become detached. Successive changes will no longer be tracked and no automatic database synchronization is going to happen. To associate a detached entity to an active Hibernate Session, you can choose one of the following options:
  - Reattaching: Hibernate (but not JPA 2.1) supports reattaching through the Session#update method. A Hibernate Session can only associate one Entity object with a given database row. This is because the Persistence Context acts as an in-memory cache (first level cache) and only one value (entity) is associated with a given key (entity type and database identifier). An entity can be reattached only if there is no other JVM object (matching the same database row) already associated with the current Hibernate Session.
  - Merging: The merge is going to copy the detached entity state (source) to a managed entity instance (destination). If the merging entity has no equivalent in the current Session, one will be fetched from the database. The detached object instance will continue to remain detached even after the merge operation.
- Removed: Although JPA demands that only managed entities are allowed to be removed, Hibernate can also delete detached entities (but only through a Session#delete method call). A removed entity is only scheduled for deletion, and the actual database DELETE statement will be executed during Session flush-time.

Entity state transitions

To change one entity state, we need to use one of the following entity management interfaces:

- EntityManager
- Session

These interfaces define the entity state transition operations we must explicitly call to notify Hibernate of the entity state change.
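As a quick illustration of these transitions, here is a minimal sketch of mine using the JPA EntityManager API; the Post entity, its title field and the "pu" persistence unit are hypothetical:

// Assumes a hypothetical @Entity class Post with a generated id and a title field
EntityManagerFactory emf = Persistence.createEntityManagerFactory("pu");

EntityManager em = emf.createEntityManager();
em.getTransaction().begin();

Post post = new Post();           // New (Transient): unknown to the Persistence Context
em.persist(post);                 // Persistent (Managed): INSERT scheduled for flush-time

em.getTransaction().commit();
em.close();                       // Persistence Context closed: post is now Detached

post.setTitle("Changed");         // change is NOT tracked: no UPDATE will happen

EntityManager em2 = emf.createEntityManager();
em2.getTransaction().begin();

Post managed = em2.merge(post);   // detached state copied onto a managed instance
em2.remove(managed);              // Removed: DELETE executed at flush-time

em2.getTransaction().commit();
em2.close();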
At flush-time, the entity state transition is materialized into a database SQL statement (INSERT/UPDATE/DELETE).

Reference: A beginner's guide to JPA/Hibernate entity state transitions from our JCG partner Vlad Mihalcea at the Vlad Mihalcea blog.

Keyword extraction and similarity calculation among textual content

Background

Web applications are becoming smarter. Gone are the days when, to avail a service from a website, the user had to fill up a giant form. Let's say you have a website for book lovers. Before web 2.0, sites like these used to ask the user all kinds of questions in a form: age, books they read, types of book they like, language preference, author preference etc. Nowadays, it is common practice to ask the user to write a paragraph about themselves (a profile). In this note, the user expresses some details, but the challenge is: how do we extract useful information from such free-form text, and moreover, how do we find users who have similar interests? This use case has become so common that every Java developer should know some tricks about information retrieval from text. In this article, I shall walk you through one simple yet effective way to do it.

Processes to extract information from text

- Filter Words: Read textual content word by word and remove unwanted words. As part of this filtering stage, remove all commonly used English words. One can also apply censor rules and remove sexually explicit words or hate speech etc.
- Perform Stemming: Words like 'search', 'searched' or 'searching' all mean 'search'. This process of reducing a word to its root is called stemming.
- Calculate Similarity: After the first two steps, we now have a set of keywords that truly represent the original text (the user profile in this example). We can treat these keywords as a set of unique words. To calculate similarity between two user profiles, it would be better if we represent similarity as a number expressing how similar two contents are on a 0 (not similar) to 1 (completely similar) scale. One way to achieve that is to calculate the Jaccard index, which is used to calculate the similarity or diversity of sets:

Jaccard index J(A,B) = |A∩B| / |A∪B|, where A and B are sets and J(A,B) lies between 0 and 1.
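The index itself is only a few lines of plain Java, independent of any library. Here is a minimal sketch of mine (class and method names are illustrative):

import java.util.HashSet;
import java.util.Set;

public final class Jaccard {

    // J(A,B) = |A ∩ B| / |A ∪ B|; by convention returns 0 for two empty sets
    public static double index(Set<String> a, Set<String> b) {
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        if (union.isEmpty()) {
            return 0.0;
        }
        Set<String> intersection = new HashSet<>(a);
        intersection.retainAll(b);
        return (double) intersection.size() / union.size();
    }
}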
Implementation Details

Based on the points outlined above, one could develop a library to extract keywords and calculate similarity from scratch. However, Apache Lucene is a Java library with plenty of APIs to perform keyword extraction. Here is a brief description of the important areas of this API.

Tokenizer: A tokenizer splits your text into chunks. There are different tokenizers, and depending on the tokenizer you use, you can get different output token streams (sequences of chunks of text).

Stemmers: Stemmers are used to get the base of a word in question. This heavily depends on the language used. Words like 'search', 'searched', 'searching' etc. come from the root word 'search'. In the information retrieval field, it is very useful to get to the root words, as that reduces noise and with fewer words we can still carry the intent of the document. One of the famous stemming algorithms is the Porter Stemmer algorithm.

TokenFilter: A token filter can be applied to the tokenizer output to normalize or filter tokens, like LowerCaseFilter, which normalizes token text to lower case, or StopFilter, which suppresses the most frequent and almost useless words. Again, this heavily depends on the language. For English these stop words are "a", "the", "I", "be", "have", etc.

Analyzer: An analyzer is the higher-level class that uses tokenizers to produce tokens from input, stemmers to reduce the tokens, and filters to suppress/normalize the tokens. This is the class that glues the other three main components together. Different analyzers use different combinations of tokenizers and filters. For example, StandardAnalyzer uses StandardTokenizer to extract tokens from a string, passes them through LowerCaseFilter to convert the tokens into lower case, and then passes the stream of tokens through StopFilter to remove the most commonly used English words. It does not perform stemming by default. One can develop a custom analyzer by mixing and matching tokenizers and token filters according to the need.

Code walk-through

The source code of this example can be accessed from https://github.com/shamikm/similarity. Below is a highlight of the steps. Create a custom analyzer that performs the following:

1. Tokenize English words based on space, comma, period etc., using StandardTokenizer.
2. Convert the tokens into lower case using LowerCaseFilter.
3. Stop common English words using StopFilter.
4. Stem English words using the Porter stemmer.

From the StemmAnalyzer class:

@Override
public TokenStream tokenStream(String fieldName, Reader reader) {
    // (a) tokenize
    final StandardTokenizer src = new StandardTokenizer(matchVersion, reader);
    TokenStream tok = new StandardFilter(matchVersion, src);
    // (b) lower-case
    tok = new LowerCaseFilter(matchVersion, tok);
    // (c) remove stop words
    tok = new StopFilter(matchVersion, tok, getStopWords());
    // (d) stem
    return new PorterStemFilter(tok);
}

Once we have sets of words, it's easy to calculate the similarity between two sets. From the JaccardIndexBasedSimilarity class:

public double calculateSimilarity(String oneContent, String otherContet) {
    Set<String> keyWords1 = keywordGenerator.generateKeyWords(oneContent);
    Set<String> keyWords2 = keywordGenerator.generateKeyWords(otherContet);
    Set<String> denominator = Sets.union(keyWords1, keyWords2);
    Set<String> numerator = Sets.intersection(keyWords1, keyWords2);
    return denominator.size() > 0 ? (double) numerator.size() / (double) denominator.size() : 0;
}

Here is a sample test case to demonstrate how the code works:

@Test
public void calculateSim() {
    SimilarityCalculator calculator = new JaccardIndexBasedSimilarity();
    Assert.assertEquals(calculator.calculateSimilarity("They Licked the platter clean", "Jack Sprat could eat no fat"), 0.0);
    // 1 (lamb) out of 6 (littl, lamb, mari, had, go, sure) words are the same
    Assert.assertEquals(calculator.calculateSimilarity("Mary had a little lamb", "The lamb was sure to go."), 0.16, 0.02);
    Assert.assertEquals(calculator.calculateSimilarity("Mary had a little lamb", "Mary had a little lamb"), 1.0);
}

You can run this process offline, find out how similar one user profile is to the other users in your database, and start recommending users based on what similar users are reading.

Conclusion

Information retrieval from text is a common use case nowadays. Having basic knowledge of this critical field is helpful for any developer, and in this article we looked at how the Apache Lucene API can be used effectively to extract keywords and calculate similarity among texts.

Resources:

- http://en.wikipedia.org/wiki/Jaccard_index
- http://tartarus.org/martin/PorterStemmer/
- http://www.manning.com/ingersoll/
- http://www.amazon.com/Algorithms-Intelligent-Web-Haralambos-Marmanis/dp/1933988665

Clean JUnit Throwable-Tests with Java 8 Lambdas

Recently I was involved in a short online discussion on Twitter and Google+ concerning the question why the arrival of Java 8 lambda expressions makes the catch-exception library [1] obsolete. This was triggered by a brief announcement that the library will no longer be maintained, as lambdas will make it redundant. The answer I came up with at that time has a lot in common with the one presented by Rafał Borowiec in his well-written post JUNIT: TESTING EXCEPTION WITH JAVA 8 AND LAMBDA EXPRESSIONS. Giving both approaches a second thought, however, I believe one could do even a bit better with respect to clean code. So this post is a trackback on that topic which shares my latest considerations and explains concisely a slightly refined solution. This way I hopefully will find out about the weak points soon…

Motivation

While writing tests I always strive to end up with a clear visual separation of the arrange/act/assert [2] phases in a test method (and I am under the impression that it is getting more and more popular to emphasize those phases optically by using empty lines as separators). Now it seems to me that the catch-exception solutions mentioned above mix the act and assert phases more or less together. This is because both assert that a Throwable has been thrown while still being in the act phase. But an assertion apparently belongs to the assert phase. Fortunately this problem can be solved easily.

Refinement

Let's have a look at a simple example to explain what the refined approach might look like. I start with a class that provides a method throwing an IllegalStateException for demonstration purposes:

public class Foo {

    static final String ERR_MESSAGE = "bad";

    public void doIt() throws IllegalStateException {
        throw new IllegalStateException(ERR_MESSAGE);
    }
}

The next snippet introduces a little helper that is responsible for capturing a Throwable thrown during the act phase of a JUnit test. Note that it does not assert anything by itself. It simply returns the captured Throwable, if any, or null otherwise.

public class ThrowableCaptor {

    public interface Actor {
        void act() throws Throwable;
    }

    public static Throwable captureThrowable(Actor actor) {
        Throwable result = null;
        try {
            actor.act();
        } catch (Throwable throwable) {
            result = throwable;
        }
        return result;
    }
}

To highlight that the ThrowableCaptor is used to deal with the act phase of a JUnit test, the captureThrowable method takes a parameter of type Actor – which admittedly might overdo the metaphor a bit… Anyway, with that utility in place, AssertJ for clean matcher expressions, static imports and Java 8 lambdas at hand, an exception test might look like this:

public class FooTest {

    @Test
    public void testException() {
        // arrange
        Foo foo = new Foo();

        // act
        Throwable actual = captureThrowable( foo::doIt );

        // assert
        assertThat( actual )
            .isInstanceOf( IllegalStateException.class )
            .hasMessage( Foo.ERR_MESSAGE );
    }
}

For clarification I have inserted comments to depict the clear separation of the three phases in the test method. In case no exception is thrown, the assert block would quit this with an assertion error noting that 'Expecting actual not to be null' [3].
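The same helper also covers the inverse expectation – that a call must NOT throw. This variant is my own addition, not from the original post:

@Test
public void testNoException() {
    // arrange
    Foo foo = new Foo();

    // act: captureThrowable returns null when the actor completes normally
    Throwable actual = captureThrowable( () -> foo.toString() );

    // assert
    assertThat( actual ).isNull();
}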
Conclusion

By moving the Throwable existence check from the act to the assert phase, the catch-exception approach based on Java 8 lambda expressions allows writing such tests in a pretty clean way – at least from my current point of view. So what do you think? Am I missing something?

1. In order to make exception testing cleaner, the catch-exception library catches exceptions in a single line of code and makes them available for further analysis.
2. See Practical Unit Testing, Chapter 3.9. Phases of a Unit Test, Tomek Kaczanowski 2013; often also denoted as the build-operate-check pattern, Clean Code, Chapter 9. Unit Tests, Robert C. Martin 2009.
3. The Assertion#isNotNull check is implicitly called by Assertion#isInstanceOf, but it can of course also be called explicitly.

Reference: Clean JUnit Throwable-Tests with Java 8 Lambdas from our JCG partner Frank Appel at the Code Affine blog.

A closer look at Oracle IDM Auditing

Reporting is a vital functionality in any product which deals with sensitive information, and the same applies to Identity & Access Management tools. Oracle IDM's Auditing module acts as a foundation for its out-of-the-box (OOTB) Reporting capabilities. Let's take a quick look at the Audit engine and how it facilitates the Reporting functionality within OIM. The use case presented here is simple: a change to a user record in OIM. What is the sequence of events which gets triggered from an audit perspective? This is best explained by a diagram; I came up with the figure below in an attempt to better articulate the process. Although the diagram is self-explanatory, a theoretical translation of it is not going to harm us!

1. The updated/created user record gets pushed into the USR table (which stores the user information) – this is the normal process by which the information gets recorded in the OIM database.
2. The information is further propagated by the OIM Audit engine (as a part of the core back-end server logic), which initiates a transaction.
3. The Audit engine inserts a new entry in the AUD_JMS table as a part of the audit transaction completion. The AUD_JMS table is nothing but a staging table.
4. The Issue Audit Messages scheduled job picks up the audit messages in the AUD_JMS table and submits the key to the oimAuditQueue JMS queue.
5. The MDB corresponding to the queue initiates the audit data processing – the data is seeded into the UPA table. This data is in the form of XML: snapshots of the user profile at the instant when the user record was actually modified or created. The UPA table also stores the delta (changes to the profile).
6. Finally, the post processors of the Audit engine pick up the XML snapshots from the central UPA table and store them in specific audit tables (in a de-normalized format) like UPA_USR, UPA_USR_FIELDS, UPA_RESOURCE, UPA_UD_FORMS etc.

These tables serve as the primary source of information for the Reporting module. If you have ever worked on the OIM Reporting module, I am sure you can relate to the Data Sources which you configure on your BI Publisher instance – these execute direct queries on the above-mentioned audit tables for their data.

That's pretty much it! This was not coverage of the entire Audit module in OIM, but a preview of HOW the process is orchestrated at a high level. Thanks for reading!

Reference: A closer look at Oracle IDM Auditing from our JCG partner Abhishek Gupta at the Object Oriented.. blog.

Nightmare on Agile Street

I'm awake. I'm lying in my bed. I'm sweating but I'm cold. It's the small hours of the morning and the dream is as vivid as it is horrid….

I'm standing in a client's offices. I've been here before, I know what's happening. They are building a website. Quite a complex one: this will be the primary purchasing venue for many customers. This will project the company image – and with the right bits it can up-sell to customers – it can even help reduce costs by servicing the customers after the sale. All good stuff. But it is atrociously "behind" schedule. Someone said it would be finished in a year; that was three years ago, before any code was written. Now it's two years to completion, but in my dream people say 2+3=2. How can that be?

I can't say it, but the only way out I can see is cancellation. If I was suddenly in charge of the client I'd cancel the thing. I'd salvage what I could and I'd launch a new, smaller initiative to replace the website. But it's too big to fail; even the board knows how much money they are spending. Who's going to walk in there and say: "Scrap it."? Saying "Scrap it" would be to admit one failure and invite a messenger shooting. And if I was the head of the supplier I'd say the same thing. I'd say to my customer: "I know I'm earning oodles of cash out of this, I know it's a high-profile feather in our cap, but really it's out of control, you really shouldn't continue." But of course they won't. Forget the money they'd lose, they weren't hired to answer back – like my tailor friend.

And of course I'm neither of those. I'm just the guy having the nightmare, and in the nightmare I'm the consultant who is trying to fix it. In the nightmare I'm not fixing it, I'm providing cover. While I'm there it's Agile, while it's Agile it's good. Agile is a good drug and I'm the pusher.

"You can't cancel it because all the competitors have one and so we must have one," a ghostly apparition tells me. "We must be best in class," says another apparition. "We must be head-and-shoulders above the opposition," says a third – aren't the opposition seven times the size? And don't the competition buy large parts of their solution off the shelf?

But every time I look, the work seems to grow. Every discussion ends in more stories. Not just stories: epics, super-stories, sub-epics, mezzanine-stories. But it's OK, this is Agile. The business keeps throwing new requests at it, which are just accepted – because they are Agile! Some of these are quite big. But that's OK because the team are Agile. And Agile means the team do what the business want, right?

I watch the Analysts work over the stories in the backlog; as they do, each grows and replicates like an alien parasite. The Analysts find more edge cases, extra detail which needs to be included, more scenarios which need to be catered for. Each becomes a story itself. But that's OK because the team are Agile. And those damn competitors don't stop adding to and improving their site, which means the client must add that too. But that's OK because the team are Agile.

And the points…. points are the new hours. In the dream I have a book, "The Mythical Man Point". The backlog is measured in thousands of points. The burn-down charts go down – but only if you look at the sprint burn-down. Hunt around Jira and you can find a project-wide burn-down. O my god, no….. it's full of stories! This is not a burn-down chart carrying us to a safe landing, it's a fast-climbing interceptor… The backlog is a demon… it's… it's… undead. The faces of those who've seen the chart are prematurely aged.
Open Jira and show someone the chart and…. their hair turns grey, the wrinkles appear, in moments they are….

One man is immune. As the points grow, his power grows. He is… he is… The Product Owner. He introduces himself: "Snape is the name, Severus Snape" – I knew I'd seen him somewhere before. In the planning meeting he sees the poker cards pulled out; he focuses on the developer with the highest score; there is a ray of cutting sarcasm… he withers. The developers submit, the numbers are lowered. The Product Owner chuckles to himself – no over-estimating on his watch! One of the developers suggests, "Maybe we should wait until we finish the current work." Snape sneers: "I thought you were Agile, boy?" "If you can't handle it I have some friends in Transylvania who are really Agile…. do you want to lose the contract, boy? … Off-shore is so, so cheap…"

There is a reality distortion field around the Product Owner. Show him a burn-down chart and it looks good; his presentations to the steering committee always show a perfect burn-down. I'm in my pyjamas standing outside the building at night: a sinister-looking figure is breaking and entering. He sneaks into the building, he opens Jira and… inserts stories! His mask falls, it is…. The Product Owner! Of course – without stories in the backlog he would cease to exist. His power comes from the size of the backlog: more stories, more power. Ever since his boss came down with a rare form of chronic flu, a link in the reporting chain has been missing. Made worse when the next man up was dismissed for inappropriate behaviour in the canteen. Since then the Product Owner reports to the COO, a COO who doesn't really have time for him and only has a shaky understanding of any IT-related topic.

I do the maths. The backlog isn't so much a backlog as a mortgage, and the team are under water! The payments they make against the mortgage aren't even covering the growth in stories. The backlog growth is an interest rate they can't pay. It takes months for stories to progress through the backlog and reach developers. When work finally gets to developers, they too uncover more edge cases, more details, more scenarios, more of just about everything. Why didn't the Analysts find these? Did they find them and then lose them?

Then there is a stream of bugs coming in – oozing through the walls. The technical practices aren't solid, they are… custard! Bugs get caught but more get through! Bugs can't be fixed because "bugs are OpEx and we are funded from CapEx." Someone has slain the Bug Fixing Fairy; her body is found slumped in the corner, a nice young girl straight out of college. They are hiring another fresh young graduate to send to the slaughter – fortunately Bug Fixing Fairies are plug-compatible with one another.

Release dates can't be honoured. Woody Allen and Annie Hall walk in – since when did Woody Allen do horror films? 'Two elderly women are at a Catskill mountain resort, and one of 'em says, "Boy, the food at this place is really terrible." The other one says, "Yeah, I know; and such small portions."'

I have X-ray vision: I can see WIP where it lies; there are piles of it on the floor. It's stacked up like beer barrels in a brewery. But the beer isn't drinkable. It's a fiendish plan. If anyone drinks that beer, if the WIP is shipped, they will discover…. it's full of holes! Quality control is… offshore. Why is there so much WIP lying around? Why is the WIP rising? Because they are Agile, comes the reply… the business can change their mind at any time, and they do.
I'm drowning in WIP. WHIP to the left of me, WHIP to the right of me. The developers are halfway through a piece of work and the team are told to put it to one side and do something else. Nothing gets delivered, everything is half-baked. WHIP – work hopefully in process, that is. When, and IF, the team return to a piece of WHIP things have changed; the team members might have changed, so picking it up isn't easy. WHIP goes off – the stench of slowly rotting software. But that's OK because the team are Agile. Arhhh, the developers are clones, they are plug-compatible, you can swap them in and out as you like… but they have no memory….

It gets worse. The client has cunningly outsourced their network ops to another supplier, and their support desk to another one, and the data centre to yet another… no one contractor has more than one contract. It's a perverse form of WIP limit: no supplier is allowed more than one contract. O my god, I'm flying through the data centre. The data centre supplier has lost control, there are creepers everywhere, each server is patched in a different way, there is a stack of change configuration requests in a dark office. I approach the clerk, it's… it's…. Terry Gilliam. The data centre is in Brazil….

Even when the business doesn't change its mind the development team get stuck. They have dependencies on other teams and on some other sub-contractor. So work gets put to one side again – more WIP. All roads lead to Dounreay in Scotland, a really good place if you want to build something really dangerous, but why does this project require a fast breeder nuclear reactor? But that's OK because the team are Agile.

The supplier is desperate to keep their people busy; if The Product Owner sees a programmer whose fingers are not moving on the keyboard, he turns them to stone. The team manager is desperate to save his people; he rummages in the backlog and finds… a piece of work they can do. (With a backlog that large you can always find something, even if the business value is negative – and there are plenty of those.) You can't blame the development team: they need to look busy, they need to justify themselves, so they work on what they can. But that's OK because the team are Agile.

Get me out of here!!!!!

I'm in my kitchen. My hands are wrapped around a hot chocolate. I need a fresh pair of dry pyjamas but that can wait while I calm down. I've wrapped a blanket around me and have the shivers under control. Are they Agile? Undoubtedly it was sold as Agile. It certainly ain't pretty, but it is called Agile. They have iterations. They have planning meetings. They have burn-downs. They have a Scrum Master and they have Jira. They have User Stories. They have some, slow, automated acceptance tests; some developers are even writing automated unit tests. How could it have gone so wrong?

Sure, the development team could be better. You could boost the supply curve. But that would be like administering morphine. The pain would be relieved for a while but the fever would return, and it would be worse. The real problem is elsewhere. The real problem is rampant demand. The real problem is poor client management. The real problem is a client who isn't looking at the feedback. The real problems are multiple – that's what is so scary about the dream. They are all interconnected. In the wee small hours I see no way of winning. It's a quagmire: to save this project we need to destroy this project. But we all know what happened in Vietnam.

What is to be done?
I can't go back to sleep until I have an answer. Would the team be better off doing Waterfall? The business would still change its mind; project management would put a change request process in place and the propagation delay would be worse. There would probably be more bugs – testing would be postponed. Releases would be held back. This would look better for a few months, until they came to actually test and release. If they did waterfall – if they did a big requirements exercise, a big specification, a big design, a big estimation and a big plan – they might not choose to do it at all. But frankly, Agile is telling them clearly that this will never be done. In fact it's telling them with a lot more certainty, because they are several years in and have several years of data to look at. Agile is the cover. Because they are Agile they are getting more rope to hang themselves with.

But all this is a dream, a horrid dream, none of this ever happened.

Reference: Nightmare on Agile Street from our JCG partner Allan Kelly at the Agile, Lean, Patterns blog.

Devops isn’t killing developers – but it is killing development and developer productivity

Devops isn't killing developers – at least not any developers that I know. But Devops is killing development, or the way that most of us think of how we are supposed to build and deliver software. Agile loaded the gun. Devops is pulling the trigger.

Flow instead of Delivery

A sea change is happening in the way that software is developed and delivered. Large-scale waterfall software development projects gave way to phased delivery and Spiral approaches, and then to smaller teams delivering working code in time boxes using Scrum or other iterative Agile methods. Now people are moving on from Scrum to Kanban, and to One-Piece Continuous Flow with immediate and Continuous Deployment of code to production in Devops.

The scale and focus of development continue to shrink, and so does the time frame for making decisions and getting work done: from phases and milestones and project reviews, to sprints and sprint reviews, to Lean controls over WIP limits and task-level optimization. The size of deliverables shrinks too: from what a project team could deliver in a year, to what a Scrum team could get done in a month or a week, to what an individual developer can get working in production in a couple of days or a couple of hours.

The definition of "Done" and "Working Software" changes from something that is coded and tested and ready to demo to something that is working in production – now ("Done Means Released"). Continuous Delivery and Continuous Deployment replace Continuous Integration. Rapid deployment to production doesn't leave time for manual testing or for manual testers, which means developers are responsible for catching all of the bugs themselves before code gets to production – or doing their testing in production and trying to catch problems as they happen (aka "Monitoring as Testing").

Because Devops brings developers much closer to production, operational risks become more important than project risks, and operational metrics become more important than project metrics. System uptime and cycle time to production replace Earned Value or velocity. The stress of hitting deadlines is replaced by the stress of firefighting in production and being on call.

Devops isn't about delivering a project or even delivering features. It's about minimizing lead time and maximizing the flow of work to production: recognizing and eliminating junk work, delays and hand-offs; improving system reliability and cutting operational costs; building feedback loops from production back to development; standardizing and automating steps as much as possible. It's more manufacturing and process control than engineering.

Devops kills Developer Productivity too

Devops also kills developer productivity. Whether you try to measure developer productivity by LOC or Function Points or Feature Points or Story Points or velocity or some other measure of how much code is written, less coding gets done, because developers are spending more time on ops work and dealing with interruptions, and less time writing code. Time learning about the infrastructure and the platform, understanding how it is set up and making sure that it is set up right. Time building Continuous Delivery and Continuous Deployment pipelines and keeping them running.
Time helping ops to investigate and resolve issues, responding to urgent customer requests and questions, looking into performance problems, monitoring the system to make sure that it is working correctly, helping to run A/B experiments, pushing changes and fixes out… all of this takes time away from development and pre-empts thinking about requirements and designing and coding and testing (the work that developers are trained to do and are good at).

The Impact of Interruptions and Multi-Tasking

You can't protect developers from interruptions and changes in priorities in Devops, even if you use Kanban with strict WIP limits, even in a tightly run shop – and you don't want to. Developers need to be responsive to operations and customers, react to feedback from production, jump on problems and help detect and resolve failures as quickly as possible. This means everyone, especially your most talented developers, needs to be available for ops most if not all of the time.

Developers join ops on call after hours, which means carrying a pager (or being chased by Pager Duty) after the day's work is done. Add to that the time wasted on support calls for problems that end up not being real problems, the long nights and weekends fire fighting, tracking down production issues and helping to recover from failures, and coming in tired the next day to spend more time on incident dry runs, testing failover and roll-forward and roll-back recovery, and participating in post mortems and root cause analysis sessions when something goes wrong and the failover or roll-forward or roll-back doesn't work.

You can't plan for interruptions and operational problems, and you can't plan around them – which means developers will miss their commitments more often. Then why make commitments at all? Why bother planning or estimating? Use just-in-time prioritization instead to focus on the most important thing that ops or the customer needs at the moment, and deliver it as soon as you can – unless something more important comes up and pre-empts it.

As developers take on more ops and support responsibilities, multi-tasking and task switching – and the interruptions and inefficiency that come with them – increase, fracturing time and destroying concentration. This has an immediate drag on productivity, and a longer-term impact on people's ability to think and to solve problems.

Even the Continuous Deployment feedback loop itself is an interruption to a developer's flow. After a developer checks in code, running unit tests in Continuous Integration is supposed to be fast, a few seconds or minutes, so that they can keep moving forward with their work. But deploying immediately to production means running through a more extensive set of integration tests and system tests and other checks in Continuous Delivery (more tests and more checks take more time), then executing the steps through to deployment, then monitoring production to make sure that everything worked correctly, and jumping in if anything goes wrong. Even if most of the steps are automated and optimized, all of this takes extra time and takes the developer's attention away from working on code. Optimizing the flow of work in and out of operations means sacrificing developer flow, and slowing down development work itself.

Expectations and Metrics and Incentives have to Change

In Devops, the way that developers (and ops) work changes, and the way that they need to be managed changes. It's also critical to change expectations and metrics and incentives for developers.
Devops success is measured by operational IT metrics, not by meeting project delivery goals of scope, schedule and cost, not by meeting release goals or sprint commitments, or even meeting product design goals:

- How fast the team can respond to important changes and problems: Change Lead Time and Cycle Time to production, instead of delivery milestones or velocity.
- How often they push changes to production (which is still the metric that most people are most excited about – how many times per day or per hour or minute Etsy or Netflix or Amazon deploy changes).
- How often they make mistakes: the Change/Failure ratio.
- System reliability and uptime: MTBF, and especially MTTD and MTTR.
- Cost of change – and overall operations and support costs.

Devops is more about Ops than Dev

As more software is delivered earlier and more often to production, development turns into maintenance. Project management is replaced by incident management and task management. Planning horizons get much shorter – or planning is replaced by just-in-time queue prioritization and triage.

With Infrastructure as Code, ops become developers: designing and coding infrastructure and infrastructure changes, thinking about reuse and readability and duplication and refactoring, technical debt and testability, and building on TDD to implement TDI (Test Driven Infrastructure). They become more agile and more Agile, making smaller changes more often, spending more time programming and less on paperwork.

And developers start to work more like ops: taking on responsibility for operations and support, putting operational risks first, caring about the infrastructure, building operations tools, finding ways to balance immediate short-term demands for operational support with longer-term design goals.

None of this will be a surprise to anyone who has been working in an online business for a while. Once you deliver a system and customers start using it, priorities change, and everything about the way that you work and plan has to change too.

This way of working isn't necessarily better or worse for developers. But it is fundamentally different from how many developers think and work today. More frenetic and interrupt-driven. At the same time, more disciplined and more Lean. More transparent. More responsibility and accountability. Less about development and more about release and deployment and operations and support.

Developers – and their managers – will need to get used to being part of the bigger picture of running IT, which is about much more than designing apps and writing and delivering code. This might be the future of software development. But not all developers will like it, or be good at it.

Reference: Devops isn't killing developers – but it is killing development and developer productivity from our JCG partner Jim Bird at the Building Real Software blog.

Test Attribute #6 – Maintenance

I always hated the word "maintainability" in the context of tests. Tests, like any other code, are maintainable. Unless there comes a time where we decide we can't take it anymore and the code needs a rewrite, the code is maintainable. We can go and change it, edit or replace it. The same goes for tests: once we've written them, they are maintainable. So why are we talking about maintainable tests?

The trouble with tests is that they are not considered "real" code. They are not production code. Developers starting out on the road to better quality seem to regard tests not just as extra work, but also as second-class work. All activities that are not directed at running code on a production server or a client computer are regarded as "actors in supporting roles". Obviously writing the tests has an associated future cost. It's a cost on supporting work, which is considered less valuable. One of the reasons developers are afraid to start writing tests is the accumulated multiplier effect: "OK, I'm willing to write the tests, which doubles my workload. I know that this code is going to change in the future, and therefore I'll have to do double the work, many times, in the future. Is it worth it?"

Test maintenance IS costly

But not necessarily because of that. The first change we need to make is a mental one: we need to understand that all our activities, including the "supporting" ones, are first-class. That also includes the test modifications in the future: after all, if we're going to change the code to support a requirement, that will require tests for that requirement. The trick is to reduce the effort to a minimum. And we can do that, because some of that future effort is waste that we're creating now. The waste happens when the requirements don't change, but the tests fail – and not because of a bug. We then need to fix the test although there wasn't a real problem. Re-work. Here's a very simple example, taken from the Accuracy attribute post:

[Test]
public void AddTwoPositiveNumbers_GetResult()
{
    PositiveCalculator calculator = new PositiveCalculator();
    Assert.That(calculator.Add(2, 2), Is.EqualTo(4));
}

What happens if we decide to rename PositiveCalculator to Calculator? The test will not compile. We'll need to modify the test in order to pass. Renaming doesn't seem that much trouble, though – we're relying on modern tools to replace the different occurrences. However, this is very dependent on tools and technology. If we did this in C# or in Java, there is not only automation, but also quick feedback mechanisms that catch this, and we don't even think of it as maintaining the tests. Imagine you'd get the compilation error only after 2 hours of compiling, rather than immediately after you've made the change, or only after the automated build cycle. The further we get from automation and quick feedback, the more we tend to look at maintenance as a big monster.

Lowering maintenance costs

The general advice is: "Don't couple your tests to your code". There's a reason I chose this example: tests are always coupled to the code. The level of coupling, and the feedback mechanisms we use, affect how big these "maintenance" tasks are going to be. Here are some tips for lowering the chance of test maintenance (a code sketch of the "minimal asserts" tip follows this list):

- Check outputs, not algorithms. Because tests are coupled to the code, the fewer implementation details the test knows about, the better. Robust tests do not rely on specific method calls inside the code. Instead, they treat the tested system as a black box, even though they may know how it's internally built. These tests, by the way, are also more readable.
- Work against a public interface. Test from the outside and avoid testing internal methods. We want to keep the internal method list (and signatures) inside our black box. If you feel that's unavoidable, consider extracting the internal method to a new public object.
- Use the minimal amount of asserts. Being too specific in our assert criteria, especially when verifying method calls on dependencies, can lead to breaking tests without any benefit. Do we need to know a method was called 5 times, or that it was called at least once? When it was called, do we need to know the exact value of its argument, or does a range suffice? With every layer of specificity, we're adding opportunities for breaking the test. Remember that with failure we want information that helps solve the problem. If we don't gain additional information from these asserts, lower the criteria.
- Use good refactoring tools. And a good IDE. And work with languages that support these. Otherwise, we're delaying the feedback on errors and causing the cost of maintenance to rise.
- Use less mocking. Using mocks is like using X-rays: they are very good at what they do, but over-exposure is bad. Mocks couple the code to the test even more. They allow us to specify the internal implementation of the code in the test. We're now relying on the internal algorithm, which can change – and then our test will need some fixing.
- Avoid hand-written mocks. The hand-written ones are the worst, because unless they are very simple, it is very easy to copy the behavior of the tested code into the mocks. Frameworks encourage setting the behavior through the interface.
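To make the "minimal asserts" tip concrete, here is a small sketch of mine in Java with Mockito (the article's own snippet is C#/NUnit; the Mailer and SignupService types are invented for this illustration):

import static org.mockito.Mockito.*;

import org.junit.Test;

public class AssertScopeTest {

    interface Mailer {
        void send(String message);
    }

    // Illustrative code under test, invented for this sketch
    static class SignupService {
        private final Mailer mailer;
        SignupService(Mailer mailer) { this.mailer = mailer; }
        void signup(String user) { mailer.send("Welcome, " + user + "!"); }
    }

    @Test
    public void notifiesUserOnSignup() {
        Mailer mailer = mock(Mailer.class);

        new SignupService(mailer).signup("alice");

        // Brittle: breaks if the implementation rewords the message or
        // changes the call count, even though the requirement still holds:
        // verify(mailer, times(5)).send("Welcome, alice!");

        // Robust: asserts only what the requirement actually demands
        verify(mailer, atLeastOnce()).send(anyString());
    }
}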
There's a saying: code is a liability, not an asset. Tests are the same – maintenance will not go away completely. But we can lower its cost if we stick to these guidelines.

Reference: Test Attribute #6 – Maintenance from our JCG partner Gil Zilberfeld at the Geek Out of Water blog.

ngImprovedTesting: mock testing for AngularJS made easy

Being able to easily test your application is one of the most powerful features that AngularJS offers. All the services, controllers, filters, even directives you develop can be fully (unit) tested. However, the learning curve for writing (proper) unit tests tends to be quite steep. This is mainly because AngularJS doesn't really offer any high-level APIs to ease unit testing. Instead you are forced to use the same (low-level) services that AngularJS uses internally. That means you have to gain in-depth knowledge about the internals of $controller, when to $digest and how to use $provide in order to mock services. Especially mocking out a dependency of a controller, filter or another service is too cumbersome. This blog will show how you would normally create mocks in AngularJS, why it's troublesome, and finally introduce the new ngImprovedTesting library that makes mock testing much easier.

Sample application

Consider the following application consisting of the "userService" and the "permissionService" (the original snippet named the injected dependency "users"; it is renamed to "userService" here to match the registered service):

var appModule = angular.module('myApp', []);

appModule.factory('userService', function($http) {
    var detailsPerUsername = {};

    $http({method: 'GET', url: '/users'})
        .success(function(users) {
            detailsPerUsername = _.indexBy(users, 'username');
        });

    return {
        getUserDetails: function(userName) {
            return detailsPerUsername[userName];
        }
    };
});

appModule.factory('permissionService', function(userService) {
    return {
        hasAdminAccess: function(username) {
            return userService.getUserDetails(username).admin === true;
        }
    };
});

When it comes to unit testing "permissionService" there are two default strategies:

- using the mock $httpBackend (from the ngMock module) to simulate $http traffic from the "userService"
- using a mock instead of the actual "userService" dependency

Replacing the "userService" with a mock using vanilla AngularJS

Using vanilla AngularJS you have to do all the hard work yourself when you want to create a mock. You have to manually create an object with the relevant fields and methods. Finally you have to register the mock (using $provide) to overwrite the existing service implementation. Using the following vanilla AngularJS we can replace "userService" with a mock in our unit tests:

describe('Vanilla mocked style permissions service specification', function() {

    var userServiceMock;

    beforeEach(module('myApp', function ($provide) {
        userServiceMock = {
            getUserDetails: jasmine.createSpy()
        };

        $provide.value('userService', userServiceMock);
    }));

    // ...

The imperfections of the vanilla style of mocking

The ability to mock services in unit tests is a really great feature of AngularJS, but it's far from perfect. As a developer I really don't want to be bothered with having to manually create a mock object. For instance, I might simply forget to mock the "userService" dependency when testing the "permissionService", meaning I would accidentally test it against the actual "userService". And what if you were to refactor the "userService" and rename its method to "getUserInfo"? Then you would expect the unit test of "permissionService" to fail, right? But it won't, since the mocked "userService" still has the old "getUserDetails" (spy) method. To make things even worse: what if you were to rename the service to "userInfoService"? This would make the "userService" dependency of the "permissionService" no longer resolvable. Due to this modification the application would no longer bootstrap when executed inside a browser. But when executed from the unit test it won't fail, since the test still uses its own mock.
However, other unit tests using the same module but not mocking the service will fail.

How mock testing could be improved

Coming from a Java background, I found the manual creation of mocks quite weird. In static languages the existence of interfaces (and classes) makes it much easier to automatically create mocks. Using AngularJS we could do something similar… what if we used the original service as a template for creating a mocked version? Then we could automatically create mocks that contain the same properties as the original object. Each non-method property could be copied as-is, and each method would instead become a Jasmine spy. Instead of manually registering a mock service using $provide we could automate this. This would also allow us to automatically check that a service you want to mock actually exists, and that the service being mocked is indeed used as a dependency of a component.

Introducing the ngImprovedTesting library

With the intention of making (unit) testing easier, I created the "ngImprovedTesting" library. The just-released 0.1 version supports (selectively) mocking out dependencies of a controller, filter or another service. Mocking out the "userService" dependency when testing the "permissionService" is now extremely easy:

describe('ngImprovedTesting mocked style permissions service specification', function() {

    beforeEach(ModuleBuilder.forModule('myApp')
        .serviceWithMocksFor('permissionService', 'userService')
        .build());

    // ... continues in the next code snippets

Instead of using the traditional "beforeEach(module('myApp'))" we use the ModuleBuilder of "ngImprovedTesting" to build a module specifically for our test. In this case we would like to test the actual "permissionService" in combination with a mock for its "userService" dependency. But what if I would like to set some behavior on the automatically created mock… how do I actually get hold of the mock instance? Well, simple… besides the component being tested, all its dependencies, including the mocked ones, can be injected. To differentiate a mock from a regular service, it is registered with "Mock" appended to its name. So to inject the mocked-out version of "userService", just use "userServiceMock" instead:

describe('hasAdminAccess method', function() {

    it('should return true when user details has property: admin == true', inject(function(permissionService, userServiceMock) {
        userServiceMock.getUserDetails.andReturn({admin: true});

        expect(permissionService.hasAdminAccess('anAdminUser')).toBe(true);
    }));
});

As you can see in the example, the "userServiceMock.getUserDetails" method is just a Jasmine spy. It therefore allows invocation of "andReturn" in order to set the return value of the method. However, it does not allow an "andCallThrough", as the spy is not on the original service.

Exploring the ModuleBuilder API of ngImprovedTesting

Since I didn't get round to writing and generating JSDocs / NGDocs, I will instead quickly explain it here.
The "ModuleBuilder" (in version 0.1) consists of the following instance methods:

- serviceWithMocksFor: registers a service for testing and mocks the specified dependencies
- serviceWithMocks: registers a service for testing and mocks all dependencies
- serviceWithMocksExcept: registers a service for testing and mocks all dependencies except the specified ones
- controllerWithMocksFor: registers a controller for testing and mocks the specified dependencies
- controllerWithMocks: registers a controller for testing and mocks all dependencies
- controllerWithMocksExcept: registers a controller for testing and mocks all dependencies except the specified ones
- controllerAsIs: registers a controller so that it can be instantiated through $controller
- filterWithMocksFor: registers a filter for testing and mocks the specified dependencies
- filterWithMocks: registers a filter for testing and mocks all dependencies
- filterWithMocksExcept: registers a filter for testing and mocks all dependencies except the specified ones
- filterAsIs: registers a filter so that it can be used through $filter

Limitations of the initial (0.1) version of ngImprovedTesting

Although version 0.1 is quite production-ready (and well unit tested), it has its limitations:

- Services registered with the "provider" method currently cannot be used as the to-be-tested service, meaning they cannot be used as the first parameter of "serviceWithMocks…"; they can, however, be used as a (potentially mocked) dependency.
- Services which are registered using "$provide" (i.e. inside a config function of a module) instead of through "angular.Module" cannot be used as the to-be-tested service.
- Mock testing of directives is currently not supported.

How to get started with ngImprovedTesting

All sources from this blog post can be found as part of a sample application: https://github.com/evangalen/ng-improved-testing-sample.git

The sample application demonstrates three different flavors of testing:

- one that uses $httpBackend
- another using vanilla mocking support
- and one using ngImprovedTesting

To execute the tests on the command line, use the following commands (requires NodeJS, NPM, Bower and Grunt to be installed):

npm install
bower update
grunt

The actual sources of ngImprovedTesting itself are also hosted on GitHub:

- https://github.com/evangalen/ng-improved-testing.git: contains the source code of ngImprovedTesting itself.
- https://github.com/evangalen/ng-module-introspector.git: a specifically developed AngularJS module introspector that allows us to retrieve the exact declaration of a controller, filter or service and its dependencies.

Furthermore, ngImprovedTesting is also available through Bower. You can easily install it and add it to an existing project using the following command:

bower install ng-improved-testing --save-dev

Your feedback is more than welcome

My goal for ngImprovedTesting is to ease mock testing in your AngularJS unit tests. I'm very interested in your feedback… is ngImprovedTesting useful… and how could it be improved?

Reference: ngImprovedTesting: mock testing for AngularJS made easy from our JCG partner Emil van Galen at the JDriven blog.