
Template method design pattern in Java

The template method pattern is a behavioral design pattern that provides a base method for an algorithm, called the template method, which defers some of its steps to subclasses. The algorithm's structure stays the same, but some of its steps can be redefined by subclasses according to context.

A template is a preset format, like an HTML template with a fixed layout. Similarly, in the template method pattern we have a preset structure method, called the template method, which consists of steps. These steps can be abstract methods that are implemented by subclasses. So, in short: in the template method pattern there is a template method that defines a set of steps, and the implementation of those steps can be deferred to subclasses. The template method thus defines the algorithm, but the exact steps can be defined in subclasses.

When to use it:

- When you have a preset format or steps for an algorithm, but the implementation of the steps may vary.
- When you want to avoid code duplication, implementing common code in the base class and variations in subclasses.

Structure: In the structure diagram, we have defined a template method with three steps: operation1, operation2 and operation3. Among them, operation1 and operation2 are abstract steps, so they are implemented by ConcreteClass, while operation3 is implemented in the base class. You implement an operation in the base class in two scenarios: first, if it is common to all subclasses, and second, if it is a default implementation of that method. The UML diagram should make this much clearer.

Components:

- AbstractClass: defines the template method, which fixes the structure of the algorithm. It also declares the abstract operations that subclasses implement to define the steps of the algorithm.
- ConcreteClass: implements the abstract operations of the superclass to carry out the subclass-specific steps of the algorithm, and also overrides operations if the default behavior is not required.

Important points about the template method pattern:

- The template method in the superclass follows "the Hollywood principle": "Don't call us, we'll call you". Instead of the subclasses calling methods of the base class, the methods of the subclass are called by the template method in the superclass.
- The template method in the superclass should not be overridden, so make it final.
- Customization hooks: methods containing a default implementation that may be overridden in other classes are called hook methods. Hook methods are intended to be overridden; concrete methods are not. So in this pattern we can provide hook methods. The problem is that it sometimes becomes very hard to differentiate between hook methods and concrete methods; a minimal hook example follows this list.
- Template methods are a technique for code reuse, because with them you can factor out common behavior and defer specific behavior to subclasses.
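To make the hook idea concrete, here is a minimal sketch (my own illustration, not from the original article; the ReportGenerator class and its steps are invented for the example). The template method is final, the mandatory steps are abstract, and the hook has a default implementation that subclasses may override to switch an optional step on:

    abstract public class ReportGenerator {

        // Template method: fixed sequence of steps, not meant to be overridden
        public final void generateReport() {
            collectData();
            formatReport();
            if (wantsSummary()) { // hook controls an optional step
                appendSummary();
            }
        }

        // Mandatory steps, implemented by subclasses
        abstract void collectData();
        abstract void formatReport();

        // Hook method: default implementation, subclasses MAY override
        protected boolean wantsSummary() {
            return false;
        }

        protected void appendSummary() {
            System.out.println("Summary appended");
        }
    }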
Example: Let's take an example. You have to read from two data sources, CSV and database, then process that data and generate output as CSV files. Three steps are involved: read data from the corresponding data source, process the data, and write the output to CSV files.

Java code: The class below contains the template method, parseDataAndGenerateOutput, which consists of the steps for reading data, processing data and writing to a CSV file.

1. DataParser.java

    package org.arpit.javapostsforlearning;

    abstract public class DataParser {

        // Template method: defines a generic structure for parsing data
        public void parseDataAndGenerateOutput() {
            readData();
            processData();
            writeData();
        }

        // These methods will be implemented by subclasses
        abstract void readData();
        abstract void processData();

        // We have to write output to a CSV file, so this step is the same for all subclasses
        public void writeData() {
            System.out.println("Output generated, writing to CSV");
        }
    }

In the class below, the CSV-specific steps are implemented:

2. CSVDataParser.java

    package org.arpit.javapostsforlearning;

    public class CSVDataParser extends DataParser {

        void readData() {
            System.out.println("Reading data from csv file");
        }

        void processData() {
            System.out.println("Looping through loaded csv file");
        }
    }

In the class below, the database-specific steps are implemented:

3. DatabaseDataParser.java

    package org.arpit.javapostsforlearning;

    public class DatabaseDataParser extends DataParser {

        void readData() {
            System.out.println("Reading data from database");
        }

        void processData() {
            System.out.println("Looping through datasets");
        }
    }

4. TemplateMethodMain.java

    package org.arpit.javapostsforlearning;

    public class TemplateMethodMain {

        /**
         * @author arpit mandliya
         */
        public static void main(String[] args) {
            CSVDataParser csvDataParser = new CSVDataParser();
            csvDataParser.parseDataAndGenerateOutput();
            System.out.println("**********************");
            DatabaseDataParser databaseDataParser = new DatabaseDataParser();
            databaseDataParser.parseDataAndGenerateOutput();
        }
    }

Output:

    Reading data from csv file
    Looping through loaded csv file
    Output generated, writing to CSV
    **********************
    Reading data from database
    Looping through datasets
    Output generated, writing to CSV

Used in the Java API:

- All non-abstract methods of java.io.InputStream, java.io.OutputStream, java.io.Reader and java.io.Writer.
- All non-abstract methods of java.util.AbstractList, java.util.AbstractSet and java.util.AbstractMap.
- javax.servlet.http.HttpServlet: all the doXXX() methods by default send an HTTP 405 "Method Not Allowed" error in the response; you're free to implement none or any of them.

Reference: Template method design pattern in Java from our JCG partner Arpit Mandliya at the Java frameworks and design patterns for beginners blog.

Your Password Is No Longer Secret, Part 1

Of course, the title is a trick. Your password is still secret, for now. To be sure that it will remain so, try to answer the following questions for yourself:

- How strong are your passwords?
- How strong should they be in order to prevent other people from revealing them?
- Are your password habits really adequate?

Here, I assume that you are an Internet user with some experience. You don't use simple or common passwords. Your passwords are at least 8 characters long. You mix letters, numbers, and special symbols. You never use the same password for multiple accounts, at least not for important accounts. Still, answering the above questions with certain confidence can be somewhat challenging. Also, the answers that would have been valid just a few years ago no longer hold. Modern computing advancements have invalidated many former assumptions, to the point that the entire concept of passwords can be seen as severely compromised.

In this short blog series, I will be exploring passwords somewhat deeper. In this first post, I will try to help you answer the first two questions above by taking a closer look at password strength, cracking a few passwords on my home PC, and finally putting it all together to come to a definite answer. In the next post, I intend to examine more closely password creation and password habits in general.

Password Vulnerability

There are two factors to consider when determining the vulnerability of a password to various types of attacks:

- the password strength itself, that is, the average number of guesses the attacker must test to crack it, usually measured by its entropy
- the speed with which an attacker can check the validity of each guess, usually measured in password guesses per second (p/s)

While the first factor is under your (the user's) direct control, the second factor is determined entirely by how the password is stored and used, and is therefore beyond your control. Most security systems introduce measures to severely limit the testing speed, usually by imposing a timeout after a small number of failed attempts. In such circumstances, the testing speed rarely exceeds 1000 p/s (usually it is much lower). However, the system must store the passwords in some form, and if this information is stolen, the situation gets much worse.

To reduce the above risk, most systems store only a cryptographic hash of the password instead of the password itself, using hashing algorithms such as MD5 or SHA1. Such hashes are very hard to reverse, so an attacker who gets hold of the hash cannot directly recover the password. However, knowing the hash allows the attacker to test guesses offline, much faster.

Should you, as a user, worry about the above worst-case scenario? Well, it is not very likely for any individual account, but certainly far from impossible. In 2012 alone, hackers got hold of millions of password hashes in security breaches at Yahoo, LinkedIn, eHarmony, and last.fm, among others. I personally had to change three of my passwords during the year due to these incidents. If you, like many Internet users, have accounts at tens or hundreds of Web sites, then the probability that at least one of those hashes will fall into the wrong hands within just the next year is actually quite high. There are also situations in which quick guessing is simply always possible: when the password is used to form a cryptographic key, e.g. for PGP or WPA.
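To make the hashing step concrete, here is a minimal sketch (my own illustration, not from the original post) of producing an unsalted MD5 hash with the standard java.security API; the hex string is what a compromised database would typically leak:

    import java.security.MessageDigest;

    public class HashDemo {
        public static void main(String[] args) throws Exception {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest("Admin#123".getBytes("UTF-8"));
            // Print the hash as hex; an attacker who steals this value
            // can test guesses against it offline at full hardware speed
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) {
                sb.append(String.format("%02x", b));
            }
            System.out.println(sb);
        }
    }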
The conclusion is that since you, as a user, don't have control over how your password is stored and used, you should assume that sooner or later a hacker will be able to attempt to crack it offline. The two factors that influence the outcome are again the password strength, as well as the performance of the computing equipment that the hacker has at his disposal.

Password Strength

According to Wikipedia, password strength is a measure of the effectiveness of a password in resisting guessing and brute-force attacks. It estimates how many trials an attacker would need, on average, to crack it. The strength of a password depends on the following factors:

- length
- complexity, the size of the character set used
- unpredictability, whether it is created randomly or by a more predictable process

Entropy

Password strength is usually measured in bits of entropy. This is simply the base-2 logarithm of the number of guesses needed to find the password with certainty. A password with, let's say, 42 bits of entropy would require 2^42 attempts to try all possibilities during a brute-force search. Note that on average, an attacker needs to try half the possible passwords to find the correct one.

It is fairly easy to calculate the entropy of truly random passwords. If a password of length L is generated randomly from a set of N possible symbols, the number of possible passwords is N^L. Therefore the entropy H is given by the formula:

    H = log2(N^L) = L * log2(N)
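Translated directly into code (a sketch of the formula above, nothing more; the estimators mentioned below use far more elaborate rules):

    public class EntropyCalc {

        // Entropy of a truly random password: H = L * log2(N)
        static double entropy(int length, int charsetSize) {
            return length * (Math.log(charsetSize) / Math.log(2));
        }

        public static void main(String[] args) {
            System.out.println(entropy(9, 95)); // Admin#123 as a random string: ~59.1 bits
            System.out.println(entropy(8, 36)); // abcd1234: ~41.4 bits
        }
    }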
When considering human-generated passwords, which are not truly random, the situation gets much more interesting. To remember passwords more easily, humans tend to use words from a natural language, which has a non-uniform distribution of letters, as well as predictable patterns of capitalisation and of adding numbers or special symbols. Since all of these can be exploited by cracking programs, such passwords have much lower entropy than a truly random password of the same length. To make it worse, it is also much harder to correctly estimate their entropy.

Let's take the password Admin#123 as an example. This password is of length 9 and uses small and capital letters, numbers, and special symbols, in other words the full set of printable ASCII characters, which is of size 95. A cracking program that only takes this into account has to make 95^9, or 630,249,409,724,609,375 total attempts, resulting in an entropy of 59.1 bits. However, this is far from optimal. A smarter cracking program could take advantage of the following facts:

- "admin" is a common 5-letter English word. A program doing a dictionary attack based on common English words of up to 5 letters has to make fewer than 2000 attempts, resulting in an initial entropy of about 11 bits.
- Only the first letter is capitalised, an extremely common pattern. A program that tests just this pattern has to test each word just twice, which adds just 1 bit to the total entropy.
- The numbers and special symbols are added at the end of the word, another common pattern. A program that tests for numbers and special symbols at the beginning or the end of the word has to test each word 11 times. This adds another 3.5 bits.
- There is just one special symbol, and it is one of the 10 symbols in the upper row of the keyboard, above the digits. This adds another 3.3 bits.
- The number pattern "123" is also quite common. A program that tests common number patterns of up to this length, e.g. for equal or consecutive digits, would need to make fewer than 100 attempts per word, which adds another 6.6 bits.

Thus, assuming a fairly smart cracking program, we arrive at an estimate of just about 25.4 bits. There are other, easier-to-use methods for estimating the strength of human-generated passwords, but none of them is really proven in practice:

- Using online strength test calculators such as Rumkin.com. This calculator, which takes only some of the above facts into account, estimates the entropy of Admin#123 as 35.5.
- The NIST Electronic Authentication Guideline proposes a very conservative method, according to which the entropy of Admin#123 is estimated as 25.5, pretty close to the above.

For best results, I would recommend using more than one method and taking the middle value. In our case, we would come to a final estimate of about 30 bits of entropy.

Modern Password Cracking

Unfortunately for users, technological progress in just the last 5 years has introduced tools that are radically better at password cracking than anything known before. Rather surprisingly, this progress happened in a seemingly unrelated area, namely video gaming. In response to the increasing demand for a better 3D gaming experience, nVidia and ATI kept boosting the performance of the graphics processing units (GPUs) in common video cards. Modern GPUs have thousands of processing cores and teraflops of computing performance. In 2007, a method of using these devices for password cracking was invented, and software that implements it soon became commonly available. Password cracking algorithms can be trivially parallelised, which makes GPUs particularly suitable for this task. To make it even worse, parallel computing frameworks such as OpenCL and virtualisation libraries such as VCL allow such algorithms to use a large number of graphics cards in a cluster, with cracking performance scaling nearly linearly with each GPU added.

Security experts have only started to take notice of this threat, and companies have so far largely failed to respond appropriately. We still use the same password creation policies and guidelines as 5 or 10 years ago. How do "strong" 8-character passwords with a maximum entropy of 52 bits (much lower if human-generated) fare against GPU-powered crackers? In short, they suck. In the next sections, we will see how much exactly.

Cracking Software

There are several GPU-powered password cracking programs available, both commercial and free. Here, I would like to mention just two of the free alternatives:

- IGHASHGPU was one of the first such tools, developed in 2009–2010 by Ivan Golubev. It supports a limited number of hashing algorithms and attack modes, and is no longer actively developed or maintained.
- oclHashcat-plus is arguably the best non-commercial password recovery tool available today. It supports a much larger set of hashing algorithms and attack modes, including a rule engine to script your own sophisticated attacks. It is actively developed and maintained and has a growing ecosystem of related tools, many of which are open source.
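To see why this workload parallelises so well, here is a minimal single-threaded sketch of a brute-force MD5 search (my own illustration, nothing like the optimized GPU kernels these tools actually use). Every candidate is independent, so the key space can be split across thousands of cores:

    import java.security.MessageDigest;
    import java.util.Arrays;

    public class BruteForce {

        static final char[] ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789".toCharArray();

        public static void main(String[] args) throws Exception {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] target = md.digest("abc123".getBytes("UTF-8")); // the stolen hash
            // Enumerate all 36^6 candidates of length 6
            char[] guess = new char[6];
            long total = (long) Math.pow(36, 6);
            for (long i = 0; i < total; i++) {
                long n = i;
                for (int pos = 0; pos < 6; pos++) {
                    guess[pos] = ALPHABET[(int) (n % 36)];
                    n /= 36;
                }
                if (Arrays.equals(md.digest(new String(guess).getBytes("UTF-8")), target)) {
                    System.out.println("Found: " + new String(guess));
                    break;
                }
            }
        }
    }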
Cracking Tests on My Home PC

I happen to have a 4-year-old ATI Radeon HD 4870 video card in my home PC. So far it hasn't seen much use in 3D gaming, but lately it experienced a few hours of full load when I ran some password cracking tests on it with IGHASHGPU. The first thing I tried was a simple brute-force search for abc123 hashed as MD5. This ended pretty quickly. Well, abc123 isn't exactly a secure password, and MD5 is a relatively weak algorithm compared to SHA1. So I proceeded with more complex passwords, using both algorithms. The results for MD5 are given in the table below.

    Password          Charset Size  Length  Entropy  Speed (Mp/s)  ETA            Actual Time
    abc123            36            6       31.0     1100          2s             2s
    abcd1234          36            8       41.4     1100          41m            32m 7s
    Admin#123         72            9       55.5     1100          1y 200d        ?
    Admini$>123       95            11      72.3     1100          162000y        ?
    Admini$tr>123     95            13      85.4     1100          Next Big Boom  ?
    Admini$trat0r>12  95            16      105.1    1100          Next Big Boom  ?

In the last two cases, the program actually displayed "Next Big Boom" as the estimated time. I decided not to wait that long. The times for SHA1 were consistently about 3 times longer. The important thing to notice is the speed: on my outdated GPU it is 1100 Mp/s for MD5 and 360 Mp/s for SHA1. This is far better than what could be achieved on the fastest Intel i7 processor currently available.

Cracking Speeds on Modern GPUs

As you would expect, the performance of my 4-year-old video card is nothing special compared to modern equivalents, especially when they are assembled together in large clusters. The following article from December 2012 describes a 25-GPU cluster of modern ATI video cards which reportedly can do a brute-force search on NTLM hashes at a speed of 350 Bp/s. Since NTLM hashes are only slightly weaker than MD5 hashes, this is actually about 300 times faster than my GPU.

The diagram below charts the cracking times in days, per entropy, for my GPU and that monster. With a simple brute-force search, the time doubles with each bit of entropy; to visualize this better, the base-10 logarithm of the time is used. In a single day, my GPU could crack a password of 47 bits of entropy, while the Monster would happily crunch a much more complex password of 56 bits of entropy. The average time to find any 8-character password of 52 bits of entropy on the Monster is just about 2 hours.
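These estimates are straightforward to reproduce: the expected time to crack is half the key space divided by the guessing speed. A quick sketch (my own, using the speeds quoted above):

    public class CrackTime {

        // Average days to crack a password of the given entropy at a given speed (guesses/s);
        // on average half of the 2^H key space must be searched, hence entropyBits - 1
        static double days(double entropyBits, double guessesPerSecond) {
            return Math.pow(2, entropyBits - 1) / guessesPerSecond / 86400;
        }

        public static void main(String[] args) {
            System.out.println(days(47, 1.1e9));  // my HD 4870 at 1100 Mp/s: ~0.74 days
            System.out.println(days(52, 350e9)); // the 25-GPU Monster: ~0.07 days, about 2 hours
        }
    }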
Speed Per $

People who build 25-GPU clusters could also build much larger ones, given sufficient resources. Therefore, instead of looking at the speed of a GPU cluster of any given size, we should rather look at the speed per $ which is achievable today and in the near future. To do this, I first combined the GPU speed estimations for various ATI video cards provided by Ivan Golubev with the comparison of ATI GPUs in Wikipedia. For SHA1, I came to the following picture:

    Card                  Year  Price ($)  Units  Clock  GFLOPS  Speed (Mp/s)  Speed / GFLOPS
    ATI Radeon HD 4770    2008             640    750    960     352           0.37
    ATI Radeon HD 4870x2  2008  560        1600   750    2400    880           0.37
    ATI Radeon HD 5870    2009             1600   850    2720    1360          0.50
    ATI Radeon HD 5970    2009  599        3200   725    4640    2320          0.50
    ATI Radeon HD 6970    2010  369        1536   880    2703    1408          0.52
    ATI Radeon HD 6990    2011  699        3072   830    5100    2656          0.52
    ATI Radeon HD 7950    2012  449        1792   800    2867    1493          0.52
    ATI Radeon HD 7970    2012  549        2048   925    3789    1973          0.52
    ATI Radeon HD 8970    2013  499        2048   1000   4096    ?             ?

The price above is the release price of the card. The GFLOPS value is calculated by multiplying the number of processing units by their clock, then by 2, and finally dividing by 1000. The last value, Speed / GFLOPS, is a factor which represents how well the available GFLOPS can be used for password cracking on a given GPU. It tends to be the same for GPUs of the same family. Over the last 5 years, this number initially increased quickly and then reached a plateau. Based on the above table, one can easily calculate the speed per $ for any given year after 2008. I decided to go further and extend this for the next 5 years as well. The results below are again for SHA1.

    Year  GFLOPS  Price ($)  GFLOPS / $  Speed / GFLOPS  Speed / $
    2008  2400    560        4.29        0.37            1.59
    2009  4640    599        7.75        0.50            3.87
    2010  2703    369        7.33        0.52            3.81
    2011  5100    699        7.30        0.52            3.79
    2012  3789    549        6.90        0.52            3.59
    2013  4096    499        8.21        0.52            4.27
    2014                     8.66        0.53            4.55
    2015                     9.12        0.53            4.83
    2016                     9.58        0.54            5.12
    2017                     10.03       0.54            5.42
    2018                     10.49       0.55            5.72

To come to the above numbers, I have made some arbitrary assumptions:

- The GFLOPS / $ will increase every year at the rate it increased between 2011 and 2013. This is conservative compared to, let's say, the rate between 2008 and 2009.
- The Speed / GFLOPS will increase every year at the rate it increased between 2009 and 2013. This is also somewhat conservative.

The above calculations only take into account the price of the cards themselves, ignoring other cost components such as electricity, other hardware, personnel, etc. Another thing not taken into account is that it would be more effective economically to use older cards with a more favorable price/performance ratio, rather than brand new ones. Finally, I assume that the hashes are properly salted, so cracking has to be done via brute-force search and not some other method such as rainbow tables.

How Strong a Password?

As a user, to estimate how strong your password should be, you should answer the following question: for a given hashing algorithm A, how much entropy is needed for cracking the password in time T to cost X money, today and in the near future? The variables T and X can be chosen differently based on your habits (for example, how often you change your password) and actual needs (for example, the cost of the resources that this particular password protects). Since you normally don't have any knowledge about the hashing algorithm, you should assume a weaker one such as MD5.

The above question can be easily answered using the numbers from the previous section. The diagram below charts the cost in $ against the entropy for a T of half a year, again in logarithmic units, for SHA1 and MD5, today and in 5 years. As you can see, in order for cracking your password to cost anything at all, it should have at least 50 bits of entropy. Also, all functions cross the $1,000,000 mark between 65 and 70 bits. Therefore, passwords of 70 bits or more will cost at least a few million dollars to crack for the next few years, independently of the hashing algorithm used.

Based on the above considerations, I consider passwords of 70 bits of entropy or more to be sufficiently strong for my needs and those of most Internet users for protecting anything of some importance. On the other hand, I consider passwords of 50 bits of entropy or less to be inadequate for this purpose.

Conclusion

Now that you know that a password of 70 bits of entropy is sufficiently strong, protecting your important accounts is a simple matter of creating such passwords for all of them, right? Here is the trick: remembering and typing even a single password of genuine 70 bits of entropy is anything but simple. In fact, most users would find it quite hard. To give you and myself some time to reflect on this, I would like to take a pause here and explore this topic further in my next post.

Reference: Your Password Is No Longer Secret, Part 1 from our JCG partner Stoyan Rachev at Stoyan Rachev's Blog.

Facebook Hacker Cup: Studious Student Problem Solution in Java

This program is a solution to the Studious Student problem from the Facebook Hacker Cup. The problem can be found here: link.

The problem: Studious Student

You've been given a list of words to study and memorize. Being a diligent student of language and the arts, you've decided to not study them at all and instead make up pointless games based on them. One game you've come up with is to see how you can concatenate the words to generate the lexicographically lowest possible string.

Input

As input for playing this game you will receive a text file containing an integer N, the number of word sets you need to play your game against. This will be followed by N word sets, each starting with an integer M, the number of words in the set, followed by M words. All tokens in the input will be separated by some whitespace and, aside from N and M, will consist entirely of lowercase letters.

Output

Your submission should contain the lexicographically shortest strings for each corresponding word set, one per line and in order.

Constraints

1 <= N <= 100
1 <= M <= 9
1 <= all word lengths <= 10

Example input

    5
    6 facebook hacker cup for studious students
    5 k duz q rc lvraw
    5 mybea zdr yubx xe dyroiy
    5 jibw ji jp bw jibw
    5 uiuy hopji li j dcyi

Example output

    cupfacebookforhackerstudentsstudious
    duzklvrawqrc
    dyroiymybeaxeyubxzdr
    bwjibwjibwjijp
    dcyihopjijliuiuy

The solution:

    import java.io.*;
    import java.util.Arrays;

    public class StudiousStudent {

        StudiousStudent(String inputFile) throws IOException, FileNotFoundException {
            FileInputStream fis = new FileInputStream(inputFile);
            DataInputStream in = new DataInputStream(fis);
            BufferedReader br = new BufferedReader(new InputStreamReader(in));
            String line = null;
            String splitArray[] = null;
            // Reading the file line by line
            while ((line = br.readLine()) != null) {
                // Splitting a line on spaces; splitArray[0] is the leading count M,
                // which sorts before the lowercase words and is skipped when printing
                splitArray = line.split(" ");

                // Initial sort
                Arrays.sort(splitArray);

                // Advanced sort: when a longer word starts with an earlier word
                // (e.g. "ji" and "jibw"), swap them so the longer word comes first
                for (int i = 1; i < splitArray.length; i++) {
                    for (int j = i + 1; j < splitArray.length; j++) {
                        if ((splitArray[j].startsWith(splitArray[i]))
                                && (splitArray[i].length() < splitArray[j].length())) {
                            String tmp = splitArray[i];
                            splitArray[i] = splitArray[j];
                            splitArray[j] = tmp;
                        }
                    }
                }

                for (int i = 1; i < splitArray.length; i++)
                    System.out.print(splitArray[i]);
                System.out.println();
            }
            br.close();
        }

        public static void main(String args[]) throws FileNotFoundException, IOException {
            new StudiousStudent("StudiousStudent.txt");
        }
    }
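The prefix-swapping pass above handles the sample cases, but the standard way to obtain the lexicographically smallest concatenation (my own addition, not part of the original solution) is to sort with a comparator that compares the two possible concatenations of each pair, which handles all prefix cases correctly:

    import java.util.Arrays;
    import java.util.Comparator;

    public class SmallestConcatenation {

        public static String solve(String[] words) {
            // "a" should come before "b" exactly when a+b < b+a
            Arrays.sort(words, new Comparator<String>() {
                public int compare(String a, String b) {
                    return (a + b).compareTo(b + a);
                }
            });
            StringBuilder sb = new StringBuilder();
            for (String w : words) sb.append(w);
            return sb.toString();
        }

        public static void main(String[] args) {
            // prints bwjibwjibwjijp, matching the expected output above
            System.out.println(solve(new String[]{"jibw", "ji", "jp", "bw", "jibw"}));
        }
    }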
Reference: Facebook Hacker Cup: "Studious Student" Solution in Java from our JCG partner Vishal Lad at the myCoding.net blog.

Spring Integration – Application from scratch, Part 2

This is the second part of the tutorial in which we are creating an invoice processing application using Spring Integration. In case you missed it, be sure to look at the first part. Previously we defined the functional requirements for the system and created the gateway, splitter, filter and router components. Let's continue by creating a transformer.

5. Transforming invoices to payments

We have successfully filtered out the "too expensive" invoices from the system (they might need manual inspection or so). The important thing is that we can now take an invoice and generate a payment from it. First, let's add a Payment class to the banking package:

    package com.vrtoonjava.banking;

    import com.google.common.base.Objects;

    import java.math.BigDecimal;

    public class Payment {

        private final String senderAccount;
        private final String receiverAccount;
        private final BigDecimal dollars;

        public Payment(String senderAccount, String receiverAccount, BigDecimal dollars) {
            this.senderAccount = senderAccount;
            this.receiverAccount = receiverAccount;
            this.dollars = dollars;
        }

        public String getSenderAccount() {
            return senderAccount;
        }

        public String getReceiverAccount() {
            return receiverAccount;
        }

        public BigDecimal getDollars() {
            return dollars;
        }

        @Override
        public String toString() {
            return Objects.toStringHelper(this)
                    .add("senderAccount", senderAccount)
                    .add("receiverAccount", receiverAccount)
                    .add("dollars", dollars)
                    .toString();
        }
    }

Because we will have two ways to create a payment (from local and from foreign invoices), let's define a common contract (interface) for creating payments. Put the interface PaymentCreator in the banking package:

    package com.vrtoonjava.banking;

    import com.vrtoonjava.invoices.Invoice;

    /**
     * Creates payment for bank from the invoice.
     * Real world implementation might do some I/O expensive stuff.
     */
    public interface PaymentCreator {

        Payment createPayment(Invoice invoice) throws PaymentException;
    }

Technically, this is a simple parametrized factory. Note that it throws PaymentException. We'll get to the exception handling later, but here's the code for the simple PaymentException:

    package com.vrtoonjava.banking;

    public class PaymentException extends Exception {

        public PaymentException(String message) {
            super(message);
        }
    }

Now we're good to add the two implementations to the invoices package.
First, let’s create LocalPaymentCreator class: package com.vrtoonjava.invoices;import com.vrtoonjava.banking.Payment; import com.vrtoonjava.banking.PaymentCreator; import com.vrtoonjava.banking.PaymentException; import org.springframework.integration.annotation.Transformer; import org.springframework.stereotype.Component;@Component public class LocalPaymentCreator implements PaymentCreator {// hard coded account value for demo purposes private static final String CURRENT_LOCAL_ACC = 'current-local-acc';@Override @Transformer public Payment createPayment(Invoice invoice) throws PaymentException { if (null == invoice.getAccount()) { throw new PaymentException('Account can not be empty when creating local payment!'); }return new Payment(CURRENT_LOCAL_ACC, invoice.getAccount(), invoice.getDollars()); }} Another creator will be ForeignPaymentCreator with rather straightforward implementation: package com.vrtoonjava.invoices;import com.vrtoonjava.banking.Payment; import com.vrtoonjava.banking.PaymentCreator; import com.vrtoonjava.banking.PaymentException; import org.springframework.integration.annotation.Transformer; import org.springframework.stereotype.Component;@Component public class ForeignPaymentCreator implements PaymentCreator {// hard coded account value for demo purposes private static final String CURRENT_IBAN_ACC = 'current-iban-acc';@Override @Transformer public Payment createPayment(Invoice invoice) throws PaymentException { if (null == invoice.getIban()) { throw new PaymentException('IBAN mustn't be null when creating foreign payment!'); }return new Payment(CURRENT_IBAN_ACC, invoice.getIban(), invoice.getDollars()); }} Interesting part about creators is @Transformer annotation. It’s a similar concept as we’ve used with @Filter annotation – only this time we’re telling to Spring Integration that it should use this method for payload transforming logic. Either way we will use foreign or local transformer, so new message will end in bankingChannel channel. Let’s define these new transformers in our schema file: <int:transformer input-channel='localTransactions' output-channel='bankingChannel' ref='localPaymentCreator' /><int:transformer input-channel='foreignTransactions' output-channel='bankingChannel' ref='foreignPaymentCreator' /><int:channel id = 'bankingChannel'> <int:queue capacity='1000' /> </int:channel> 6. Passing payments to the banking service (Service Activator) Payments are ready and messages containing them are waiting in the bankingChannel. The last step of the flow is to use Service Activator component. The way it works is simple – when a new message appears in a channel, Spring Integration invokes logic specified in a Service Activator component. So when a new payment appears in the bankingChannel, we want to pass it to the banking service. In order to do that we first need to see a contract for the banking service. So put interface BankingService to the banking package (in the real world this would probably reside in some external module): package com.vrtoonjava.banking;/** * Contract for communication with bank. */ public interface BankingService {void pay(Payment payment) throws PaymentException;} Now we will need an actual implementation of the BankingService. Again, it’s highly unlikely that implementation would reside in our project (it would probably be remotely exposed service), but let’s at least create some mock implementation for the tutorial purposes. 
Add the MockBankingService class to the banking package:

    package com.vrtoonjava.banking;

    import org.springframework.stereotype.Service;

    import java.util.Random;

    /**
     * Mock service that simulates some banking behavior.
     * In real world, we might use some web service or a proxy of real service.
     */
    @Service
    public class MockBankingService implements BankingService {

        private final Random rand = new Random();

        @Override
        public void pay(Payment payment) throws PaymentException {
            if (rand.nextDouble() > 0.9) {
                throw new PaymentException("Banking services are offline, try again later!");
            }

            System.out.println("Processing payment " + payment);
        }
    }

The mock implementation fails on some random occasions (~10% of calls). Of course, for better decoupling we're not going to use it directly; instead, we will make our custom component depend on the contract (interface). Let's add the PaymentProcessor class to the invoices package now:

    package com.vrtoonjava.invoices;

    import com.vrtoonjava.banking.BankingService;
    import com.vrtoonjava.banking.Payment;
    import com.vrtoonjava.banking.PaymentException;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.integration.annotation.ServiceActivator;
    import org.springframework.stereotype.Component;

    /**
     * Endpoint that picks Payments from the system and dispatches them to the
     * service provided by bank.
     */
    @Component
    public class PaymentProcessor {

        @Autowired
        BankingService bankingService;

        @ServiceActivator
        public void processPayment(Payment payment) throws PaymentException {
            bankingService.pay(payment);
        }
    }

Again, note the @ServiceActivator annotation. It means that Spring Integration will invoke the corresponding method when the service activator component comes into play. To use the service activator we need to add it to the integration schema:

    <int:service-activator input-channel="bankingChannel" ref="paymentProcessor">
        <int:poller fixed-rate="500" error-channel="failedPaymentsChannel" />
    </int:service-activator>

    <int:channel id="failedPaymentsChannel" />

Note that we're defining the fixed-rate attribute, which means that the activator will be invoked every half second (if there is a message present in the bankingChannel). We're also defining the error-channel attribute, but we'll get to that in a moment.

Error handling

One of the biggest challenges of messaging systems is to properly identify and handle error situations. Spring Integration provides a technique called "error channels", where we can (obviously) send error messages from the system. An error channel is just another channel, and we can take a proper action when an error message appears in it. In real-world applications we would probably go for some retry logic or professional reporting; in our sample tutorial we will just print out the cause of the error. In the previous component (the Service Activator) we specified the error-channel property to refer to the failedPaymentsChannel. When a message arrives in this channel, we will invoke another Service Activator and print out the error.
Here’s the implementation of the FailedPaymentHandler Service Activator: package com.vrtoonjava.invoices;import org.springframework.integration.annotation.ServiceActivator; import org.springframework.stereotype.Component;@Component public class FailedPaymentHandler {@ServiceActivator public void handleFailedPayment(Exception e) { System.out.println('Payment failed: ' + e); // now the system should do something reasonable, like retrying the payment // omitted for the tutorial purposes }} And let’s hook it to the integration schema as usual: <int:service-activator input-channel='failedPaymentsChannel' ref='failedPaymentHandler' /> Running the whole thing We’ll create a job now that will (at fixed rate) send new invoices to the system. It is only a standard Spring bean that utilizes Spring’s @Scheduled annotation. So let’s add a new class – InvoicesJob to the project: package com.vrtoonjava.invoices;import org.springframework.beans.factory.annotation.Autowired; import org.springframework.scheduling.annotation.Scheduled; import org.springframework.stereotype.Component;import java.util.ArrayList; import java.util.Collection; import java.util.List;/** * Job that every n-seconds generates invoices and sends them to the system. * In real world this might be endpoint receiving invoices from another system. */ @Component public class InvoicesJob {private int limit = 10; // default value, configurable@Autowired InvoiceCollectorGateway invoiceCollector;@Autowired InvoiceGenerator invoiceGenerator;@Scheduled(fixedRate = 4000) public void scheduleInvoicesHandling() { Collection<Invoice> invoices = generateInvoices(limit); System.out.println('\n===========> Sending ' + invoices.size() + ' invoices to the system'); invoiceCollector.collectInvoices(invoices); }// configurable from Injector public void setLimit(int limit) { this.limit = limit; }private Collection<Invoice> generateInvoices(int limit) { List<Invoice> invoices = new ArrayList<>(); for (int i = 0; i < limit; i++) { invoices.add(invoiceGenerator.nextInvoice()); }return invoices; }} Job invokes (every 4 seconds) InvoicesGenerator and forwards invoices to the Gateway (first component we read about). To make it work we also need InvoicesGenerator class: package com.vrtoonjava.invoices;import org.springframework.stereotype.Component;import java.math.BigDecimal; import java.util.Random;/** * Utility class for generating invoices. */ @Component public class InvoiceGenerator {private Random rand = new Random();public Invoice nextInvoice() { return new Invoice(rand.nextBoolean() ? iban() : null, address(), account(), dollars()); }private BigDecimal dollars() { return new BigDecimal(1 + rand.nextInt(20_000)); }private String account() { return 'test-account ' + rand.nextInt(1000) + 1000; }private String address() { return 'Test Street ' + rand.nextInt(100) + 1; }private String iban() { return 'test-iban-' + rand.nextInt(1000) + 1000; }} This is only a simple mock facility that’ll allow us to see the system working. In the real world we wouldn’t use any generator but probably some exposed service instead. 
Running the whole thing

We'll now create a job that will (at a fixed rate) send new invoices to the system. It is just a standard Spring bean that utilizes Spring's @Scheduled annotation. So let's add a new class, InvoicesJob, to the project:

    package com.vrtoonjava.invoices;

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.stereotype.Component;

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.List;

    /**
     * Job that every n-seconds generates invoices and sends them to the system.
     * In real world this might be endpoint receiving invoices from another system.
     */
    @Component
    public class InvoicesJob {

        private int limit = 10; // default value, configurable

        @Autowired
        InvoiceCollectorGateway invoiceCollector;

        @Autowired
        InvoiceGenerator invoiceGenerator;

        @Scheduled(fixedRate = 4000)
        public void scheduleInvoicesHandling() {
            Collection<Invoice> invoices = generateInvoices(limit);
            System.out.println("\n===========> Sending " + invoices.size() + " invoices to the system");
            invoiceCollector.collectInvoices(invoices);
        }

        // configurable from Injector
        public void setLimit(int limit) {
            this.limit = limit;
        }

        private Collection<Invoice> generateInvoices(int limit) {
            List<Invoice> invoices = new ArrayList<>();
            for (int i = 0; i < limit; i++) {
                invoices.add(invoiceGenerator.nextInvoice());
            }

            return invoices;
        }
    }

The job invokes (every 4 seconds) the InvoiceGenerator and forwards the invoices to the Gateway (the first component we read about). To make it work we also need the InvoiceGenerator class:

    package com.vrtoonjava.invoices;

    import org.springframework.stereotype.Component;

    import java.math.BigDecimal;
    import java.util.Random;

    /**
     * Utility class for generating invoices.
     */
    @Component
    public class InvoiceGenerator {

        private Random rand = new Random();

        public Invoice nextInvoice() {
            return new Invoice(rand.nextBoolean() ? iban() : null, address(), account(), dollars());
        }

        private BigDecimal dollars() {
            return new BigDecimal(1 + rand.nextInt(20_000));
        }

        private String account() {
            return "test-account " + rand.nextInt(1000) + 1000;
        }

        private String address() {
            return "Test Street " + rand.nextInt(100) + 1;
        }

        private String iban() {
            return "test-iban-" + rand.nextInt(1000) + 1000;
        }
    }

This is only a simple mock facility that will allow us to see the system working. In the real world we wouldn't use a generator, but probably some exposed service instead.

Now, under the resources folder, create a new Spring config file, invoices-context.xml, and declare component scanning and task scheduling support:

    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns:task="http://www.springframework.org/schema/task"
           xmlns:context="http://www.springframework.org/schema/context"
           xsi:schemaLocation="http://www.springframework.org/schema/beans
               http://www.springframework.org/schema/beans/spring-beans.xsd
               http://www.springframework.org/schema/task
               http://www.springframework.org/schema/task/spring-task.xsd
               http://www.springframework.org/schema/context
               http://www.springframework.org/schema/context/spring-context.xsd">

        <import resource="invoices-int-schema.xml" />

        <context:component-scan base-package="com.vrtoonjava.invoices" />
        <context:component-scan base-package="com.vrtoonjava.banking" />

        <task:executor id="executor" pool-size="10" />
        <task:scheduler id="scheduler" pool-size="10" />
        <task:annotation-driven executor="executor" scheduler="scheduler" />

    </beans>

To see the whole thing running we need one last piece: a standard Java main application where we create Spring's ApplicationContext.

    package com.vrtoonjava.invoices;

    import org.springframework.context.support.ClassPathXmlApplicationContext;

    /**
     * Entry point of the application.
     * Creates Spring context, lets Spring to schedule job and use schema.
     */
    public class InvoicesApplication {

        public static void main(String[] args) {
            new ClassPathXmlApplicationContext("/invoices-context.xml");
        }
    }

Simply run mvn clean install from the command line and launch the main method of the InvoicesApplication class. You should see output similar to this:

    ===========> Sending 10 invoices to the system
    Amount of $3441 can be automatically processed by system
    Amount of $17419 can not be automatically processed by system
    Processing payment Payment{senderAccount=current-local-acc, receiverAccount=test-account 1011000, dollars=3441}
    Amount of $18442 can not be automatically processed by system
    Amount of $19572 can not be automatically processed by system
    Amount of $5471 can be automatically processed by system
    Amount of $1663 can be automatically processed by system
    Processing payment Payment{senderAccount=current-iban-acc, receiverAccount=test-iban-2211000, dollars=5471}
    Amount of $13160 can not be automatically processed by system
    Amount of $2213 can be automatically processed by system
    Amount of $1423 can be automatically processed by system
    Processing payment Payment{senderAccount=current-iban-acc, receiverAccount=test-iban-8051000, dollars=1663}
    Amount of $1267 can be automatically processed by system
    Payment failed: org.springframework.integration.MessageHandlingException: com.vrtoonjava.banking.PaymentException: Banking services are offline, try again later!
    Processing payment Payment{senderAccount=current-iban-acc, receiverAccount=test-iban-6141000, dollars=1423}
    Processing payment Payment{senderAccount=current-local-acc, receiverAccount=test-account 6761000, dollars=1267}

Reference: Spring Integration – Application from scratch, Part 2 from our JCG partner Michal Vrtiak at the vrtoonjava blog.

Spring Integration – Application from scratch, Part 1

Before we start

In this tutorial you will learn what Spring Integration is, how to use it, and what kinds of problems it helps to solve. We will build a sample application from scratch and demonstrate some of the core components of Spring Integration. If you're new to Spring, check out another tutorial on Spring written by me – Shall we do some Spring together? Also note that you don't need any special tooling; however, you can get the best experience for building Spring Integration applications either with IntelliJ IDEA or Spring Tool Suite (you can get some fancy-looking diagrams with STS). You can either follow this tutorial step by step and create the application from scratch yourself, or you can go ahead and get the code from github:

DOWNLOAD SOURCES HERE: https://github.com/vrto/spring-integration-invoices

Whichever way you prefer, it's time to get started!

Application for processing invoices – functional description

Imagine that you're working for a company that periodically receives a large number of invoices from various contractors. We are about to build a system that will be able to receive invoices, filter out the relevant ones, create payments (either local or foreign) and send them to some banking service. Even though the system will be rather naive and certainly not enterprise-ready, we will try to build it with good scalability, flexibility and decoupled design in mind.

Before you go on, you must realize one thing: Spring Integration is (not only, but mostly) about messaging. Spring Integration is basically an embedded enterprise service bus that lets you seamlessly connect your business logic to messaging channels. Messages can be handled both programmatically (via the Spring Integration API) and automatically (by the framework itself – a higher level of decoupling). A message is the thing that travels across channels. A message has headers and a payload – in our case the payload is the actual relevant content (domain classes).

The integration diagram illustrates our messaging structure and the core components of the system; let's walk over the important pieces (we will get back to each component in more detail later):

1. Invoices Gateway – this is the place where we will put new invoices so they can enter the messaging layer.
2. Splitter – the system is designed to accept a collection of invoices, but we need to process each invoice individually. More specifically, a message with a payload of Collection type will be split into multiple messages, where each message has an individual invoice as its payload.
3. Filter – our system is designed to automatically process only those invoices that issue less than $10,000.
4. Router – some invoices use IBAN account numbers, and we have two different accounts – one for local transactions and one for foreign transactions. The job of the router component is to send a message carrying an invoice to the correct channel – either for local invoices or for foreign invoices.
5. Transformers – while we accept Invoices into the system, our banking APIs work with other types – Payments. The job of the transformer component is to take a message and transform it into another message according to the provided logic. We want to transform the payload of the original message (an invoice) to a new payload – a payment.
6. Banking Service Activator – after we have processed invoices and generated actual payments, we're ready to talk to the external banking system. We have an exposed service of such a system, and when a message carrying a payment enters the correct (banking) channel, we want to activate some logic – passing the payment to the bank and letting the bank do further processing.

Creating the project

By now you should have a high-level overview of what the system does and how it is structured. Before we start coding you will need an actual Maven project with the required structure and dependencies. If you're familiar with Maven, see the pom.xml file below; if you want to save some time, you're welcome to use a project template I've created for you: download the Maven project template.

    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>

        <groupId>spring-integration-invoices</groupId>
        <artifactId>spring-integration-invoices</artifactId>
        <version>1.0-SNAPSHOT</version>

        <dependencies>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-context</artifactId>
                <version>3.2.1.RELEASE</version>
            </dependency>
            <dependency>
                <groupId>org.springframework.integration</groupId>
                <artifactId>spring-integration-core</artifactId>
                <version>2.2.1.RELEASE</version>
            </dependency>

            <dependency>
                <groupId>log4j</groupId>
                <artifactId>log4j</artifactId>
                <version>1.2.16</version>
            </dependency>
            <dependency>
                <groupId>com.google.guava</groupId>
                <artifactId>guava</artifactId>
                <version>13.0.1</version>
            </dependency>
            <dependency>
                <groupId>org.testng</groupId>
                <artifactId>testng</artifactId>
                <version>6.5.2</version>
            </dependency>
        </dependencies>

        <build>
            <plugins>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-compiler-plugin</artifactId>
                    <configuration>
                        <source>1.7</source>
                        <target>1.7</target>
                    </configuration>
                </plugin>
            </plugins>
        </build>

    </project>

Let's now walk over the six major components of the system in more detail and get our hands on the actual code.

1. Invoices Gateway

First, let's see the code for Invoice, which will be one of the core classes in our system. I will be using com.vrtoonjava as the root package, with invoices and banking as sub-packages:

    package com.vrtoonjava.invoices;

    import com.google.common.base.Objects;

    import java.math.BigDecimal;

    public class Invoice {

        private final String iban;
        private final String address;
        private final String account;
        private final BigDecimal dollars;

        public Invoice(String iban, String address, String account, BigDecimal dollars) {
            this.iban = iban;
            this.address = address;
            this.account = account;
            this.dollars = dollars;
        }

        public boolean isForeign() {
            return null != iban && !iban.isEmpty();
        }

        public String getAddress() {
            return address;
        }

        public String getAccount() {
            return account;
        }

        public BigDecimal getDollars() {
            return dollars;
        }

        public String getIban() {
            return iban;
        }

        @Override
        public String toString() {
            return Objects.toStringHelper(this)
                    .add("iban", iban)
                    .add("address", address)
                    .add("account", account)
                    .add("dollars", dollars)
                    .toString();
        }
    }

Imagine that we're getting invoices from another system (be it a database, a web service or something else), but we don't want to couple this part to the integration layer. We will use a Gateway component for that purpose.
A Gateway introduces a contract that decouples the client code from the integration layer (Spring Integration dependencies in our case). Let's see the code for InvoiceCollectorGateway:

    package com.vrtoonjava.invoices;

    import java.util.Collection;

    /**
     * Defines a contract that decouples client from the Spring Integration framework.
     */
    public interface InvoiceCollectorGateway {

        void collectInvoices(Collection<Invoice> invoices);
    }

Now, to actually use Spring Integration, we need to create a standard Spring configuration file and use the Spring Integration namespace. To get started, here's the invoices-int-schema.xml file; put it into src/main/resources. Note that we've already defined a logging-channel-adapter, which is a special channel to which we will send messages from the logger. We're also using a wire-tap – you can think of it as a sort of global interceptor that will send logging-related messages to the logger channel.

    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns:int="http://www.springframework.org/schema/integration"
           xsi:schemaLocation="http://www.springframework.org/schema/beans
               http://www.springframework.org/schema/beans/spring-beans.xsd
               http://www.springframework.org/schema/integration
               http://www.springframework.org/schema/integration/spring-integration.xsd">

        <!-- intercept and log every message -->
        <int:logging-channel-adapter id="logger" level="DEBUG" />
        <int:wire-tap channel="logger" />

    </beans>

Let's get back to our gateway now. We've defined a gateway interface – that is the dependency that the client will use. When the client calls the collectInvoices method, the gateway sends a new message (containing a List payload) to the newInvoicesChannel channel. This leaves the client decoupled from the messaging facilities, but lets us place the result on a real messaging channel. To configure the gateway, add the following code to the integration schema config:

    <int:channel id="newInvoicesChannel" />

    <int:gateway id="invoicesGateway" service-interface="com.vrtoonjava.invoices.InvoiceCollectorGateway">
        <int:method name="collectInvoices" request-channel="newInvoicesChannel" />
    </int:gateway>

2. Invoices Splitter

From the Gateway we're sending one big message to the system that contains a collection of invoices – in other words, a Message with a payload of Collection type. As we want to process invoices individually, we will take the result from the newInvoicesChannel and use a splitter component that will create multiple messages. Each of these new messages will have a payload of Invoice type. We will then place the messages on a new channel – singleInvoicesChannel. We will use the default splitter that Spring Integration provides (by default Spring Integration uses DefaultMessageSplitter, which does exactly what we want). This is how we define the splitter:

    <int:splitter input-channel="newInvoicesChannel" output-channel="singleInvoicesChannel" />

    <int:channel id="singleInvoicesChannel" />

3. Filtering some invoices

A business use case of our system requires us to automatically process only those invoices that issue us less than $10,000. For this purpose we will introduce a Filter component. We will grab messages from the singleInvoicesChannel, apply our filtering logic to them, and then write the matched results to a new filteredInvoicesChannel channel. First, let's create a standard Java class that will contain the filtering logic for a single invoice.
Note that we use the @Component annotation (which makes it a standard Spring bean) and we annotate the filtering method with the @Filter annotation – that tells Spring Integration to use this method for the filtering logic:

    package com.vrtoonjava.invoices;

    import org.springframework.integration.annotation.Filter;
    import org.springframework.stereotype.Component;

    @Component
    public class InvoiceFilter {

        public static final int LOW_ENOUGH_THRESHOLD = 10_000;

        @Filter
        public boolean accept(Invoice invoice) {
            boolean lowEnough = invoice.getDollars().intValue() < LOW_ENOUGH_THRESHOLD;
            System.out.println("Amount of $" + invoice.getDollars()
                    + (lowEnough ? " can" : " can not") + " be automatically processed by system");

            return lowEnough;
        }
    }

Note that this is a standard POJO, so we can easily unit test it! As I said before, Spring Integration doesn't tightly couple us to its messaging facilities. For the sake of brevity I am not pasting the unit tests in this tutorial – if you're interested, go ahead and download the github project and see the tests for yourself.
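Such a test might look like the following sketch (my own illustration – the actual tests live in the github project; TestNG is already on the classpath via the pom.xml above):

    package com.vrtoonjava.invoices;

    import org.testng.annotations.Test;

    import java.math.BigDecimal;

    import static org.testng.Assert.assertFalse;
    import static org.testng.Assert.assertTrue;

    public class InvoiceFilterTest {

        private final InvoiceFilter filter = new InvoiceFilter();

        @Test
        public void acceptsInvoiceBelowThreshold() {
            assertTrue(filter.accept(new Invoice(null, "Test Street 1", "acc-1", new BigDecimal(9999))));
        }

        @Test
        public void rejectsInvoiceAtOrAboveThreshold() {
            assertFalse(filter.accept(new Invoice(null, "Test Street 1", "acc-1", new BigDecimal(10000))));
        }
    }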
Let's specify the input/output channels for the messaging layer and hook the filter in. Add the following code to the integration schema config:

    <int:filter input-channel="singleInvoicesChannel" output-channel="filteredInvoicesChannel"
        ref="invoiceFilter" />

    <int:channel id="filteredInvoicesChannel" />

4. Routing invoices

So far we've split the invoices and filtered some out. Now it's time to inspect the contents of each invoice more closely and decide whether it is an invoice issued from the current country (local) or from another country (foreign). We could take the same approach as before and use a custom class for the routing logic. Instead, for the sake of demonstration, we will take another approach: we will put the Spring Expression Language (SpEL) to use and handle the routing completely declaratively. Remember the isForeign method on the Invoice class? We can invoke it directly with SpEL in the router declaration (using the selector-expression attribute)! The router will take a look at the payload, evaluate whether it's a foreign or a local invoice, and forward it to the corresponding channel:

    <int:recipient-list-router input-channel="filteredInvoicesChannel">
        <int:recipient channel="foreignTransactions" selector-expression="payload.foreign" />
        <int:recipient channel="localTransactions" selector-expression="!payload.foreign" />
    </int:recipient-list-router>

    <int:channel id="foreignTransactions" />
    <int:channel id="localTransactions" />

We will continue developing this application in the second part of this tutorial.

Reference: Spring Integration – Application from scratch, Part 1 from our JCG partner Michal Vrtiak at the vrtoonjava blog.

Easy application development with Couchbase, Angular and Node.js

A friend of mine wants to build a simple system to capture ideas and votes. Even though you can find many online services that do this, I think it is a good opportunity to show how easy it is to develop a new application using Couchbase and Node.js. So how to start? Some of us will start with the UI, others with the data; in this example I am starting with the model. The basic steps are:

1. Model your documents
2. Create views
3. Create services
4. Create the UI
5. Improve your application by iteration

The sources of this sample application are available on Github: https://github.com/tgrall/couchbase-node-ideas

Use the following command to clone the project locally:

    git clone https://github.com/tgrall/couchbase-node-ideas.git

Note: my goal is not to provide a complete application, but to describe the key steps of developing one.

Model your documents

For this application you need 3 types of documents:

- Idea: describes the idea, with an author, title and description.
- Vote: the author and a comment. Note that it is a deliberate choice not to put a value in the vote; in this first version, if the vote exists it means the user likes the idea.
- User: contains all the information about the user (not used in this first version of the application).

You could argue that it is possible to put the votes as a list of elements inside the idea document. In this case I prefer to use different documents and reference the idea from the vote, since we do not know how many votes/comments there will be. Using different documents is also interesting here for the following reasons:

- No "concurrent" access: when a user wants to vote, he does not change the idea document itself, so there is no need to put optimistic locking in place.
- The size of each document is smaller and easier to cache in memory.

So the documents will look like:

    {
      "type" : "idea",
      "id" : "idea:4324",
      "title" : "Free beer during bug hunt",
      "description" : "It will be great to have free beer during our test campaign!",
      "user_id" : "user:234"
    }

    {
      "type" : "user",
      "id" : "user:434",
      "name" : "John Doe",
      "email" : "jdoe@myideas.com"
    }

    {
      "type" : "vote",
      "id" : "vote:usr:434-idea:4324",
      "idea_id" : "idea:4324",
      "user_id" : "user:434",
      "comment" : "This is a great idea, beer is excellent to find bugs!"
    }

What I really like is the fact that I can quickly create a small dataset to validate that the model is correct and have it help me design the views. The way I do it: I start my server, launch the Couchbase Administration Console, create a bucket, and finally insert documents manually to validate the model and views.

Create views

Now that I have created some documents, I can think about the way I want to get the information out of the database. For this application I need:

- the list of ideas
- the votes by idea

The list-of-ideas view is very simple in this first version; we just need to emit the title:

    function (doc, meta) {
      if (doc.type == "idea") {
        emit(doc.title);
      }
    }

For the votes by idea, I chose to create a collated view; this will give me some interesting options when I expose it in the API/view layer. Note that it is a deliberate choice not to put a value in the vote itself: for this view I am using the sum() reduce function to be sure I capture the number of votes.

    function (doc, meta) {
      switch (doc.type) {
        case "idea":
          emit([meta.id, 0, doc.title], 0);
          break;
        case "vote":
          emit([doc.idea_id, 1], 1);
          break;
      }
    }

I have my documents, and I have views that allow me to retrieve the list of ideas, the number of votes by idea, and count the votes. So I am ready to expose all this information to the application using a simple API layer.
Create Services

Lately I have been playing a lot with Node.js, partly because it is nice to learn new stuff, and partly because it is really easy to use with Couchbase. Think about it: Couchbase loves JSON, and Node.js speaks JSON natively, which means I do not have any marshalling/unmarshalling to do. My API layer is quite simple; I just need to create a set of REST endpoints to deal with:

- CRUD operations on each type of document
- Listing the different documents

The code of the services is available in the branch 01-simple-services. You can run the application with the simple services using the following commands:

> git checkout -f 01-simple-services
> node app.js

and then point your browser at http://127.0.0.1:3000

About the project

For this project I am using only two Node modules, Express and Couchbase. The package.json file looks like:

{
  "name": "couchbase-ideas-management",
  "version": "0.0.1",
  "private": true,
  "dependencies": {
    "express": "3.x",
    "couchbase": "0.0.11"
  }
}

After running the install, let's code the new API interface. As said before, I am using an iterative approach, so for now I am not dealing with security; I just want to get the basic actions to work. I am starting with the endpoints to get and set documents, creating generic endpoints that take the type as a URI parameter, allowing the application to do a GET/POST on /api/idea, /api/vote and /api/user. The following code captures this:

// get document
app.get('/api/:type/:id', function(req, res) {
  var type = req.params.type;
  if (type == 'idea' || type == 'vote' || type == 'user') {
    get(req, res, type);
  } else {
    res.send(400);
  }
});

// create new document
app.post('/api/:type', function(req, res) {
  var type = req.params.type;
  if (type == 'idea' || type == 'vote' || type == 'user') {
    upsert(req, res, type);
  } else {
    res.send(400);
  }
});

In each case I start by testing whether the URI contains one of the supported types (idea, vote, user), and if so I call the get() or upsert() method that does the call to Couchbase. The get() and upsert() methods use more or less the same approach: test whether the document exists, check that the type is correct, and do the operation against Couchbase. Let's focus on the upsert() method. I call it upsert() since the same operation is used to create and update a document.

function upsert(req, res, docType) {
  // check if the body contains a known type, if not send an error
  if (req.body != null && req.body.type == docType) {
    var id = req.body.id;
    if (id == null) {
      // increment the sequence and save the doc
      cb.incr("counter:" + req.body.type, function(err, value, meta) {
        id = req.body.type + ":" + value;
        req.body.id = id;
        cb.add(id, req.body, function(err, meta) {
          res.send(200);
        });
      });
    } else {
      cb.replace(id, req.body, function(err, meta) {
        res.send(200);
      });
    }
  } else {
    res.send(403);
  }
}

In this function I start by testing whether the document contains a type and whether the type is the one expected (line 3). Then I check whether a document id is present, to decide whether I need to create the document or not. If I have to create a new document, I have to generate a new id. I chose to maintain a counter for each type; this is why I call the incr function (line 7) and then use the returned value to build the key of the new document (line 10). If the ID is present, I just call the replace operation to save the document (line 15).

Note: as you can see, my documents contain the ID as part of their attributes, and this ID has the same value as the key used to store the document. It is not necessarily good practice to duplicate this information, and in many cases the application uses only the document key itself, but I personally like to put the ID in the document too, because it greatly simplifies development.
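As a quick sanity check, the endpoints can be exercised with curl once the application is running. The payload below is just an illustration following the document model described earlier, and the generated id depends on the current counter value:

> curl -X POST http://127.0.0.1:3000/api/idea \
    -H "Content-Type: application/json" \
    -d '{"type":"idea","title":"Free beer during bug hunt","user_id":"user:234"}'
> curl http://127.0.0.1:3000/api/idea/idea:0

The first call goes through the incr/add branch and stores the document under a generated key such as idea:0; the second one retrieves it by id.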
The delete operation is equivalent to the get, using the HTTP DELETE verb. So now I can get, insert and update documents. I still need to do some work to deal with the lists, and as you can guess, this is where I call the views. I won't go into the detail of the simple list of ideas; let's focus on the view that shows the results of the votes.

app.get('/api/results/:id?', function(req, res) {
  var queryParams = {
    stale: false,
    group_level: 3
  };
  if (req.params.id != null) {
    queryParams.startkey = [req.params.id, 0];
    queryParams.endkey = [req.params.id, 2];
  }
  cb.view("ideas", "votes_by_idea", queryParams, function(err, view) {
    var result = new Array();
    var idx = -1;
    var currentKey = null;
    for (var i = 0; i < view.length; i++) {
      var key = view[i].key[0];
      if (currentKey == null || currentKey != key) {
        idx = idx + 1;
        currentKey = key;
        result[idx] = { id: key, title: view[i].key[2], value: 0 };
      } else {
        result[idx].value = view[i].value;
      }
    }
    res.send(result);
  });
});

For this part of the application I use a small trick based on the collated view. The /api/results/ call returns the list of ideas with their titles and the total number of votes for each. The result looks like the following:

[ { "id": "idea:0", "title": "Add new electric company cars", "value": 0 },
  { "id": "idea:1", "title": "Develop new blog on Jekyll", "value": 3 },
  { "id": "idea:2", "title": "Bring your own device project", "value": 1 },
  { "id": "idea:3", "title": "Test the new Raspberry Pi", "value": 1 } ]

Note that it is also possible to select a single idea: you just need to pass its id in the call. If you look at the function in more detail, not only do I call the view, I also build an array in which I first put the idea id and title, then on the next loop iteration I add the idea's number of votes. This is possible because the view collates each idea with its votes.

I now have my REST services, including advanced query capabilities. It is time to use these services and build the user interface.

Create the UI

For the view layer I am using AngularJS, which I am packaging in the same Node.js application for simplicity's sake.

Simple UI without login/security

The code of the application without login is available in the branch 02-simple-ui-no-login. You can run it using the following commands:

> git checkout -f 02-simple-ui-no-login
> node app.js

The application is based on AngularJS and Twitter Bootstrap, using basic AngularJS features and packaging:

- /public/js/app.js contains the module declaration and all the routes to the different views/controllers.
- /public/js/controllers.js contains all the controllers. I will show some of them below; basically this is where I call the services created above.
- /views/partials/ contains the different pages/screens used by the application.

Because the application is quite simple, I have not packaged any directives or other functions. This is true for both the AngularJS and Node.js parts.

Dummy user management

In this first version of the UI I have not yet integrated any login/security, so I fake the user login using a global scope variable, $scope.user, that you can see in the controller AppCtrl().
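A minimal sketch of what that controller can look like; this is an illustration, not the exact code from the repository:

function AppCtrl($scope) {
  // fake 'logged-in' user, shared with the child controllers through the
  // scope hierarchy; replaced by real authentication in a later iteration
  $scope.user = null;
}
AppCtrl.$inject = ['$scope'];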
Since I have not yet implemented login/security, I have added a textfield at the bottom of the page where you can enter a 'dummy' username to test the application. This field is inserted in the /views/index.html page.

List views and number of votes

The home page of the application contains the list of ideas and the number of votes. Look at the EntriesListCtrl controller and the /views/index.html file. As you can guess, this is based on the Couchbase collated view that returns the list of ideas and the number of votes.

Create/edit an idea

When the user clicks on the New link in the navigation, the application loads the view /views/partials/idea-form.html. This form is reached using the '/#/idea/new' URL. Just look at the IdeaFormCtrl controller to see what is happening:

function IdeaFormCtrl($rootScope, $scope, $routeParams, $http, $location) {
  $scope.idea = null;
  if ($routeParams.id) {
    $http({method: 'GET', url: '/api/idea/' + $routeParams.id})
      .success(function(data, status, headers, config) {
        $scope.idea = data;
      });
  }
  $scope.save = function() {
    $scope.idea.type = "idea"; // set the type
    $scope.idea.user_id = $scope.user;
    $http.post('/api/idea', $scope.idea).success(function(data) {
      $location.path('/');
    });
  }
  $scope.cancel = function() {
    $location.path('/');
  }
}
IdeaFormCtrl.$inject = ['$rootScope', '$scope', '$routeParams', '$http', '$location'];

First of all I test whether the controller is called with an idea identifier in the URL ($routeParams.id, line 3). If the ID is present, I call the REST API to get the idea and set it into the $scope.idea variable. Then on line 9 you can see the $scope.save() function that calls the REST API to save/update the idea in Couchbase. Lines 10 and 11 set the type and the user on the idea.

Note: it is interesting to look at these lines. By adding the two attributes (user & type) I modify the 'schema' of my data: I am adding new fields to my document that will be stored as-is in Couchbase. Once again, you see here that I drive the data type from my application. I could take another approach and force the type in the service layer; for this example I chose to put that in the application layer, which is expected to send the proper data types.

Other interactions

The same approach is used to create a vote associated with a user/idea, as you can see in the VoteFormCtrl controller. I won't go into the details of all the operations; I invite you to look at the code of the application, and feel free to add a comment to this blog post if other parts need clarifying.

Iterative development: adding a value to the vote!

The code for this step is available in the branch 03-vote-with-value. You can run it using the following commands:

> git checkout -f 03-vote-with-value
> node app.js

Adding the field in the form

Something that I really like about working with AngularJS, Node and Couchbase is that the developer uses JSON from the database all the way to the browser. So let's implement a new feature where, instead of giving only a comment, the user can attach a rating from 1 to 5 to his vote. Doing this is quite easy; the steps are:

- Modify the UI: add a new field
- Modify the Couchbase view to use the new field

That is it! AngularJS deals with the binding of the new field, so I just need to edit /views/partials/idea-form.html to add it. For this I need to add the list of rating values to the controller and expose it in a select box in the form.
The list of values is located in the $scope.ratings variable:

$scope.ratings = [
  { "id": "0", "label": "0 - No Interest" },
  { "id": "1", "label": "1 - Low Interest" },
  { "id": "2", "label": "2 - Medium" },
  { "id": "3", "label": "3 - Good" },
  { "id": "4", "label": "4 - Outstanding" },
  { "id": "5", "label": "5 - Must be done. Now!" }
];

Once this is done, you can add a select box to your view using the following code:

<div class="control-group">
  <label class="control-label">Rate</label>
  <div class="controls">
    <select required ng-model="vote.rating"
            ng-options="value.id as value.label for value in ratings">
    </select>
  </div>
</div>

To add the select box to the form, I just use AngularJS features:

- the list of values defined in my controller, through the ng-options attribute
- the binding to the vote.rating field, through the ng-model attribute

I add the field to my form, I bind it to my JavaScript object, and... nothing else! Since my REST API simply consumes the JSON object as-is, AngularJS sends the vote object with the new attribute.

Update the view to use the rating

Now that my database holds a new attribute on the vote, I need to update my view to use it in the sum function. (I could calculate an average too, but here I want the sum of all the votes/ratings.)

function (doc, meta) {
  switch (doc.type) {
    case "idea" :
      emit([meta.id, 0, doc.title], 0);
      break;
    case "vote" :
      emit([doc.idea_id, 1], (doc.rating) ? doc.rating : 2);
      break;
  }
}

The only line that I have changed is line 7. The logic is simple: if the rating is present I emit it, if not I emit a 2, which counts as a medium rating for the idea. This small tip lets me keep a working view/system without having to update any existing documents.

I'll stop here for now, and will add new features later, such as user authentication and user management using, for example, Passport.

Version and upgrade management

If you looked closely at the code of the application, the views are automatically imported from the app.js file when the application starts. I have added a small function that checks the currently installed version and updates the views when needed. Look at the function initApplication(); it does the following (a simplified sketch follows the list):

- Load the version number from Couchbase (the document with ID 'app.version').
- Check whether this version is different from the one the code expects.
- If so, update/create the views. (I am doing it in production mode here; in a real application it is better to use dev mode, just prefix the design document ID with 'dev_'.)
- Once the views are created, update/create the 'app.version' document with the new version.
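In outline, the function looks something like this. This is a simplified sketch, not the exact code from app.js; the design-document update itself is elided because it depends on the client API version, and the callback signatures follow the same style as the snippets above:

var APP_VERSION = 2; // the version this build of the application expects

function initApplication() {
  cb.get("app.version", function(err, doc, meta) {
    var installedVersion = doc ? doc.version : 0;
    if (installedVersion != APP_VERSION) {
      // create/update the design document containing the views here
      // (in a real application, publish it under a 'dev_' prefixed ID first)
      cb.set("app.version", { version: APP_VERSION }, function(err, meta) {
        console.log("Views updated to version " + APP_VERSION);
      });
    }
  });
}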
Conclusion

In this article we have seen how you can quickly develop an application or prototype and leverage the flexibility NoSQL gives developers. The steps to do this are:

- Design your document model and (REST) API.
- Create the UI that consumes the API.
- Modify your model by simply adding fields from the UI.
- Update the views to adapt your lists to your new model.

In addition, I have also quickly explained how you can control the version of your application from your code and deploy new views (and other things) automatically.

Reference: Easy application development with Couchbase, Angular and Node.js from our JCG partner Tugdual Grall at the Tug's Blog blog.

JAXB and java.util.Map

Is it ironic that it can be difficult to map the java.util.Map class in JAXB (JSR-222)? In this post I will cover some items that will make it much easier.

Java Model

Below is the Java model that we will use for this example.

Customer

The Customer class has a property of type Map<String, Address>. I chose this Map specifically since the value is a domain object.

package blog.map;

import java.util.*;
import javax.xml.bind.annotation.*;

@XmlRootElement
public class Customer {

    private Map<String, Address> addressMap = new HashMap<String, Address>();

    public Map<String, Address> getAddressMap() {
        return addressMap;
    }

    public void setAddressMap(Map<String, Address> addressMap) {
        this.addressMap = addressMap;
    }

}

Address

The Address class is just a typical POJO.

package blog.map;

public class Address {

    private String street;

    public String getStreet() {
        return street;
    }

    public void setStreet(String street) {
        this.street = street;
    }

}

Demo Code

In the demo code below we will create an instance of Customer and populate its Map property. Then we will marshal it to XML.

package blog.map;

import javax.xml.bind.*;

public class Demo {

    public static void main(String[] args) throws Exception {
        JAXBContext jc = JAXBContext.newInstance(Customer.class);

        Address billingAddress = new Address();
        billingAddress.setStreet("1 A Street");

        Address shippingAddress = new Address();
        shippingAddress.setStreet("2 B Road");

        Customer customer = new Customer();
        customer.getAddressMap().put("billing", billingAddress);
        customer.getAddressMap().put("shipping", shippingAddress);

        Marshaller marshaller = jc.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
        marshaller.marshal(customer, System.out);
    }

}

Use Case #1 – Default Representation

Below is a sample of the XML corresponding to our domain model. We see that each item in the Map has key and value elements wrapped in an entry element.

<?xml version="1.0" encoding="UTF-8"?>
<customer>
    <addressMap>
        <entry>
            <key>shipping</key>
            <value>
                <street>2 B Road</street>
            </value>
        </entry>
        <entry>
            <key>billing</key>
            <value>
                <street>1 A Street</street>
            </value>
        </entry>
    </addressMap>
</customer>

Use Case #2 – Rename the Element

The JAXB reference implementation uses the @XmlElementWrapper annotation to rename the element corresponding to a Map property (we've added this support to MOXy in EclipseLink 2.4.2 and 2.5.0; in previous versions of MOXy the @XmlElement annotation should be used).

Customer

We will use the @XmlElementWrapper annotation to rename the element corresponding to the addressMap property to addresses.

package blog.map;

import java.util.*;
import javax.xml.bind.annotation.*;

@XmlRootElement
public class Customer {

    private Map<String, Address> addressMap = new HashMap<String, Address>();

    @XmlElementWrapper(name="addresses")
    public Map<String, Address> getAddressMap() {
        return addressMap;
    }

    public void setAddressMap(Map<String, Address> addressMap) {
        this.addressMap = addressMap;
    }

}

Output

Now we see that the addressMap element has been renamed to addresses.

<?xml version="1.0" encoding="UTF-8"?>
<customer>
    <addresses>
        <entry>
            <key>shipping</key>
            <value>
                <street>2 B Road</street>
            </value>
        </entry>
        <entry>
            <key>billing</key>
            <value>
                <street>1 A Street</street>
            </value>
        </entry>
    </addresses>
</customer>
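For completeness, reading the XML back in works the same way; here is a short sketch, assuming the XML produced above was saved to a file named customer.xml (the file name is just an example):

package blog.map;

import java.io.File;
import javax.xml.bind.*;

public class UnmarshalDemo {

    public static void main(String[] args) throws Exception {
        JAXBContext jc = JAXBContext.newInstance(Customer.class);
        Unmarshaller unmarshaller = jc.createUnmarshaller();
        Customer customer = (Customer) unmarshaller.unmarshal(new File("customer.xml"));
        // prints "1 A Street" for the sample data used above
        System.out.println(customer.getAddressMap().get("billing").getStreet());
    }

}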
Use Case #3 – Add Namespace Qualification

In this use case we will examine the impact of applying namespace qualification to a class that has a property of type java.util.Map. There was a MOXy bug related to the namespace qualification of Map properties that has been fixed in EclipseLink 2.4.2 and 2.5.0 (see: http://bugs.eclipse.org/399297).

package-info

We will use the package-level @XmlSchema annotation to specify that all fields/properties belonging to classes in this package should be qualified with the http://www.example.com namespace (see: JAXB & Namespaces).

@XmlSchema(
    namespace="http://www.example.com",
    elementFormDefault=XmlNsForm.QUALIFIED)
package blog.map;

import javax.xml.bind.annotation.*;

Output

We see that the elements corresponding to the Customer and Address classes are namespace qualified, but the elements corresponding to the Map class are not. This is because the Map class is from the java.util package, and therefore the information we specified on the package-level @XmlSchema annotation does not apply.

<?xml version="1.0" encoding="UTF-8"?>
<ns2:customer xmlns:ns2="http://www.example.com">
    <ns2:addresses>
        <entry>
            <key>shipping</key>
            <value>
                <ns2:street>2 B Road</ns2:street>
            </value>
        </entry>
        <entry>
            <key>billing</key>
            <value>
                <ns2:street>1 A Street</ns2:street>
            </value>
        </entry>
    </ns2:addresses>
</ns2:customer>

Use Case #4 – Fix Namespace Qualification with an XmlAdapter

We can use an XmlAdapter to adjust the namespace qualification from the previous use case.

XmlAdapter (MapAdapter)

The XmlAdapter mechanism allows you to convert one class to another for the purpose of affecting the mapping (see: XmlAdapter – JAXB's Secret Weapon). To get the appropriate namespace qualification we will use an XmlAdapter to convert the Map to objects in the package of our domain model.

package blog.map;

import java.util.*;
import javax.xml.bind.annotation.adapters.XmlAdapter;

public class MapAdapter extends XmlAdapter<MapAdapter.AdaptedMap, Map<String, Address>> {

    public static class AdaptedMap {
        public List<Entry> entry = new ArrayList<Entry>();
    }

    public static class Entry {
        public String key;
        public Address value;
    }

    @Override
    public Map<String, Address> unmarshal(AdaptedMap adaptedMap) throws Exception {
        Map<String, Address> map = new HashMap<String, Address>();
        for(Entry entry : adaptedMap.entry) {
            map.put(entry.key, entry.value);
        }
        return map;
    }

    @Override
    public AdaptedMap marshal(Map<String, Address> map) throws Exception {
        AdaptedMap adaptedMap = new AdaptedMap();
        for(Map.Entry<String, Address> mapEntry : map.entrySet()) {
            Entry entry = new Entry();
            entry.key = mapEntry.getKey();
            entry.value = mapEntry.getValue();
            adaptedMap.entry.add(entry);
        }
        return adaptedMap;
    }

}

Customer

The @XmlJavaTypeAdapter annotation is used to specify the XmlAdapter on the Map property. Note that with an XmlAdapter applied we need to change the @XmlElementWrapper annotation to @XmlElement (further evidence that @XmlElement is the annotation meant for naming the element of a Map property).

package blog.map;

import java.util.*;
import javax.xml.bind.annotation.*;
import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;

@XmlRootElement
public class Customer {

    private Map<String, Address> addressMap = new HashMap<String, Address>();

    @XmlJavaTypeAdapter(MapAdapter.class)
    @XmlElement(name="addresses")
    public Map<String, Address> getAddressMap() {
        return addressMap;
    }

    public void setAddressMap(Map<String, Address> addressMap) {
        this.addressMap = addressMap;
    }

}

Output

Now all the elements in the XML output are qualified with the http://www.example.com namespace.
<?xml version="1.0" encoding="UTF-8"?>
<customer xmlns="http://www.example.com">
    <addresses>
        <entry>
            <key>shipping</key>
            <value>
                <street>2 B Road</street>
            </value>
        </entry>
        <entry>
            <key>billing</key>
            <value>
                <street>1 A Street</street>
            </value>
        </entry>
    </addresses>
</customer>

Reference: JAXB and java.util.Map from our JCG partner Blaise Doughan at the Java XML & JSON Binding blog.

Appsec at RSA 2013

This was my second time at the RSA conference on IT security. Like last year, I focused on the appsec track, starting with a half-day mini-course for developers on how to write secure applications, presented by Jim Manico and Eoin Keary representing OWASP. It was a well-attended session: solid, clear guidance from people who really do understand what it takes to write secure code. They explained why relying on pen testing is never going to be enough (your white hat pen tester gets 2 weeks a year to hack your app; the black hats get 52 weeks a year), and covered all of the main problems, including password management (secure storage and forgot-password features), how to protect the app from clickjacking, proper session management, and access control design. They showed code samples (good and bad) and pointed developers to OWASP libraries and Cheat Sheets, as well as other free tools.

We have to solve XSS and SQL Injection

They spent a lot of time on XSS (the most common vulnerability in web apps) and SQL injection (the most dangerous). Keary recommended that a good first step for securing an app is to find and fix all of the SQL injection problems: SQL injection is easy to see and easy to fix (change the code to use prepared statements with bind variables), and getting this done will not only make your app more secure, it also proves your organization's ability to find security problems and fix them successfully.

SQL injection and XSS kept coming up throughout the conference. In a later session, Nick Galbreath looked deeper into SQL injection attacks and what developers can do to detect and block them. By researching thousands of SQL injection attacks, he found that attackers use constructs in SQL that web application developers rarely use: unions, comments, string and character functions, hex number literals and so on. By looking for these constructs in SQL statements you can easily identify that the system is being attacked, and possibly block the attacks. This is the core idea behind database firewalls like Green SQL and DB Networks, both companies that exhibited their solutions at RSA.

On the last day of the conference, Romain Gaucher from Coverity Research asked "Why haven't we stamped out SQL Injection and XSS yet?" He found through a static analysis review of several code bases that while many developers are trying to stop SQL injection by using parameterized queries, it's not possible to do this in all cases: about 15% of SQL code could not be parameterized properly, or at least it wasn't convenient for the developers to come up with a different approach. Gaucher also reinforced how much of a pain in the butt it is trying to protect an app from XSS: "XSS is not a single vulnerability. XSS is a group of vulnerabilities that mostly involve injection of tainted data into various HTML contexts." It's the same problem that Jim Manico explained in the secure development class: in order to prevent XSS you have to understand the context and do context-sensitive encoding, and hope that you don't make a mistake. To help make this problem manageable, in addition to the libraries available from OWASP, Coverity has open sourced a library to protect Java apps from XSS and SQL injection.
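To make Keary's "easy to fix" point concrete, the change is usually mechanical: replace string concatenation with a prepared statement and bind variables. The table and column names below are made up for illustration:

// vulnerable: attacker-controlled input becomes part of the SQL text
String sql = "SELECT * FROM accounts WHERE username = '" + userInput + "'";
ResultSet rs = statement.executeQuery(sql);

// fixed: the driver treats the bound value strictly as data, never as SQL
PreparedStatement ps = connection.prepareStatement(
    "SELECT * FROM accounts WHERE username = ?");
ps.setString(1, userInput);
ResultSet rs2 = ps.executeQuery();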
The Good

While most of the keynotes offered a chance to catch up on email, the Crypto Panel was interesting. Chinese research into crypto is skyrocketing, which could be a good thing. Or not. I was interested to hear Dan Boneh of Stanford talk more about the research that he has done into digital certificate handling and SSL outside of browsers. His team found that in almost all cases, people who try to do SSL certificate validation in their own apps do it wrong.

Katie Moussouris at Microsoft presented an update on ISO standards work for vulnerability handling. ISO 30111 lays out a structured process for investigating, triaging and resolving software security vulnerabilities. There were no surprises in the model; the only surprise is that the industry actually needs an ISO standard for the blindingly obvious, but it should set a good bar for people who don't know where to start.

Jeremiah Grossman explained that there are two sides to the web application security problem. One half is weaknesses in web sites, like SQL injection, lousy password handling and mistakes in access control. The other half is attacks that exploit fundamental problems in browsers: attacks that try to break out of the browser, which browser vendors put a lot of attention into containing through sandboxing and anti-phishing and anti-malware protection, and attacks that stay inside the browser but compromise data there, like XSS and CSRF, which get no attention from browser vendors, so it's up to application developers to deal with them. Grossman also presented some statistics on the state of web application security, using data that White Hat Security collects from its customer base. Recognizing that their customers are representative of more mature organizations that already do regular security testing of their apps, the results are still encouraging: the average number of vulnerabilities per app continues to decline year on year. SQL injection is now the 14th most common vulnerability, found in only 7% of tested apps, although more than 50% of web apps are vulnerable to XSS, for the reasons discussed above.

Gary McGraw from Cigital agreed that as an industry, software is getting better. Defect density is going down (not as fast as it should be, but real progress is being made), but the software security problem isn't going away, because we are writing a lot more code, and more code inevitably means more bugs. He reiterated that we need to stay focused on the fundamentals; we already know what to do, we just have to do it. "The time has come to stop looking for new bugs to add to the list. Just fix the bugs."

Another highlight was the panel on Rugged Devops, which continued a discussion that started at OWASP Appsec late last year and covered pretty much the same ground: how important it is to get developers and operations working together to make software run in production safely, that we need more automation (testing, deployment, monitoring), and how devops provides an opportunity to improve system security in many ways and should be embraced, not resisted, by the IT security community. The ideas are based heavily on what Etsy, Netflix and Twitter have done to build security into their rapid development/deployment practices. I agreed with half of the panel (Nick Galbreath and David Mortman, who have real experience in software security in devops shops) almost all of the time, and disagreed with the other half of the panel most of the rest of the time. There's still too much hype over continuously deploying changes 10 or 100 or 1000 times a day, and over the Chaos Monkey.
Etsy moved to continuous deployment multiple times per day because they couldn't properly manage their release cycles; that doesn't mean that everyone has to do the same thing, or should even try. And you probably do need something like the Chaos Monkey if you're going to trust your business to infrastructure as unreliable as AWS, but again, that's not a choice you have to make. There's a lot more to devops; it's unfortunate that these ideas get so much attention.

The Bad and the Ugly

There was only one low point for me: a panel with John Viega, formerly of McAfee, and Brad Arkin from Adobe, called "Software Security: a Waste of Time". Viega started off playing the devil's advocate, asserting that most people should do nothing for appsec; it's better and cheaper to spend their time and money on writing software that works and deal with security issues later. Arkin disagreed, but unfortunately it wasn't clear from the panel what he felt an organization should do instead. Both panellists questioned the value of most of the tools and methods that appsec relies on. Neither believed that static analysis tools scale, or that manual security code audits are worth doing. Viega also felt that "peer reviews for security are a waste of time". Arkin went on to say:

"I haven't seen a Web Application Firewall that's worth buying, and I've stopped looking."
"The best way to make somebody quit is to put them in a threat modelling exercise."
"You can never fuzz and fix every bug."

Arkin also argued against regulation, citing the failure of PCI to shore up security for the retail space, ignoring that the primary reason many companies even attempt to secure their software is that PCI requires them to take some responsible steps. But Arkin at least does believe that secure development training is important and that every developer should receive some security training. Viega disagreed, and felt that training only matters for the small number of developers who really care. This panel was like a Saturday Night Live skit that went off the rails. I couldn't tell when the panellists were being honest or when they were ironically playing for effect. This session lived up to its name, and really was a waste of time.

The Toys

This year's trade show was even bigger than last year, with overflow space across the hall. There were no race cars or sumo wrestlers at the booths this year, and fewer strippers (ahem, models) moonlighting (can you call it 'moonlighting' if you're doing it during the day?) as booth bunnies, although there was a guy dressed like Iron Man and way too many carnival games. This year's theme was something to do with Big Data in security, so there were lots of expensive analytics tools for sale.

For appsec, the most interesting thing that I saw was Cigital Secure Assist, a plug-in for different IDEs that provides fast feedback on security problems in code (Java, .NET or PHP) every time you open or close a file. The Cigital staff were careful not to call this a static analysis tool (they're not trying to compete with Fortify or Coverity or Klocwork), but what excited me was the quality of the feedback, the small client-side footprint, and that they intend to make it available for direct purchase over the web at a very reasonable price point, which means this could finally be a viable option for smaller development shops that want to take care of security issues in code. All in all, a good conference and a rare opportunity to meet so many smart people focused on IT security.
For pure appsec I still think OWASP's annual conference is better, but there's nothing quite like RSA.

Reference: Appsec at RSA 2013 from our JCG partner Jim Bird at the Building Real Software blog.

Finding Properties in JARs with Groovy

In previous blog posts I have looked at Searching JAR Files with Groovy to find entries (such as .class files) contained in the JAR, and Viewing a JAR's Manifest File with Groovy. In this post, I look at using Groovy to find a particular property in a properties file contained within a JAR. The script in this post searches the JARs in a provided directory and its subdirectories for properties files containing the specified property.

The following Groovy script leverages several advantages of Groovy to recursively search a specified directory and its subdirectories for JAR files containing properties files that contain the specified property. The script outputs the matching JARs and the properties file entries within them that contain the specified property. It also shows the value each property is set to in each matched JAR/properties file.

findPropertiesInJars.groovy

#!/usr/bin/env groovy
/**
 * findPropertiesInJars.groovy
 *
 * findPropertiesInJars.groovy -d <<root_directories>> -p <<properties_to_search_for>>
 *
 * Script that looks for provided properties (assumed to be in files with
 * .properties extension) in JAR files (assumed to have .jar extensions) in the
 * provided directory and all of its subdirectories.
 */

def cli = new CliBuilder(
   usage: 'findPropertiesInJars.groovy -d <root_directories> -p <property_names_to_search_for>',
   header: '\nAvailable options (use -h for help):\n',
   footer: '\nInformation provided via above options is used to generate printed string.\n')
import org.apache.commons.cli.Option
cli.with {
   h(longOpt: 'help', 'Help', args: 0, required: false)
   d(longOpt: 'directories', 'Directories to be searched', args: Option.UNLIMITED_VALUES, valueSeparator: ',', required: true)
   p(longOpt: 'properties', 'Property names to search for in JARs', args: Option.UNLIMITED_VALUES, valueSeparator: ',', required: true)
}
def opt = cli.parse(args)
if (!opt) return
if (opt.h) cli.usage()

def directories = opt.ds
def propertiesToSearchFor = opt.ps

import java.util.zip.ZipFile
import java.util.zip.ZipException

def matches = new TreeMap<String, Set<String>>()
directories.each { directory ->
   def dir = new File(directory)
   propertiesToSearchFor.each { propertyToFind ->
      dir.eachFileRecurse { file ->
         if (file.isFile() && file.name.endsWith('jar')) {
            try {
               def zip = new ZipFile(file)
               def entries = zip.entries()
               entries.each { entry ->
                  def entryName = entry.name
                  if (entryName.contains('.properties')) {
                     def fullEntryName = file.canonicalPath + '!/' + entryName
                     def properties = new Properties()
                     try {
                        def url = new URL('jar:file:' + File.separator + fullEntryName)
                        def jarConnection = (JarURLConnection) url.openConnection()
                        properties.load(jarConnection.inputStream)
                     }
                     catch (Exception exception) {
                        println "Unable to load properties from ${fullEntryName} - ${exception}"
                     }
                     if (properties.get(propertyToFind) != null) {
                        def pathPlusMatch = "${file.canonicalPath}\n\t\t${entryName}\n\t\t${propertyToFind}=${properties.get(propertyToFind)}"
                        if (matches.get(propertyToFind)) {
                           matches.get(propertyToFind).add(pathPlusMatch)
                        }
                        else {
                           def containingJars = new TreeSet<String>()
                           containingJars.add(pathPlusMatch)
                           matches.put(propertyToFind, containingJars)
                        }
                     }
                  }
               }
            }
            catch (ZipException zipEx) {
               println "Unable to open JAR file ${file.name}"
            }
         }
      }
   }
}

matches.each { propertyName, containingJarNames ->
   println "\nProperty '${propertyName}' Found:"
   containingJarNames.each { containingJarName ->
      println "\t${containingJarName}"
   }
}
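For example, the following invocation searches an Apache Camel distribution for the Maven 'artifactId' property (the directory path here is machine-specific, and multiple comma-separated values may be passed to either flag):

> groovy findPropertiesInJars.groovy -d /path/to/apache-camel/lib -p artifactId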
When run like this, the script lists each matching JAR together with the properties file entry containing the named property and its assigned value. Against the Apache Camel distribution on my machine, it finds the 'artifactId' (Maven) property in those numerous JAR files.

The script takes advantage of several Groovy features. For example, Groovy's ability to directly use Java APIs and libraries is evident throughout the script with the use of classes such as ZipFile (for accessing JAR contents), Properties (for accessing the contents of properties files), JarURLConnection (also for accessing properties files' content), TreeSet (for easy sorting), and Apache Commons CLI (built into Groovy for command-line support). Groovy's closures and concise syntax lead to greater fluency and readability as well.

This script catches exceptions even though Groovy does not require any exception (whether checked or runtime) to be caught. The reason is that an uncaught exception would terminate the script. By catching any exception encountered while opening a JAR file or loading from a properties file, a problem in one JAR or properties file only prevents that particular file from being processed, without stopping the others.

The script makes a couple of significant assumptions. The first is that the JAR files to be searched have a .jar extension and that the contained properties files have .properties extensions. The script also uses the built-in CLI support's nice feature of allowing a single command-line flag to accept multiple directories and/or multiple property names, with the values separated by commas.

There are times when I want to know where a particular property is specified within my application, and this script makes it easy to find where that property is specified in the JARs on the application's classpath.

Reference: Finding Properties in JARs with Groovy from our JCG partner Dustin Marx at the Inspired by Actual Events blog.

Organizing an Agile Program: Part 2, Networks for Managing Agile Programs

In Organizing an Agile Program: Part 1, Introduction, I discussed the difference between hierarchies and networks. I used Scrum of Scrums as an example; it could be any organizing hierarchy. Remember, I like Scrum as a way to organize a project team's work. I also like lean. I like XP. I like unbranded agile. I like anything that helps a team deliver features quickly and get feedback. I'm not religious about what any project team in the program uses.

What works for small programs is going to be different from what works for medium programs, which in turn is different from what works for large programs. Why? It's the scaling problem and the communication path problem. Larger programs are not linear scales of smaller programs. That's why I asked you how large your program was at the beginning.

Using a Network in a Medium Size Program

When you look at the small world network image here, you can see how the teams are connected. This might even be ideal for a five-team program. But what happens with a nine-team program? By my definition, that's still a medium-size program. And I claim that not all the teams have to be fully connected. In fact, I claim that they can't be. No one can develop and maintain that many "intimate" connections with other people at work.

Note: I realize Dunbar's Number is about 150, maybe more, maybe less. Dunbar's Number is the number of people with whom you can maintain social relationships. On a medium-size program, you have a shot at maintaining relationships with many of the people on the program. Maybe not all of them, but many of them. That helps you accomplish the work. If you have 6-person teams and you have 9 teams, that's only 54 people.

The teams don't all have to be connected, as in the small world network here. Some teams are more connected than others. Can you really track all the people on your program and know what the heck is going on with each of them? No, not when it comes to their code or tests or features, never mind when it comes to their human-ness. I'm suggesting you not even try. You track enough of what the other people do to be able to finish your work.

Note: I realize that's a strong claim, so consider this: this is why communities of practice help. (This is why lunch rooms help even more.) What you are able to do, however, is ask a question of your network and get the answer. Fast. You cooperate to produce the shippable product. You work with the people on your team and maintain relationships with a few other people. That's what small world networks do. That's why communities of practice work. That's why the rumor mill works so well in organizations. We have some well-connected people and a bunch of people who are not so well-connected. And that's okay.

Here's the key point: you don't have to go up and down a hierarchy to accomplish work. You either know who to ask to accomplish work, or they know who to ask. Nobody needs to ask permission. There is no chief. There is no master. There is no hierarchy. There is cooperation.

What are the Risks in Programs?

Let's back up a minute and ask what's important to programs. Why are there standups in a team? The standups in a team are about micro-commitments to each other. They are also a way to make the status visible. They help us see our risks. We can coordinate our features and see if we have too much work in progress. We ask about impediments. That's why the Scrum of Scrums got started. All excellent reasons.
If you think about what's risky in programs, every program starts with these areas of risk:

- Coordinating the features and the backlog among and between the teams.
- Nurturing the architecture, so it evolves at the "correct" pace: not so early that we have wasted time and energy on it, and not so late that we have technical architecture debt and frameworks that don't work for us.
- Describing the status in a way that says, "Here is where we are, and here is how we know when we will be done."
- Describing our interdependent risks. Risks can be independent of features.

Your program will have other risks that are singular to it. But the risks of managing features among teams, coordinating architecture, explaining status, and managing risk, those big four, are the big issues in program management. So, how do we manage those risks if I claim that Scrum of Scrums is a hierarchy and doesn't work so well? Let's start with how we manage the issue of features in programs.

Programs Need Roadmaps

Programs need roadmaps so that the teams can see where they are headed. I am not saying the teams should implement ahead of this iteration's backlog. If the teams have a roadmap, they can see where they are going with the features. This helps in multiple ways:

- They can see how this feature might interconnect with another team's feature.
- They can see how this feature might affect the architecture.
- They can create product backlog burnup charts based on feature accomplishment.

In programs, teams need to be able to look ahead. They don't need to implement ahead; that would be waste, and no, we don't want that. But looking ahead is quite useful. If the teams are able to look ahead, they can talk with their product owners and help the product owners see whether it makes sense to implement some features together or not, or whether it makes sense to change the order of some features. When I get to large programs, where several teams might work off the same backlog, I'll come back to this point. I realize that several teams working off the same backlog is not restricted to large programs, but I have a backlog for writing too, and I'm not addressing this yet.

A roadmap is a changing document. It is our best guess, based on the most recent demo. We expect it to change. We ask the program product owner to create and maintain the business value of the roadmap. We ask the product owner community to create user stories from the phrases and words on the roadmap. The teams can see which release the features might occur in, and they can see which features they're supposed to get done in this release and, most importantly now, across the program.

Some of the words are not anything like stories. Some might be close to stories. The items on the roadmap close to us in time might be closer to stories; I would expect the ones farther away to be a lot less close, and epic in size. It's the entire product owner community's job to continually evaluate those phrases and ask, "Do we want these? If so, we need to define what they mean and create stories that represent what they mean."

I don't care what approach the product owners use to create stories from the roadmap. But the roadmap is the 50,000-foot idea. Only the quarter that we are in has anything that might resemble stories. Oh, and that big black line? That's what the teams need to complete this quarter. Anything below that would be great. As the teams complete the stories, the product owner community reassesses the remaining stories on the roadmap. Yes, they do.
It's a ton of work. Once you have a roadmap, the product owners can create user stories that make sense for their teams. The program product owner works with the teams as needed. Since the teams are feature teams, not architecture-based teams, they can create product backlog burnup charts. Now you can tell your status by seeing where you are in the roadmap. Note that you do not need a Gantt chart. You have finished some number of features, ranked by business value, and you have some number of features remaining. You can finish the program at any time, because you are working by business value. Oh, and you don't need ROI. You never try to predict anything. You can't predict anything for a team, and you certainly can't predict anything for a program.

Programs Need Architecture Nurturing

I am a huge fan of evolving the architecture in any agile program. Why? Because for all but the smallest of projects, the architecture has always changed. Now, that does not mean we should not think about the architecture as we proceed. What I like is when the project teams implement several features before they settle on a specific framework. This works especially well in small and medium-size programs.

Just-in-time, evolving architecture is a tough thing. It's tough on the project teams. It's tough on the architects. It's so much easier to think about a framework (or frameworks) first, pick it, and attempt to bend the product to make it work. But that's how we get a pile of technical debt, especially in a complex product, which is where you need a program. So, as much as I would like to pick an architecture early and stick with it, even I force myself to postpone the architecture decisions as late as we can, and keep evolving the architecture as much as possible.

What is the Most Responsible Date for Architecture?

Now, sometimes "as late as we can" is the second or third iteration. But in a medium-size program, the most responsible date is often later than that. And sometimes the architects need to work in a community, wayfinding along with the feature teams. Did you see in the roadmap in Q1 where we needed a platform before we could do any "real" features? If you have a hardware product, sometimes you need to do that. You just do. But for SaaS, you almost never do.

This means I ask for architects to be embedded in the project teams. I also ask for architects to produce an updated picture of the architecture as an output of each iteration. Can you do this on your agile program? If you have tests to support your work, you can. Remember, agile is the most disciplined approach to product development. If you're hacking and calling it agile, well, you can call it anything you want, but it's not agile.

Explaining Status: The Standup, By Itself, is Not Sufficient

When you have a small program and you have Scrum of Scrums, the daily standup is supposed to manage all four of these issues: how the features work across the teams, how the architecture retains its integrity, what the status is, and what the risks are. In a medium program, that daily standup is supposed to do the same.

Here is my question: is the daily standup for your Scrum of Scrums working for you? Does it have everyone you need? If so, fine; you don't need me. But for those of you who are struggling with the hierarchy that a Scrum of Scrums brings, or if you think your program is proceeding too slowly, you have other options. Or, if you need to know when your program will be done, you need agile program management.
One of the problems with a medium program is that at some point, the number of people who participate in a Scrum of Scrums starts to overwhelm any one standup meeting. The issues you have cannot be resolved in a standup; the standup is not adequate for the problems you encounter. (Remember, what was Scrum designed for? 5-7 people. What happens when you have more than 7 teams? You start to outgrow Scrum. Can you make it work? Of course you can. You can make anything work. You are a smart person, working with smart people. You can. Should you? That's another question.)

Asking "what did you complete", or even the old "what did you do since our last standup", is the wrong question. The question is irrelevant. As your program grows, the data overwhelms the ability of the people to take the data in, especially if the data is not useful. Are you doing continuous integration in your program? If so, you don't need to ask that question at all. Once you get past five teams, what you did or what you completed is not the question. You need to know what the obstacles are. You need to know what the interdependencies are. You need to know if deliverables are going to be late. That's program management.

We'll carry on with program management in part 3. This is long enough.

Reference: Organizing an Agile Program: Part 2, Networks for Managing Agile Programs from our JCG partner Johanna Rothman at the Managing Product Development blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.