Health Checks, Run-time Asserts and Monkey Armies

After going live, we started building health checks into the system – run-time checks on operational dependencies and status to ensure that the system is set up and running correctly. Over time we have continued to add more run-time checks and tests as we have run into problems, to help make sure that these problems don’t happen again. This is more than pings and Nagios alerts. This is testing that we installed the right code and configuration across systems:

- Checking code build version numbers and database schema versions.
- Checking signatures and checksums on files.
- Checking that flags and switches that are supposed to be turned on or off are actually on or off.
- Checking in advance for expiry dates on licenses, keys and certs.
- Sending test messages through the system.
- Checking alert and notification services: making sure that they are running, that other services that are supposed to be running are running, and that services that aren’t supposed to be running aren’t running.
- Checking that ports that are supposed to be open are open and ports that are supposed to be closed are closed.
- Checking that files and directories that are supposed to be there are there, that files and directories that aren’t supposed to be there aren’t, and that tables that are supposed to be empty are empty.
- Checking that permissions are set correctly on control files and directories.
- Checking database status and configuration.
- Checking that production and test settings are production and test, not test and production.
- Checking that diagnostics and debugging code has been disabled.
- Checking starting and ending record counts and sequence numbers.
- Checking artefacts from “jobs” – result files, control records, log file entries – and ensuring that cleanup and setup tasks completed successfully.
- Checking run-time storage space.
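A few of the checks above can be sketched as a small script-style class. This is a minimal illustration, not code from the system described here; the class, method names and version strings are all made up for the example:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.file.Files;
import java.nio.file.Paths;

// Minimal health-check sketch: each check returns true/false so results stay unambiguous.
public class HealthCheck {

    // A directory that is supposed to be there is there.
    static boolean directoryExists(String path) {
        return Files.isDirectory(Paths.get(path));
    }

    // A port that is supposed to be open actually accepts connections.
    static boolean portOpen(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    // The deployed build version matches what we expect to have installed.
    static boolean versionMatches(String deployed, String expected) {
        return expected.equals(deployed);
    }

    public static void main(String[] args) {
        System.out.println("tmp dir exists: " + directoryExists(System.getProperty("java.io.tmpdir")));
        System.out.println("version ok: " + versionMatches("1.4.2", "1.4.2"));
    }
}
```

A real suite would aggregate many such checks and fail loudly (non-zero exit code, alert) if any returns false.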
We run these health checks at startup, or sometimes early before startup, after a release or upgrade, and after a failover – to catch mistakes, operational problems and environmental problems. These are tests that need to run quickly and return unambiguous results (things are OK or they’re not). They can be simple scripts that run in production or internal checks and diagnostics in the application code – although scripts are easier to adapt and extend. Some require hooks to be added to the application, like JMX.

Run-time Asserts

Other companies like Etsy do something similar with run-time asserts, using a unit test approach to check for conditions that must be in place for the system to work properly. These tests can (and should) be run on development and test systems too, to make sure that the run-time environments are correct. The idea is to get away from checks being done by hand: operational checklists, calendar reminders and manual tests. Anything that has a dependency, anything that needs a manual check or test, anything in an operational checklist should have an automated run-time check instead.

Monkey Armies

The same ideas are behind Netflix’s over-hyped (though not always by Netflix) Simian Army, a set of robots that not only check for run-time conditions, but that also sometimes take automatic action when run-time conditions are violated – or even violate run-time conditions to test that the system will still run correctly. The army includes Security Monkey, which checks for improperly configured security groups, firewall rules, expiring certs and so on, and Exploit Monkey, which automatically scans new instances for vulnerabilities when they are brought up. Run-time checking is taken to an extreme in Conformity Monkey, which shuts down services that don’t adhere to established policies, and the famous Chaos Monkey, which automatically forces random failures on systems, in test and in production.
It’s surprising how much attention Chaos Monkey gets – maybe it’s the cool name, or because Netflix has open-sourced it along with some of their other monkeys. Sure, it’s ballsy to test failover in production by actually killing off systems during the day, even if they are stateless VM instances which by design should fail over without problems (although this is the point: to make sure that they really do fail over without problems, like they are supposed to). There’s more to Netflix’s success than run-time fault injection and the other monkeys. Still, automatically double-checking as much as you can at run-time is especially important in an engineering-driven, rapidly-changing DevOps or NoOps environment where developers are pushing code into production too fast to properly understand and verify it in advance. But whether you are continuously deploying changes to production (like Etsy and Netflix) or not, getting developers, ops and infosec together to write automated health checks and run-time tests is an important part of getting control over what’s actually happening in the system and keeping it running reliably.

Reference: Health Checks, Run-time Asserts and Monkey Armies from our JCG partner Jim Bird at the Building Real Software blog.

become/unbecome – discovering Akka

Sometimes our actor needs to react differently based on its internal state. Typically, receiving some specific message causes a state transition which, in turn, changes the way subsequent messages should be handled. Another message restores the original state and thus the way messages were handled before. In the previous article we implemented the RandomOrgBuffer actor based on a waitingForResponse flag. It unnecessarily complicated the already complex message handling logic:

var waitingForResponse = false

def receive = {
  case RandomRequest =>
    preFetchIfAlmostEmpty()
    if(buffer.isEmpty) {
      backlog += sender
    } else {
      sender ! buffer.dequeue()
    }
  case RandomOrgServerResponse(randomNumbers) =>
    buffer ++= randomNumbers
    waitingForResponse = false
    while(!backlog.isEmpty && !buffer.isEmpty) {
      backlog.dequeue() ! buffer.dequeue()
    }
    preFetchIfAlmostEmpty()
}

private def preFetchIfAlmostEmpty() {
  if(buffer.size <= BatchSize / 4 && !waitingForResponse) {
    randomOrgClient ! FetchFromRandomOrg(BatchSize)
    waitingForResponse = true
  }
}

Wouldn’t it be simpler to have two distinct receive methods – one used when we are awaiting an external server response (waitingForResponse == true) and the other when the buffer is filled sufficiently and no request to random.org has been issued yet? In such circumstances the become() and unbecome() methods come in very handy. By default the receive method is used to handle all incoming messages. However, at any time we can call become(), which accepts any method compliant with the receive signature as an argument. Every subsequent message will be handled by this new method. Calling unbecome() restores the original receive method.
Knowing this technique, we can refactor our solution above to the following:

def receive = {
  case RandomRequest =>
    preFetchIfAlmostEmpty()
    handleOrQueueInBacklog()
}

def receiveWhenWaiting = {
  case RandomRequest =>
    handleOrQueueInBacklog()
  case RandomOrgServerResponse(randomNumbers) =>
    buffer ++= randomNumbers
    context.unbecome()
    while(!backlog.isEmpty && !buffer.isEmpty) {
      backlog.dequeue() ! buffer.dequeue()
    }
    preFetchIfAlmostEmpty()
}

private def handleOrQueueInBacklog() {
  if (buffer.isEmpty) {
    backlog += sender
  } else {
    sender ! buffer.dequeue()
  }
}

private def preFetchIfAlmostEmpty() {
  if(buffer.size <= BatchSize / 4) {
    randomOrgClient ! FetchFromRandomOrg(BatchSize)
    context become receiveWhenWaiting
  }
}

We extracted the code responsible for handling messages while we wait for the random.org response into a separate receiveWhenWaiting method. Notice the become() and unbecome() calls – they replaced the no longer needed waitingForResponse flag. Instead we simply say: starting from the next message, please use this other method to handle messages (become a slightly different actor). Later we say: OK, let’s go back to the original state and receive messages as you used to (unbecome). But the most important change is the transition from one big method into two much smaller, better-named ones. The become() and unbecome() methods are actually much more powerful, since they internally maintain a stack of receiving methods. Every call to become() (with discardOld = false as a second parameter) pushes the current receiving method onto a stack, while unbecome() pops it and restores the previous one. Thus we can use become() to switch through several receiving methods and then gradually go back through all the changes. Moreover, Akka also supports the finite state machine pattern, but more on that maybe in the future. Source code for this article is available on GitHub in the become-unbecome tag. This was a translation of my article “Poznajemy Akka: become/unbecome” originally published on scala.net.pl.
Reference: become/unbecome – discovering Akka from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog.

JUnit4 Parameterized and Theories Examples

I always relied on TestNG to pass parameters to test methods in order to give a bit of flexibility to my tests or suites. However, the same flexibility can be achieved using JUnit4. Using it is simple:

package com.marco.test;

import java.util.Arrays;
import java.util.Collection;

import junit.framework.Assert;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class ParameterizedTest {

    @Parameters
    public static Collection data() {
        return Arrays.asList(new Object[][] {
            /* Sport           Nation       Year  totWinners */
            { "basket",        "usa",       2002,  5 },
            { "soccer",        "argentina", 2003,  2 },
            { "tennis",        "spain",     2004, 10 },
            { "chess",         "ireland",   2005,  0 },
            { "eatingbananas", "italy",     2006, 20 }
        });
    }

    private final String sport;
    private final String nation;
    private final int year;
    private final int totWinners;

    public ParameterizedTest(String sport, String nation, int year, int totWinners) {
        this.sport = sport;
        this.nation = nation;
        this.year = year;
        this.totWinners = totWinners;
    }

    @Test
    public void test() {
        Assert.assertTrue(isDataCorrect(sport, nation, year, totWinners));
    }

    private boolean isDataCorrect(String sport2, String nation2, int year2, int totWinners2) {
        return true;
    }
}

JUnit will create an instance of the ParameterizedTest class and run the test() method (or any method marked with @Test) for each row defined in the static collection.

Theories

This is another interesting JUnit4 feature that I like.
You use Theories in JUnit 4 to test combinations of inputs using the same test method:

package com.marco.test;

import static org.hamcrest.CoreMatchers.is;

import java.math.BigDecimal;

import org.junit.Assert;
import org.junit.Assume;
import org.junit.experimental.theories.DataPoint;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

@RunWith(Theories.class)
public class TheoryTest {

    @DataPoint
    public static int MARKET_FIRST_GOALSCORERE_ID = 2007;

    @DataPoint
    public static int MARKET_WDW_ID = 2008;

    @DataPoint
    public static BigDecimal PRICE_BD = new BigDecimal(6664.0);

    @DataPoint
    public static double PRICE_1 = 0.01;

    @DataPoint
    public static double PRICE_2 = 100.0;

    @DataPoint
    public static double PRICE_3 = 13999.99;

    @Theory
    public void lowTaxRateIsNineteenPercent(int market_id, double price) {
        Assume.assumeThat(market_id, is(2008));
        Assume.assumeThat(price, is(100.0));
        // run your test
        Assert.assertThat(price, is(100.0));
    }

    @Theory
    public void highTaxRateIsNineteenPercent(int market_id, double price) {
        Assume.assumeThat(market_id, is(2007));
        Assume.assumeThat(price, is(13999.99));
        Assert.assertThat(price, is(13999.99));
    }

    @Theory
    public void highTaxRateIsNineteenPercent(int market_id, BigDecimal price) {
        Assume.assumeThat(market_id, is(2007));
        Assert.assertThat(price, is(BigDecimal.valueOf(6664)));
    }
}

This time you need to mark the test class with @RunWith(Theories.class) and use @DataPoint to define the fields that you want to test. JUnit will call the methods marked with @Theory using all the possible combinations, based on the data points provided and the type of each method parameter. The PRICE_BD data point will be used only in the last method, the only one accepting a BigDecimal parameter. Only parameter combinations that satisfy the Assume.assumeThat() condition make it through to the assert; the combinations that don’t satisfy it are ignored silently.
Reference: JUnit4 Parameterized and Theories from our JCG partner Marco Castigliego at the Remove duplication and fix bad names blog.

Gang of Four – Proxy Design Pattern

Proxy is another structural design pattern, which works ‘on behalf of’ or ‘in place of’ another object in order to access the latter.

When to use this pattern?

The Proxy pattern is used when we need to create a wrapper to cover the main object’s complexity from the client.

What are the usage scenarios?

- Virtual Proxy – Imagine a situation where multiple database calls are needed to extract a huge image. Since this is an expensive operation, we can use the proxy pattern, creating multiple proxies that point to the huge, memory-consuming object for further processing. The real object gets created only when a client first requests/accesses the object, and after that we can just refer to the proxy to reuse the object. This avoids duplication of the object and hence saves memory.
- Remote Proxy – A remote proxy can be thought of as the stub in an RPC call. The remote proxy provides a local representation of an object which is present at a different address. Another example is providing an interface for remote resources such as web service or REST resources.
- Protective Proxy – The protective proxy acts as an authorisation layer to verify whether the actual user has access to appropriate content. An example is a proxy server that provides restricted internet access in an office: only valid websites and contents are allowed, and the remaining ones are blocked.
- Smart Proxy – A smart proxy provides an additional layer of security by interposing specific actions when the object is accessed. An example is checking whether the real object is locked before it is accessed, to ensure that no other object can change it.

Structure:

Participants:

- Subject – Defines the common interface for RealSubject and Proxy, so that a Proxy can be used anywhere a RealSubject is expected.
- Proxy – Maintains a reference to the RealSubject so that the Proxy can access it. It also implements the same interface as the RealSubject, so that the Proxy can be used in place of the RealSubject. The Proxy also controls access to the RealSubject and can create or delete this object.
- RealSubject – The main object which the proxy represents.

Example:

We will discuss two examples in this article: the first is the virtual proxy pattern, the other the protection proxy pattern.

Virtual Proxy Example:

As mentioned earlier, a virtual proxy is useful for saving expensive memory resources. Let’s take a scenario where the real image contains huge data which clients need to access. To save resources and memory, the implementation is as follows:

- Create an interface which will be accessed by the client. All its methods will be implemented by the ProxyImage class and the RealImage class.
- RealImage runs on a different system and contains the image information that is accessed from the database.
- The ProxyImage, which is running on a different system, can represent the RealImage in the new system.
Using the proxy we can avoid multiple loadings of the image.

Class Diagram:

Code Example:

Image.java

public interface Image {
    public void showImage();
}

RealImage.java

public class RealImage implements Image {

    private String fileName = null;

    public RealImage(String strFileName) {
        this.fileName = strFileName;
    }

    @Override
    public void showImage() {
        System.out.println("Show Image: " + fileName);
    }
}

ProxyImage.java

public class ProxyImage implements Image {

    private RealImage img = null;
    private String fileName = null;

    public ProxyImage(String strFileName) {
        this.fileName = strFileName;
    }

    /*
     * (non-Javadoc)
     * @see com.proxy.virtualproxy.Image#showImage()
     */
    @Override
    public void showImage() {
        if (img == null) {
            img = new RealImage(fileName);
        }
        img.showImage();
    }
}

Client.java

public class Client {
    public static void main(String[] args) {
        final Image img1 = new ProxyImage("Image***1");
        final Image img2 = new ProxyImage("Image***2");
        img1.showImage();
        img2.showImage();
    }
}

Protection Proxy Example:

Let’s assume that company ABC starts a new policy: employees will now be prohibited from internet access based on their roles, and all external email websites will be blocked. In this situation we create an InternetAccess interface which consists of the operation grantInternetAccess(). The RealInternetAccess class allows internet access for all.
However, to restrict this access we will use a ProxyInternetAccess class, which will check the user’s role and grant access based on that role.

Class Diagram:

Code Example:

InternetAccess.java

public interface InternetAccess {
    public void grantInternetAccess();
}

RealInternetAccess.java

public class RealInternetAccess implements InternetAccess {

    private String employeeName = null;

    public RealInternetAccess(String empName) {
        this.employeeName = empName;
    }

    @Override
    public void grantInternetAccess() {
        System.out.println("Internet Access granted for employee: " + employeeName);
    }
}

ProxyInternetAccess.java

public class ProxyInternetAccess implements InternetAccess {

    private String employeeName = null;
    private RealInternetAccess realAccess = null;

    public ProxyInternetAccess(String empName) {
        this.employeeName = empName;
    }

    @Override
    public void grantInternetAccess() {
        if (getEmployeeRoleLevel(employeeName) > 4) {
            realAccess = new RealInternetAccess(employeeName);
            realAccess.grantInternetAccess();
        } else {
            System.out.println("No internet access granted. Your role level is too low.");
        }
    }

    // Role lookup stubbed for the example; a real implementation would
    // query an employee directory or an authorisation service.
    private int getEmployeeRoleLevel(String empName) {
        return 9;
    }
}

Client.java

public static void main(String[] args) {
    InternetAccess ia = new ProxyInternetAccess("Idiotechie");
    ia.grantInternetAccess();
}

Benefits:

- One of the advantages of the Proxy pattern, as you have seen in the above example, is security.
- This pattern avoids duplication of objects which might be huge and memory intensive. This in turn increases the performance of the application.
- The remote proxy also helps security by installing the local code proxy (stub) on the client machine and then accessing the server with the help of the remote code.

Drawbacks/Consequences:

This pattern introduces another layer of abstraction, which may sometimes be an issue if some clients access the RealSubject code directly while others access the Proxy classes. This might cause disparate behaviour.

Interesting points:

There are a few differences between the related patterns. The Adapter pattern gives a different interface to its subject, while the Proxy pattern provides the same interface as the original object, and the Decorator provides an enhanced interface.
The Decorator pattern adds additional behaviour at runtime.

Proxy used in the Java API: java.rmi.*

Reference: Gang of Four – Proxy Design Pattern from our JCG partner Mainak Goswami at the Idiotechie blog.

Java Regular Expression Tutorial with Examples

When I started my career in Java, regular expressions were a nightmare for me. This tutorial is aimed at helping you master Java regular expressions, and at giving me something to come back to at regular intervals to refresh my regular expression learning.

What Are Regular Expressions?

A regular expression defines a pattern for a String. Regular expressions can be used to search, edit or manipulate text. Regular expressions are not language specific, but they differ slightly for each language. Java regular expressions are most similar to Perl’s. The Java regular expression classes are present in the java.util.regex package, which contains three classes: Pattern, Matcher and PatternSyntaxException.

1. Pattern is the compiled version of the regular expression. It doesn’t have any public constructor; we use its public static method compile to create the Pattern object, passing the regular expression as an argument.
2. Matcher is the regex engine object that matches the input String against the compiled Pattern. This class doesn’t have any public constructor either; we get a Matcher object using the Pattern object’s matcher method, which takes the input String as an argument. We then use the matches method, which returns a boolean result indicating whether the input String matches the regex pattern or not.
3. PatternSyntaxException is thrown if the regular expression syntax is not correct.
Let’s see all these classes in action with a simple example:

package com.journaldev.util;

import java.util.regex.*;

public class PatternExample {

    public static void main(String[] args) {
        Pattern pattern = Pattern.compile(".xx.");
        Matcher matcher = pattern.matcher("MxxY");
        System.out.println("Input String matches regex - " + matcher.matches());
        // bad regular expression
        pattern = Pattern.compile("*xx*");
    }
}

Output of the above program is:

Input String matches regex - true
Exception in thread "main" java.util.regex.PatternSyntaxException: Dangling meta character '*' near index 0
*xx*
^
    at java.util.regex.Pattern.error(Pattern.java:1924)
    at java.util.regex.Pattern.sequence(Pattern.java:2090)
    at java.util.regex.Pattern.expr(Pattern.java:1964)
    at java.util.regex.Pattern.compile(Pattern.java:1665)
    at java.util.regex.Pattern.<init>(Pattern.java:1337)
    at java.util.regex.Pattern.compile(Pattern.java:1022)
    at com.journaldev.util.PatternExample.main(PatternExample.java:13)

Since regular expressions revolve around String, the String class was extended in Java 1.4 to provide a matches method that does pattern matching. Internally it uses the Pattern and Matcher classes to do the processing, but obviously it reduces the code lines. The Pattern class also contains a matches method that takes a regex and an input String as arguments and returns a boolean result after matching them. So the code below works fine for matching an input String with a regular expression:

String str = "bbb";
System.out.println("Using String matches method: " + str.matches(".bb"));
System.out.println("Using Pattern matches method: " + Pattern.matches(".bb", str));

So if your requirement is just to check whether the input String matches the pattern, you should save time by using the simple String matches method. Use the Pattern and Matcher classes only when you need to manipulate the input String or you need to reuse the pattern.
Note that the pattern defined by the regex is applied on the String from left to right, and once a source character is used in a match, it can’t be reused. For example, the regex “121” will match “31212142121” only twice, as “_121____121”.

Regular Expressions common matching symbols:

- . – Matches any single character. Examples: (“..”, “a%”) – true; (“..”, “.a”) – true; (“..”, “a”) – false
- ^xxx – Matches regex xxx at the beginning of the line. Examples: (“^a.c.”, “abcd”) – true; (“^a”, “a”) – true; (“^a”, “ac”) – false
- xxx$ – Matches regex xxx at the end of the line. Examples: (“..cd$”, “abcd”) – true; (“a$”, “a”) – true; (“a$”, “aca”) – false
- [abc] – Matches any of the letters a, b or c; [] is known as a character class. Examples: (“^[abc]d.”, “ad9”) – true; (“[ab].d$”, “bad”) – true; (“[ab]x”, “cx”) – false
- [abc][12] – Matches a, b or c, followed by 1 or 2. Examples: (“[ab][12].”, “a2#”) – true; (“[ab]..[12]”, “acd2”) – true; (“[ab][12]”, “c2”) – false
- [^abc] – When ^ is the first character in [], it negates the pattern and matches anything except a, b or c. Examples: (“[^ab][^12].”, “c3#”) – true; (“[^ab]..[^12]”, “xcd3”) – true; (“[^ab][^12]”, “c2”) – false
- [a-e1-8] – Matches the ranges a to e or 1 to 8. Examples: (“[a-e1-3].”, “d#”) – true; (“[a-e1-3]”, “2”) – true; (“[a-e1-3]”, “f2”) – false
- xx|yy – Matches regex xx or yy. Examples: (“x.|y”, “xa”) – true; (“x.|y”, “y”) – true; (“x.|y”, “yz”) – false

Java Regular Expressions Metacharacters:

- \d – Any digit, short for [0-9]
- \D – Any non-digit, short for [^0-9]
- \s – Any whitespace character, short for [ \t\n\x0B\f\r]
- \S – Any non-whitespace character, short for [^\s]
- \w – Any word character, short for [a-zA-Z_0-9]
- \W – Any non-word character, short for [^\w]
- \b – A word boundary
- \B – A non-word boundary

There are two ways to use metacharacters as ordinary characters in regular expressions:

- Precede the metacharacter with a backslash (\).
- Keep the metacharacter within \Q (which starts the quote) and \E (which ends it).
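As a quick sanity check of the metacharacters and escaping rules above, these one-liners can be run as-is:

```java
public class MetacharDemo {
    public static void main(String[] args) {
        System.out.println("abc_123".matches("\\w+"));       // true: letters, digits and _ are word characters
        System.out.println("42".matches("\\d\\d"));          // true: two digits
        System.out.println("a b".matches("\\w\\s\\w"));      // true: word char, whitespace, word char
        System.out.println("1+1=2".matches("\\Q1+1\\E=2"));  // true: \Q..\E treats the + literally
        System.out.println("1+1=2".matches("1\\+1=2"));      // true: escaping + with a backslash also works
    }
}
```

Note the doubled backslashes: in Java source, the regex \d has to be written as the String literal "\\d".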
Regular Expression Quantifiers

Quantifiers specify the number of occurrences of a character to match against:

- X? – X occurs once or not at all
- X* – X occurs zero or more times
- X+ – X occurs one or more times
- X{n} – X occurs exactly n times
- X{n,} – X occurs n or more times
- X{n,m} – X occurs at least n times but not more than m times

Quantifiers can be used with character classes and capturing groups too. For example, [abc]+ means a, b or c, one or more times; (abc)+ means the group “abc”, one or more times. We will discuss capturing groups now.

Regular Expression Capturing Groups

Capturing groups are used to treat multiple characters as a single unit. You can create a group using (). The portion of the input String that matches the capturing group is saved into memory and can be recalled using a backreference. You can use the matcher.groupCount method to find out the number of capturing groups in a regex pattern. For example, ((a)(bc)) contains 3 capturing groups: ((a)(bc)), (a) and (bc). You use a backreference in a regular expression with a backslash (\) followed by the number of the group to be recalled. Capturing groups and backreferences can be confusing, so let’s understand this with an example:

System.out.println(Pattern.matches("(\\w\\d)\\1", "a2a2"));          // true
System.out.println(Pattern.matches("(\\w\\d)\\1", "a2b2"));          // false
System.out.println(Pattern.matches("(AB)(B\\d)\\2\\1", "ABB2B2AB")); // true
System.out.println(Pattern.matches("(AB)(B\\d)\\2\\1", "ABB2B3AB")); // false

In the first example, at runtime the first capturing group (\w\d) evaluates to “a2” when matched against the input String “a2a2” and is saved in memory. So \1 refers to “a2”, and hence it returns true. For the same reason, the second statement prints false. Try to work through statements 3 and 4 yourself. Now we will look at some important methods of the Pattern and Matcher classes. We can create a Pattern object with flags.
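To make group numbering concrete, here is a small sketch that extracts the parts of a matched string; the email-like pattern is just an illustration, not from the original article:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GroupDemo {
    public static void main(String[] args) {
        // Two capturing groups: (\w+) before the @ and (\w+) after it.
        Matcher m = Pattern.compile("(\\w+)@(\\w+)").matcher("user@host");
        if (m.matches()) {
            System.out.println(m.groupCount()); // 2: number of capturing groups in the pattern
            System.out.println(m.group(0));     // group 0 is always the whole match: user@host
            System.out.println(m.group(1));     // user
            System.out.println(m.group(2));     // host
        }
    }
}
```

Group 0 is implicit and is not counted by groupCount(); explicit groups are numbered left to right by their opening parenthesis.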
For example, Pattern.CASE_INSENSITIVE enables case-insensitive matching. The Pattern class also provides a split(String) method that is similar to the String class split() method, and a toString() method that returns the regular expression String from which the pattern was compiled. The Matcher class has start() and end() index methods that show precisely where the match was found in the input string, and it also provides the String manipulation methods replaceAll(String replacement) and replaceFirst(String replacement). Now we will see these common functions in action through a simple Java class:

package com.journaldev.util;

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexExamples {

    public static void main(String[] args) {
        // using pattern with flags
        Pattern pattern = Pattern.compile("ab", Pattern.CASE_INSENSITIVE);
        Matcher matcher = pattern.matcher("ABcabdAb");

        // using Matcher find(), group(), start() and end() methods
        while (matcher.find()) {
            System.out.println("Found the text '" + matcher.group()
                    + "' starting at " + matcher.start()
                    + " index and ending at index " + matcher.end());
        }

        // using Pattern split() method
        pattern = Pattern.compile("\\W");
        String[] words = pattern.split("one@two#three:four$five");
        for (String s : words) {
            System.out.println("Split using Pattern.split(): " + s);
        }

        // using Matcher.replaceFirst() and replaceAll() methods
        pattern = Pattern.compile("1*2");
        matcher = pattern.matcher("11234512678");
        System.out.println("Using replaceAll: " + matcher.replaceAll("_"));
        System.out.println("Using replaceFirst: " + matcher.replaceFirst("_"));
    }
}

Output of the above program is:

Found the text 'AB' starting at 0 index and ending at index 2
Found the text 'ab' starting at 3 index and ending at index 5
Found the text 'Ab' starting at 6 index and ending at index 8
Split using Pattern.split(): one
Split using Pattern.split(): two
Split using Pattern.split(): three
Split using Pattern.split(): four
Split using Pattern.split(): five
Using replaceAll: _345_678
Using replaceFirst: _34512678

Regular expressions are one of the common areas of Java interview questions, and in the next few posts I will provide some real-life examples.

Reference: Java Regular Expression Tutorial with Examples from our JCG partner Pankaj Kumar at the Developer Recipes blog.

Spring MVC REST Calls With Ajax

This post provides a simple example of REST calls to a Spring MVC web application. It is based on the Serving Static Resources With Spring MVC and Fetching JSON With Ajax In Spring MVC Context examples. The code is available on GitHub in the Spring-REST-With-Ajax directory.

Main Page

Our main page contains four buttons linked to Javascript functions performing Ajax calls:

...
<body>
<h1>Welcome To REST With Ajax !!!</h1>
<button type='button' onclick='RestGet()'>GET</button>
<button type='button' onclick='RestPut()'>PUT</button>
<button type='button' onclick='RestPost()'>POST</button>
<button type='button' onclick='RestDelete()'>DELETE</button>
</body>
...

Javascript

Our Javascript file contains the four functions:

var prefix = '/spring-rest-with-ajax';

var RestGet = function() {
    $.ajax({
        type: 'GET',
        url: prefix + '/MyData/' + Date.now(),
        dataType: 'json',
        async: true,
        success: function(result) {
            alert('At ' + result.time + ': ' + result.message);
        },
        error: function(jqXHR, textStatus, errorThrown) {
            alert(jqXHR.status + ' ' + jqXHR.responseText);
        }
    });
}

var RestPut = function() {
    var JSONObject = {
        'time': Date.now(),
        'message': 'User PUT call !!!'
    };

    $.ajax({
        type: 'PUT',
        url: prefix + '/MyData',
        contentType: 'application/json; charset=utf-8',
        data: JSON.stringify(JSONObject),
        dataType: 'json',
        async: true,
        success: function(result) {
            alert('At ' + result.time + ': ' + result.message);
        },
        error: function(jqXHR, textStatus, errorThrown) {
            alert(jqXHR.status + ' ' + jqXHR.responseText);
        }
    });
}

var RestPost = function() {
    $.ajax({
        type: 'POST',
        url: prefix + '/MyData',
        dataType: 'json',
        async: true,
        success: function(result) {
            alert('At ' + result.time + ': ' + result.message);
        },
        error: function(jqXHR, textStatus, errorThrown) {
            alert(jqXHR.status + ' ' + jqXHR.responseText);
        }
    });
}

var RestDelete = function() {
    $.ajax({
        type: 'DELETE',
        url: prefix + '/MyData/' + Date.now(),
        dataType: 'json',
        async: true,
        success: function(result) {
            alert('At ' + result.time + ': ' + result.message);
        },
        error: function(jqXHR, textStatus, errorThrown) {
            alert(jqXHR.status + ' ' + jqXHR.responseText);
        }
    });
}

Controller

Our controller captures the REST calls and returns a JSON. In a real application, one would perform CRUD operations rather than just returning JSONs:

@Controller
@RequestMapping(value = "/MyData")
public class MyRESTController {

    @RequestMapping(value = "/{time}", method = RequestMethod.GET)
    public @ResponseBody MyData getMyData(@PathVariable long time) {
        return new MyData(time, "REST GET Call !!!");
    }

    @RequestMapping(method = RequestMethod.PUT)
    public @ResponseBody MyData putMyData(@RequestBody MyData md) {
        return md;
    }

    @RequestMapping(method = RequestMethod.POST)
    public @ResponseBody MyData postMyData() {
        return new MyData(System.currentTimeMillis(), "REST POST Call !!!");
    }

    @RequestMapping(value = "/{time}", method = RequestMethod.DELETE)
    public @ResponseBody MyData deleteMyData(@PathVariable long time) {
        return new MyData(time, "REST DELETE Call !!!");
    }
}

Running The Example

Once compiled, the example can be run with mvn tomcat:run.
Then browse http://localhost:8585/spring-rest-with-ajax/ and the main page will be displayed. If you click on any button, a pop-up will be displayed.

Reference: Spring MVC REST Calls With Ajax from our JCG partner Jerome Versrynge at the Technical Notes blog.

JBoss HornetQ for Kids, Parents and Grandparents – Chapter 1

It’s now almost 4 years that I’ve been working with HornetQ and I think it’s time to share part of what I’ve learnt so far. The main purpose of this post is not to rewrite the official documentation, but to clarify, in simple terms, the concepts we use most here at PaddyPower.

What is HornetQ

HornetQ is a JMS implementation. JMS is a message-oriented middleware API for exchanging information between producers and consumers in an asynchronous way. HornetQ is one of the numerous frameworks out there that implement the JMS API.

Configuration

All the HornetQ configuration we care about is in one folder. How beautiful is that?! The folder is hornetq (or hornetq.sar, depending on the JBoss version you are using) and you can find it in your JBoss profile’s deploy folder. In this folder there are up to 7 XML configuration files. We really only care about two: hornetq-jms.xml and hornetq-configuration.xml.

hornetq-jms.xml

This is where you define the JNDI names for queues, topics and connection factories. By default all the connection factories, the dead-letter queue and the expiry queue are already configured. All you need to add are the queues or topics that your application uses. For example:

<queue name='phaseQueueFromEngine'>
  <entry name='/queue/phaseQueueFromEngine'/>
</queue>

The entry name is the JNDI name used by your producer and consumer to discover the queue.

hornetq-configuration.xml

This is where you define acceptors, connectors, bridges and other cool stuff.

Understanding Connectors & Acceptors

OK, this can be tricky, so I’ll try to be simple and essential. HornetQ runs inside a server (JBoss, for example) or as a standalone application. In either case, HornetQ works by communicating with its own server, the HornetQ server. In order to communicate with it, we have to tell it how we connect and what it accepts as connections. Acceptors define which types of connections are accepted by the HornetQ server.
Connectors define how to connect to the HornetQ server. Luckily, only two kinds of connectors and acceptors are possible: in-vm and netty.

in-vm is used when the producer and the consumer live in the same virtual machine. Example:

<acceptor name='in-vm'>
  <factory-class>org.hornetq.core.remoting.impl.invm.InVMAcceptorFactory</factory-class>
</acceptor>

<connector name='in-vm'>
  <factory-class>org.hornetq.core.remoting.impl.invm.InVMConnectorFactory</factory-class>
</connector>

netty is used when the producer and the consumer live in different virtual machines. Example:

Producer/consumer on the same machine:

<acceptor name='netty'>
  <factory-class>org.hornetq.integration.transports.netty.NettyAcceptorFactory</factory-class>
  <param key='host' value='${host:localhost}'/>
  <param key='port' value='${port:5445}'/>
</acceptor>

<connector name='netty'>
  <factory-class>org.hornetq.integration.transports.netty.NettyConnectorFactory</factory-class>
  <param key='host' value='${host:localhost}'/>
  <param key='port' value='${port:5445}'/>
</connector>

Producer/consumer on different machines:

Consumer box:

<acceptor name='netty-external-acceptor'>
  <factory-class>org.hornetq.integration.transports.netty.NettyAcceptorFactory</factory-class>
  <param key='host' value='172.x.x.62'/>
  <param key='port' value='5445'/>
</acceptor>

Producer box:

<connector name='remote-engine-connector'>
  <factory-class>org.hornetq.integration.transports.netty.NettyConnectorFactory</factory-class>
  <param key='host' value='172.x.x.62'/>
  <param key='port' value='5445'/>
</connector>

So far so good. Pay attention when you configure acceptors and connectors: in order to communicate properly they have to be of the same kind, with the same host and port.
netty acceptor with netty connector (same host and port): GOOD
in-vm acceptor with in-vm connector: GOOD
in-vm acceptor with netty connector: BAD
netty acceptor on port 5445 with netty connector on port 5446: BAD
netty acceptor on host 172.x.x.60 with netty connector on 172.x.x.62: BAD

Understanding Bridges

Another feature I have used widely is the bridge. If you have a producer living on box 172.x.x.60 and the consumer sitting on box 172.x.x.62, you need to connect them, and you do this by configuring a bridge in our beloved configuration file hornetq-configuration.xml. Example:

<bridge name='from60to62Bridge'>
  <queue-name>jms.queue.phaseQueueToEngine</queue-name>
  <forwarding-address>jms.queue.phaseQueueFromInput</forwarding-address>
  <reconnect-attempts>-1</reconnect-attempts>
  <connector-ref connector-name='remote-engine-connector'/>
</bridge>

Yes, you use the connector to specify how to connect to the other HornetQ server. Easy! I hope this clarifies a couple of aspects and helps you understand the sometimes scary HornetQ configuration a little better. Coming soon… HornetQ for Kids, Parents and Grandparents – Chapter 2: the magic of address-settings   Reference: JBoss HornetQ for Kids, Parents and Grandparents – Chapter 1 from our JCG partner Marco Castigliego at the Remove duplication and fix bad names blog. ...
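The GOOD/BAD table above boils down to one small rule, which can be written out as a predicate. This is only an illustration of the chapter’s rule of thumb, not HornetQ API:

```java
public class AcceptorConnectorMatch {

    // An acceptor/connector pair can communicate only if they are of the same
    // kind and, for netty, also agree on host and port (in-vm has no host/port).
    static boolean pairWorks(String acceptorKind, String acceptorHost, int acceptorPort,
                             String connectorKind, String connectorHost, int connectorPort) {
        if (!acceptorKind.equals(connectorKind)) {
            return false;                       // e.g. in-vm acceptor with netty connector: BAD
        }
        if (acceptorKind.equals("in-vm")) {
            return true;                        // in-vm with in-vm: GOOD
        }
        // netty with netty: host and port must both match
        return acceptorHost.equals(connectorHost) && acceptorPort == connectorPort;
    }

    public static void main(String[] args) {
        // The table above, re-checked:
        System.out.println(pairWorks("netty", "172.x.x.62", 5445, "netty", "172.x.x.62", 5445)); // true  (GOOD)
        System.out.println(pairWorks("in-vm", "", 0, "in-vm", "", 0));                           // true  (GOOD)
        System.out.println(pairWorks("in-vm", "", 0, "netty", "localhost", 5445));               // false (BAD)
        System.out.println(pairWorks("netty", "localhost", 5445, "netty", "localhost", 5446));   // false (BAD)
        System.out.println(pairWorks("netty", "172.x.x.60", 5445, "netty", "172.x.x.62", 5445)); // false (BAD)
    }
}
```

When a bridge silently moves no messages, checking the pair against this rule is usually the first thing to do.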

Spring 3.1 – Loading Properties For XML Configuration From Database

Spring makes it easy to inject values obtained from properties files via its PropertyPlaceholderConfigurer (pre-Spring 3.1) and PropertySourcesPlaceholderConfigurer (Spring 3.1). These classes implement the BeanFactoryPostProcessor interface, which enables them to manipulate the values within the Spring XML configuration file before the beans are initialized. So if you set the property ‘driverClassName’ to ${jdbc.driverClassName}, this variable will be replaced with the value stored under the key ‘jdbc.driverClassName’ in a properties file. Apart from properties files, a database table can also be a source of key-value pairs. Great, so just extend PropertySourcesPlaceholderConfigurer, have it read a table containing the key-value pairs, populate them, and we’re done! However, there’s a slight problem. If the DataSource bean also relies on values obtained from a properties file (e.g. JDBC URL, username, password), and, being good Springers, we inject this bean into the class extending PropertySourcesPlaceholderConfigurer, the bean container will fail to start up properly, because the ‘jdbc.driverClassName’ variable cannot be resolved. Strange, but true. The reason is that any bean injected into a BeanFactoryPostProcessor class triggers bean initialization BEFORE the BeanFactoryPostProcessor classes are run. You know, dependency injection… all depending beans have to be ready before being injected into the consumer. This creates a cyclic-dependency kind of thing: all dependencies in the XML configuration are resolved first, before the BeanFactoryPostProcessor classes are run. So, how to go about this? Well, there’s a trick you can employ. A BeanFactoryPostProcessor class has access to the ConfigurableListableBeanFactory object via the ‘postProcessBeanFactory’ method. From this object, you can do a ‘getBean’ and get a reference to any bean by its id.
And guess what: you can get the vaunted DataSource bean without triggering premature bean initialization. Let’s say there’s a table ‘sys_param’ with the following data:

PARAM_CD        PARAM_VALUE
--------------  --------------
service.charge  1.5
rebate.amount   15.99
smtp.ip

DbPropertySourcesPlaceholderConfigurer is shown here:

package org.gizmo.labs.utils.spring;

import javax.sql.DataSource;

import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;

public class DbPropertySourcesPlaceholderConfigurer extends PropertySourcesPlaceholderConfigurer {

  @Override
  public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) throws BeansException {
    DataSource dataSource = beanFactory.getBean(DataSource.class);
    DbProperties dbProps = new DbProperties(dataSource);
    setProperties(dbProps);
    super.postProcessBeanFactory(beanFactory);
  }
}

The DbProperties class makes use of the DataSource reference and queries the database to get the key-value pairs:

package org.gizmo.labs.utils.spring;

import java.util.List;
import java.util.Map;
import java.util.Properties;

import javax.sql.DataSource;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.jdbc.core.JdbcTemplate;

public class DbProperties extends Properties {

  private final Logger logger = LoggerFactory.getLogger(DbProperties.class);
  private static final long serialVersionUID = 1L;

  public DbProperties(DataSource dataSource) {
    super();
    JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
    List<Map<String, Object>> l = jdbcTemplate.queryForList("select param_cd, param_value from sys_param");

    for (Map<String, Object> m : l) {
      logger.debug("Loading from DB: [{}:{}]", m.get("PARAM_CD"), m.get("PARAM_VALUE"));
      setProperty(m.get("PARAM_CD").toString(), m.get("PARAM_VALUE").toString());
    }
  }
}

To demonstrate that the values from the table are properly injected, here’s the class
which acts as the consumer:

package org.gizmo.labs.utils.spring;

import java.math.BigDecimal;

import org.apache.commons.lang.builder.ReflectionToStringBuilder;
import org.apache.commons.lang.builder.ToStringStyle;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.InitializingBean;

public class DbPropConsumer implements InitializingBean {

  private final Logger logger = LoggerFactory.getLogger(DbPropConsumer.class);

  private BigDecimal serviceCharge;
  private double rebateAmount;
  private String smtpIp;

  @Override
  public void afterPropertiesSet() throws Exception {
    logger.debug("I have consumed: {}", this);
  }

  public String toString() {
    return ReflectionToStringBuilder.toString(this, ToStringStyle.MULTI_LINE_STYLE);
  }

  public BigDecimal getServiceCharge() { return serviceCharge; }

  public void setServiceCharge(BigDecimal serviceCharge) { this.serviceCharge = serviceCharge; }

  public double getRebateAmount() { return rebateAmount; }

  public void setRebateAmount(double rebateAmount) { this.rebateAmount = rebateAmount; }

  public String getSmtpIp() { return smtpIp; }

  public void setSmtpIp(String smtpIp) { this.smtpIp = smtpIp; }
}

Last but not least, the Spring configuration (DataSource bean not shown, simplified for clarity) wires everything up: a standard placeholder configurer reading classpath:system.properties, the DbPropertySourcesPlaceholderConfigurer, and the DbPropConsumer bean. The first two bean definitions are the BeanFactoryPostProcessor classes, and to ensure the first one is run first, the ‘order’ property is set (lower means higher precedence). For the DbPropertySourcesPlaceholderConfigurer, a different placeholder prefix and suffix is used for clarity (notice the placeholders for DbPropConsumer).
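The configuration described can be sketched roughly as follows. The bean ids and the %{…}% prefix/suffix below are assumptions for illustration; only the classpath:system.properties location and the ‘order’ trick come from the description above:

```xml
<!-- Sketch only: bean ids and the %{...}% prefix/suffix are invented for illustration. -->

<!-- Standard configurer, run first (lower order = higher precedence) -->
<bean class="org.springframework.context.support.PropertySourcesPlaceholderConfigurer">
  <property name="location" value="classpath:system.properties"/>
  <property name="order" value="1"/>
</bean>

<!-- DB-backed configurer with a distinct placeholder prefix/suffix -->
<bean class="org.gizmo.labs.utils.spring.DbPropertySourcesPlaceholderConfigurer">
  <property name="placeholderPrefix" value="%{"/>
  <property name="placeholderSuffix" value="}%"/>
  <property name="order" value="2"/>
</bean>

<!-- Consumer of the DB-sourced values -->
<bean id="dbPropConsumer" class="org.gizmo.labs.utils.spring.DbPropConsumer">
  <property name="serviceCharge" value="%{service.charge}%"/>
  <property name="rebateAmount" value="%{rebate.amount}%"/>
  <property name="smtpIp" value="%{smtp.ip}%"/>
</bean>
```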
So, upon Spring container startup, you should see output similar to the following:

2012-09-18 00:03:14, DEBUG, org.gizmo.labs.utils.spring.DbProperties, Loading from DB: [service.charge:1.5]
2012-09-18 00:03:14, DEBUG, org.gizmo.labs.utils.spring.DbProperties, Loading from DB: [rebate.amount:15.99]
2012-09-18 00:03:14, DEBUG, org.gizmo.labs.utils.spring.DbProperties, Loading from DB: [smtp.ip:]
2012-09-18 00:03:14, DEBUG, org.gizmo.labs.utils.spring.DbPropConsumer, I have consumed: org.gizmo.labs.utils.spring.DbPropConsumer@189b939[
  logger=Logger[org.gizmo.labs.utils.spring.DbPropConsumer]
  serviceCharge=1.5
  rebateAmount=15.99
  smtpIp=
]

Reference: Spring 3.1 – Loading Properties For XML Configuration From Database from our JCG partner Allen Julia at the YK’s Workshop blog. ...

Let’s turn packages into a module system!

Many projects are divided into modules/subprojects using the build system (Maven, Gradle, SBT …); and writing modular code is generally a Good Thing. Dividing the code into build modules is mainly used for:

- isolating parts of code (decreasing coupling)
- api/impl splits
- adding a third-party dependency only to a specific part of the code
- grouping code with similar functionality
- statically checking that code in one module only uses code from its dependent modules (inter-module dependencies)

While some may say that it is also useful for separate compilation, I don’t think that matters a lot (when considering one project). The build tools are pretty smart nowadays at figuring out what needs to be recompiled.

Problems with build modules

I think there are several problems with this approach. First of all, it is pretty hard to decide when a piece of functionality is “big enough” to turn into a build module. Is a handful of classes enough? Or do you need more? Should it strictly be one functionality per module? But that would cause a module explosion; and so on. At least in the projects I took part in, how coarse-grained the build modules should be was a common theme of discussion. Secondly, build modules are pretty “heavy”. Maven is the worst, I suppose: you need a large piece of XML to create a module, with lots of boilerplate (for example repeated group id, version number, parent definition); SBT and Gradle are much better, but still, it is a significant effort. A separate directory needs to be created, along with the whole directory structure (src/main/..., src/test/...), the build config updated, etc. Overall it is quite a hassle. And then, quite often, when we have our beautiful modules separated, it turns out that in order for two of them to cooperate, we need a “common” part.
Then we either end up with a bloated foo-common module, which contains loads of unrelated classes, or multiple small foo-foomodule-common modules; the second solution is fine of course, except for the time wasted setting it up. Finally, a build module is an additional thing you have to name; most probably the package name and the class name already reflect what the code is doing, and now it also needs to be repeated in the build module name (a DRY violation). All in all, I think creating build modules is much too hard and time-consuming. Programmers are lazy (which, of course, is a good thing), and this leads to designs which are not as clean as they could be. Time to change that :). (See also my earlier blog on modules.)

Packages

Java, Scala and Groovy already have a system for grouping code: packages. However, currently a package is just a string identifier. Except for some very limited visibility options (package-private in Java, package-scoping in Scala), packages have no semantic meaning. So we have several levels of grouping code:

1. Project
2. Build module
3. Package
4. Class

What if we merged 2. and 3. together; why shouldn’t packages be used for creating modules?

Packages as modules?

Let’s see what it would take to extend packages to be modules. Obviously the first thing we’d need is to associate some meta-data with each module. There are already some mechanisms for this (e.g. via annotations in package-info.java), or this could be an extension of package objects in Scala – some traits to mix in, or vals to override. What kind of meta-data? Of course we don’t want to move the whole build definition into the packages. But let’s separate concerns – the build definition should define how to build the project, not what the module dependencies are. So the first thing to define in a module’s meta-data would be dependencies on third-party libraries. Such definitions could be just symbols, which would be bound to concrete versions in the build definition.
For example, we would specify that package foo.bar.dao depends on the “jpa” libraries. The build definition would then contain a mapping from “jpa” to a list of Maven artifacts (e.g. hibernate-core, hibernate-entitymanager etc.). Moreover, it would probably make most sense if such dependencies were transitive to sub-packages. So defining a global library would mean adding a dependency on the root package. As a side note, with an extension of Scala’s package objects, this could even be made type-safe. The package objects could implement a trait, where one of the values to override would be the list of third-party dependency symbols. The symbols themselves could e.g. be contained in an Enumeration defined in the root package, which would make things like “find all modules dependent on jpa” a simple usage-search in the IDE. The second step is to define inter-module dependencies using this mechanism as well. It would be possible, in the package’s meta-data, to define a list of other packages from which code is visible. This follows how build modules are currently used: each contains a list of project modules which can be accessed. (Another Scala side-note: as the package objects would implement a trait, this would mean defining a list of objects of a given type.) Taking this further, we could specify api- and impl-type packages. Api-type packages would by default be accessible from other packages. Impl-type packages, on the other hand, couldn’t be accessed without explicitly specifying them as a dependency. How could it look in practice? A very rough sketch in Scala:

package foo.user

// Even without a definition, each package has an implicit package object
// implementing a PackageModule trait ...
package object dao {
  // ... which is used here. The type of the val below is List[PackageModule].
  override val moduleDependsOn = List(foo.security, foo.user.model)
  override val moduleType = ModuleType.API
  // The FooLibs enum is defined in a top-level package or the build system
  override val moduleLibraries = List(FooLibs.JPA)
}

Refactoring

Refactoring is an everyday activity; refactoring modules, however, is usually a huge task, approached only once in a while. Should it be so? If packages were extended to modules, refactoring modules would be the same as moving around and renaming packages, plus updating the meta-data. It would be much easier than it is today, which I think would lead to better overall designs.

Build system

The above would obviously mean more work for the build system – it would have a harder time figuring out the list of modules, the build order, the list of artifacts to create, etc. (by the way, whether a separate jar should be created for a package could also be part of the meta-data). Some validations would also be needed – for circular dependencies, or for trying to constrain visibility in a wrong way. But then, people have written more complicated software than that.

Jigsaw?

You could say that this overlaps with project Jigsaw, which will come in Java 9 (or not). However, I think Jigsaw aims at a different scale: project-level modules. So one Jigsaw module would be your whole project, while you would have multiple (tens of) package-modules. The name “module” is overloaded here; maybe “mini-modules” would be better, or, very modestly, “packages done right”.

Bottom line

I think that currently the way to define build modules is way too hard and constraining. On the other hand, lifting packages to modules would be very lightweight. Defining a new module would be the same as creating a new package – it couldn’t get much simpler. Third-party libraries could easily be added only where needed. There would be one less thing to name. And there would be one source tree per project.
Also, such an approach would be scalable and adjustable to the project’s needs. It would be possible to define fine-grained or coarse-grained modules without much effort. Or even better, why not create both – modules could be nested and built one on top of the other. Now … the only problem is implementing it, and adding IDE support ;)   Reference: Let’s turn packages into a module system! from our JCG partner Adam Warski at the Blog of Adam Warski blog. ...
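The package-info.java route mentioned above can already be prototyped with a plain annotation. The sketch below is purely illustrative: ModuleInfo, its attributes and the dependency strings are all invented here, and a real implementation would need build-tool support to actually enforce them. (In a real prototype the annotation would sit on a package declaration in package-info.java; a nested class stands in for a package so the sketch is self-contained.)

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.Arrays;

public class ModuleInfoSketch {

    // Hypothetical module meta-data; ElementType.PACKAGE allows use in package-info.java.
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.PACKAGE, ElementType.TYPE})
    @interface ModuleInfo {
        String[] dependsOn() default {};  // inter-module dependencies
        String[] libraries() default {};  // third-party dependency symbols, e.g. "jpa"
        String type() default "IMPL";     // "API" or "IMPL"
    }

    // Stand-in for the foo.user.dao package from the Scala sketch.
    @ModuleInfo(dependsOn = {"foo.security", "foo.user.model"},
                libraries = {"jpa"},
                type = "API")
    static class FooUserDao {}

    public static void main(String[] args) {
        // A build tool (or an architecture test) could read the meta-data reflectively:
        ModuleInfo info = FooUserDao.class.getAnnotation(ModuleInfo.class);
        System.out.println(Arrays.toString(info.libraries()) + " " + info.type()); // prints: [jpa] API
    }
}
```

“Find all modules dependent on jpa” then becomes a classpath scan for ModuleInfo annotations, or simply a usage-search in the IDE.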

Bash’ing your git deployment

Chuck Norris deploys after every commit. Smart men deploy after every successful build on their Continuous Integration server. Educated men deploy code directly from their distributed version control systems. I, being none of these, had to write my deployment script in bash. We’re using git, and while doing so I wanted us to:

- deploy from the working copy, but…
- make sure that you can deploy only if you committed everything
- make sure that you can deploy only if you pushed everything upstream
- tag the deployed hash
- display a changelog (all the commits between the two last tags)

Here are some bash procedures I wrote on the way, if you need them:

make sure that you can deploy only if you committed everything

verifyEverythingIsCommited() {
  gitCommitStatus=$(git status --porcelain)
  if [ "$gitCommitStatus" != "" ]; then
    echo "You have uncommitted files."
    echo "Your git status:"
    echo "$gitCommitStatus"
    echo "Sorry. Rules are rules. Aborting!"
    exit 1
  fi
}

make sure that you can deploy only if you pushed everything upstream

verifyEverythingIsPushedToOrigin() {
  gitPushStatus=$(git cherry -v)
  if [ "$gitPushStatus" != "" ]; then
    echo "You have local commits that were NOT pushed."
    echo "Your 'git cherry -v' status:"
    echo "$gitPushStatus"
    echo "Sorry. Rules are rules. Aborting!"
    exit 1
  fi
}

tag the deployed hash

Notice: my script takes the first parameter as the name of the server to deploy to (this is the $1 passed to this procedure). Also notice that ‘git push’ without ‘--tags’ does not push your tags.

tagLastCommit() {
  d=$(date '+%y-%m-%d_%H-%M-%S')
  git tag "$1_$d"
  git push --tags
}

This creates nice looking tags like these:

preprod_12-01-11_15-16-24
prod_12-01-12_10-51-33
test_12-01-11_15-11-10
test_12-01-11_15-53-42

display changelog (all the commits between two last tags)

printChangelog() {
  echo "This is the changelog since the last deploy. Send it to the client."
  twoLastHashesInOneLine=$(git show-ref --tags -s | tail -n 2 | tr '\n' '-')
  twoLastHashesInOneLineWithThreeDots=${twoLastHashesInOneLine/-/...}
  twoLastHashesInOneLineWithThreeDotsNoMinusAtTheEnd=$(echo $twoLastHashesInOneLineWithThreeDots | sed 's/-$//')
  git log --pretty=oneline --no-merges --abbrev-commit $twoLastHashesInOneLineWithThreeDotsNoMinusAtTheEnd
}

The last command gives you a nice log like this:

e755c63 deploy: fix for showing changelog from two first tags instead of two last ones
926eb02 pringing changelog between last two tags on deployment
34478b2 added git tagging to deploy

Reference: Bash’ing your git deployment from our JCG partner Jakub Nabrdalik at the Solid Craft blog. ...
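The three intermediate variables in printChangelog all exist just to turn two hashes into an `old...new` revision range. The same construction, demonstrated here on fixed stand-in hashes so it runs without a git repository:

```shell
# Stand-in for: git show-ref --tags -s | tail -n 2
# (the two newest tag hashes, oldest first, one per line)
hashes='e755c63
926eb02'

old=$(echo "$hashes" | head -n 1)   # older tag hash
new=$(echo "$hashes" | tail -n 1)   # newer tag hash
range="$old...$new"                 # the revision-range form that git log accepts

echo "$range"                       # prints: e755c63...926eb02
```

With real tags in place, `git log --pretty=oneline --no-merges --abbrev-commit "$range"` produces the same changelog as the original one-liner chain.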
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.